
Contents

Azure SQL Documentation


Azure SQL
What is Azure SQL?
Migrate to Azure SQL
Shared SQL DB & SQL MI docs
Billing options
vCore purchasing model
Azure Hybrid Benefit
Reserved capacity
Service tiers
General Purpose
Business Critical
Shared concepts
Feature comparison
Multi-model features
In-memory OLTP
Temporal tables
Scale up / down
Read Scale-Out
Distributed transactions
Scheduled maintenance
Maintenance window
Configure maintenance window
Maintenance window FAQ
Advance notifications
Security
Overview
Best practices
Security controls by Azure Policy
Security baseline
Always Encrypted
Microsoft Defender for SQL
Advanced Threat Protection
Data discovery and classification
Dynamic data masking
SQL Vulnerability Assessment
SQL Vulnerability Assessment
Vulnerability Assessment rules
Vulnerability Assessment rules changelog
Store vulnerability scans in storage
Logins, user accounts, roles, and permissions
Azure AD Authentication
Configure Azure AD auth
Multi-factor Azure AD auth
Configure multi-factor auth
Conditional Access
Server principals (logins)
Service principals (Applications)
Directory Readers role
Azure AD-only authentication
Azure Policy for Azure AD-only authentication
User-assigned managed identity
Transparent Data Encryption (TDE)
Overview
Bring Your Own Key (BYOK)
Managed identities with BYOK
Business continuity
Overview
High availability
Backup and recovery
Automated backups
Accelerated database recovery
Recovery using backups
Long-term backup retention
Monitor and tune
Documentation
Overview
Intelligent Insights
SQL Analytics
SQL Insights (preview)
Overview
Enable
Alerts
Troubleshoot
Automatic tuning
In-memory OLTP
Extended events
Extended events
Extended events - event file
Extended events - ring buffer
Shared how-to's
Connect and query from apps
.NET with Visual Studio
.NET Core
Go
Node.js
PHP
Python
Ruby
Business continuity
Configure temporal retention policy
Configure backup retention using Azure Blob storage
Security
Azure AD Authentication
Create Azure AD guest users and set as an Azure AD admin
Assign Directory Readers role to groups
Enable Azure AD-only authentication
Enforce Azure AD-only authentication using Azure Policy
Create server with Azure AD-only authentication enabled
Create and utilize Azure AD server logins
Configure TDE with BYOK
Always Encrypted
Use the Azure key vault
Use the certificate store
Monitor & tune
Identify query performance issues
Troubleshoot performance issues
Batching for performance
Load data with BCP
Application and database tuning guidance
Use DMVs to monitor performance
Log diagnostic telemetry
Azure Monitor for SQL Database
Azure Monitor for SQL Database reference
In-memory OLTP
Configure In-Memory OLTP
Try in-memory features
Monitor In-memory OLTP space
Load and move data
Import a database from a BACPAC file
Export a database to a BACPAC file
Move resources to a new region
Load data with ADF
Develop data applications
Overview
Working with JSON data
Use Spark Connector
Use ASP.NET App Service
Use Azure Functions
Use Azure Logic Apps
Index with Azure Cognitive Search
Server-side CLR/.NET integration
Java
Use Java and JDBC
Use Spring Data JDBC
Use Spring Data JPA
Use Spring Data R2DBC
SQL Database (SQL DB)
Documentation
Overview
What is SQL Database?
What's new?
Try for free
Quickstarts
Create database
Azure portal, PowerShell, Az CLI
Hyperscale
Bicep
ARM template
With ledger and digest storage
With user-assigned managed identity
Configure
Server-level IP firewall rules
Database project in a local dev environment
GitHub Actions
Tutorials
Design a database
Design database using SSMS
Design database using .NET
Business continuity
Add db to failover group
Add pool to failover group
Configure security for replicas
Geo-distributed application
Active geo-replication
Security
Always Encrypted with secure enclaves
Configure security
Create users using service principals
Rotate TDE BYOK keys
Remove TDE protector
Move data
Migrate using DMS
Set up SQL Data Sync
Migrate SQLite to serverless
Scale out
Configure security for named replicas
Concepts
Single databases
Elastic pools
Logical servers
Serverless
Hyperscale
Overview
Hyperscale architecture
FAQ
Replicas
Replicas FAQ
Purchasing models
Overview
vCore model
DTU model
Connectivity
Connectivity architecture
Connectivity settings
Local development
Local dev experience
The Azure SQL DB emulator
SQL Database Projects extension >>
The mssql extension >>
Security
Always Encrypted
Always Encrypted with secure enclaves
Configure and use Always Encrypted with secure enclaves >
Plan for Intel SGX enclaves and attestation
Enable Intel SGX
Configure Azure Attestation
Azure SQL Auditing
Audit log format
DNS aliases
Ledger
Ledger
Ledger overview
Database ledger
Updatable ledger tables
Append-only ledger tables
Digest management
Database verification
Ledger limitations
Network access controls
Outbound firewall rules
Private Link
VNet endpoints
Server roles
Business continuity
Active geo-replication
Auto-failover groups
Outage recovery guidance
Recovery drills
SQL Data Sync
Overview
Data Sync Agent
Best practices for Data Sync
Troubleshoot Data Sync
Database sharding
Database sharding
Elastic transactions
Elastic queries
Elastic client library
Shard maps
Query routing
Manage credentials
Move sharded data
Elastic tools FAQ
Glossary
Resource limits
Logical server limits
Single database resources
vCore resource limits
DTU resource limits
Elastic pool resources
vCore resource limits
DTU resource limits
Migration guides
From Access
From Db2
From Oracle
From MySQL
From SAP ASE
From SQL Server
Overview
Migrate
Assessment rules
How to
T-SQL differences
Plan and manage costs
Connect and query
Connect and run ad-hoc queries
Azure Data Studio
SSMS
Azure portal
VS Code
Connect and query from apps
.NET with Active Directory MFA
Java
Java with Spring Data JDBC
Java with Spring Data JPA
Java with Spring Data R2DBC
Develop locally
Set up local dev environment
Create a database project
Publish a database project to the local emulator
Manage
Management API reference
DNS alias PowerShell
Manage file space
Use Resource Health for connectivity issues
Migrate DTU to vCore
Scale database resources
Scale pool resources
Manage pool resources
Resource management in dense elastic pools
Manage Hyperscale databases
Hyperscale performance diagnostics
Block T-SQL CRUD
Azure Automation
Elastic jobs (preview)
Job automation with elastic jobs
Configure jobs
Create and manage (PowerShell)
Create and manage (T-SQL)
Migrate (from old Elastic jobs)
Secure
Audit to storage account behind VNet or firewall
Configure threat detection
Configure dynamic data masking
Create server configured with UMI and customer-managed TDE
IP-based firewall
Ledger
Create append-only ledger tables
Create updatable ledger tables
Convert regular tables into ledger tables
How to configure automatic database digests
Verify ledger database for tampering
vNet endpoints - PowerShell
Business continuity
Configure backup retention using Azure Blob storage
Create auto-failover group
Configure geo-replication - Portal
Configure security for geo-replicas
Performance
Use Query Performance Insights
Enable automatic tuning
Enable e-mail notifications for automatic tuning
Apply performance recommendations
Create alerts
Implement database advisor recommendations
Stream data with Stream Analytics
Diagnose and troubleshoot high CPU
Understand and resolve blocking
Analyze and prevent deadlocks
Configure the max degree of parallelism (MAXDOP)
Load and move data
Migrate to SQL Database
Manage SQL Database after migration
Import/export (allow Azure services disabled)
Import/export using Private endpoints
Import a database from a BACPAC file
Copy a database within Azure
Replicate to SQL Database
Replicate schema changes (Data sync)
Database sharding
Upgrade client library
Create sharded app
Query horizontally-sharded data
Multi-shard queries
Move sharded data
Security configuration
Add a shard
Fix shard map problems
Migrate sharded database
Create counters
Use entity framework
Use Dapper framework
Query distributed data
Query vertically partitioned data
Report across scaled-out data tier
Query across tables with different schemas
Design data applications
Authenticate app
Design for disaster recovery
Design for elastic pools
Design for app upgrades
C and C++
Excel
Ports - ADO.NET
Multi-tenant SaaS
SaaS design patterns
SaaS video indexer
SaaS app security
Multi-tenant SaaS sample application
Wingtip Tickets sample
General guidance
Single application
Database per tenant
Disaster recovery using geo-restore
Disaster recovery using database geo-replication
Multi-tenant database
Deploy example app
Provision tenants
Monitor database performance
Run ad-hoc queries
Manage tenant schema
ETL for analytics
Samples
Azure CLI
Samples overview
Create databases
Create single database
Create pooled database
Scale databases
Scale single database
Scale pooled database
Configure geo-replication
Single database
Pooled database
Configure failover group
Failover group
Single database
Pooled database
Database back up, restore, copy, and import
Back up a database
Restore a database
Copy a database to a new server
Import a database from a BACPAC file
Azure PowerShell
Azure Resource Manager
Code samples
Azure Resource Graph queries
SQL Managed Instance (SQL MI)
Documentation
Overview
What is SQL Managed Instance?
What's new?
Resource limits
vCore purchasing model
Frequently asked questions
Quickstarts
Create SQL Managed Instance
Azure portal
PowerShell
Bicep
ARM template
Create instance pools
With user-assigned managed identity
Configure
Service-aided subnet configuration
Public endpoint
Minimal TLS version
Client VM connection
Point-to-site connection
Long-term backup retention
Load data
Restore sample database
Tutorials
Migrate using DMS
Configure security
Add instance to failover group
Migrate on-premises users and groups
Transactional replication
MI pub to MI sub
MI pub, MI dist, SQL sub
Migration guides
From Db2
From Oracle
From SQL Server
Overview
Migrate
Performance baseline
Assessment rules
Concepts
Connectivity architecture
Auto-failover groups
T-SQL differences
Transactional replication
Managed Instance link
Instance pools
Data virtualization
Management operations
Overview
Monitor operations
Cancel operations
API reference
Machine Learning Services
Overview
Key differences
Quickstarts
Python
Run Python scripts
Data structures and objects
Python functions
Train and score a model
Deploy ONNX models
R
Run R scripts
Data types and objects
R functions
Train and score a model
Tutorials
Python
Ski rental (linear regression)
Categorize customers (k-means clustering)
NYC taxi tips (classification)
R
Ski rental (decision tree)
Categorize customers (k-means clustering)
NYC taxi tips (classification)
How to
Data exploration and modeling
Python
Data type conversions
Python to SQL
R to SQL
Deploy
Operationalize using stored procedures
Convert R code for SQL Server
Create a stored procedure using sqlrutils
Predictions
Native scoring with PREDICT T-SQL
Package management
Install new Python packages
Install new R packages
Administration
Monitor
Security
Give users permission
Features
Linked servers
Service Broker
Database mail
Security
Always Encrypted
Auditing
Secure public endpoints
Server trust groups
Windows Auth for Azure AD Principals
Overview
Implementation with Kerberos
Setup summary
Set up the modern interactive flow
Set up the incoming trust-based flow
Set up managed instances
Run a trace using Windows Auth
Troubleshoot
How to
How-to documentation
Connect applications
Job automation with SQL Agent
Configure settings
Customize time zone
Configure connection types
Create alerts on SQL MI
Configure threat detection
Configure networking
Determine size of SQL MI subnet
Create new VNet and subnet for SQL MI
Configure existing VNet and subnet for SQL MI
Configure service endpoint policies for SQL MI
Move SQL MI to another subnet
Delete subnet after deleting SQL MI
Configure custom DNS
Sync DNS configuration
Find management endpoint IP address
Verify built-in firewall protection
Migrate
Database using Log Replay Service
TDE certificate
Managed Instance link
Prepare environment for link
SQL Server 2016 prerequisites
Replicate databases in SSMS
Replicate databases with scripts
Failover databases in SSMS
Failover databases with scripts
Best practices
Configure business continuity
Restore to a point in time
Monitor back up
Auto-failover groups
Create failover group
Manually initiate a failover
Samples
Azure CLI
Samples overview
Create SQL Managed Instance
Configure transparent data encryption (TDE)
Restore geo-backup
Azure PowerShell
Samples overview
Configure transparent data encryption (TDE)
Azure Resource Manager
Code samples
SQL Server on Azure VMs
Documentation
Documentation
What's new?
Windows
Overview
What is a SQL Server VM?
SQL IaaS Agent extension
Quickstarts
Portal
PowerShell
Bicep
ARM template
Concepts
Business continuity
Overview
Backup and restore
Azure Storage for backup
Availability group (AG)
Failover cluster instance (FCI)
Windows Server Failover Cluster
Best practices
Quick checklist
VM size
Storage
Security
HADR configuration
Application patterns
Collect baseline
Management
Dedicated host
Extend support for SQL Server
How-to guides
Connect to SQL Server VM
Create SQL Server VM
Use the portal
Use Azure PowerShell
Manage
With the Azure portal
License model
Change edition
Change version
Storage
Automated Patching
SQL best practices assessment
Azure Key Vault Integration
Migrate storage to UltraSSD
SQL IaaS Agent extension
Automatic registration
Register single VM
Bulk register multiple VMs
Migrate
SQL Server database to VM
VM to a new region
Business continuity
Configure cluster quorum
Backup and restore
Automated backup (SQL 2016+)
Automated backup (SQL 2014)
Availability group (AG)
Configure AG (multi-subnet)
Configure AG (single subnet)
Failover cluster instance (FCI)
Prepare VM for FCI
Create FCI
Configure connectivity (Single subnet)
Reference
Azure PowerShell
Azure CLI
T-SQL
SQL Server Drivers
REST
Azure Policy built-ins
Resources
FAQ
Pricing
Archived classic RM docs
SQL Server Data Tools (SSDT)
SQL Server Management Studio (SSMS)
SQL Server Tools
Azure Roadmap
MSDN forum
Stack Overflow
Linux
Overview
About Linux SQL Server VMs
Quickstarts
Create SQL VM - Portal
Concepts
SQL IaaS agent extension
How-to guides
Register with SQL IaaS extension
Tutorials
Setting up Azure RHEL VM availability group with STONITH
Configure availability group listener for SQL Server on RHEL virtual machines in Azure
Setup Always On availability group with DH2i DxEnterprise
Resources
FAQ
SQL Server on Linux Documentation
SQL Server Data Tools (SSDT)
SQL Server Tools
Azure Roadmap
Stack Overflow
Migration guides
From Db2
From Oracle
From SQL Server
Overview
Migration guide
Availability group (AG)
Failover cluster instance (FCI)
Using distributed AG
Prerequisites
Standalone instance
Availability group
Complete migration
Reference
Azure SQL glossary of terms
T-SQL language reference
Azure CLI
Azure PowerShell
.NET
Java
REST
Resource Manager templates for SQL
SQL tools
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
BCP
SQLCMD
SqlPackage
SQL Database Management Library package
SQL connection drivers
SQL Server drivers
ADO.NET
JDBC
Node.js
ODBC
PHP
Python
Ruby
Azure Policy built-ins
DTU benchmark
Resources
Build your skills with Microsoft Learn
SQL Server Blog
Microsoft Azure Blog
Azure Roadmap
Public data sets
Pricing
MSDN forum
Stack Overflow
Troubleshoot
Known issues with SQL Managed Instance
Capacity errors during deployment
Connectivity errors
Common connection issues
Troubleshoot out of memory errors
Import/Export service hangs
Transaction log errors
Request quota increases
Service updates
SSL root certificate expiring
Gateway IP address updates
Periodic maintenance events
Videos
Service updates
Architecture center
Customer stories
What is Azure SQL?

APPLIES TO: Azure SQL Database Azure SQL Managed Instance SQL Server on Azure VM
Azure SQL is a family of managed, secure, and intelligent products that use the SQL Server database engine in
the Azure cloud.
Azure SQL Database: Support modern cloud applications on an intelligent, managed database service that
includes serverless compute.
Azure SQL Managed Instance: Modernize your existing SQL Server applications at scale with an
intelligent, fully managed instance as a service, with almost 100% feature parity with the SQL Server
database engine. Best for most migrations to the cloud.
SQL Server on Azure VMs: Lift and shift your SQL Server workloads with ease and maintain 100% SQL
Server compatibility and operating system-level access.
Azure SQL is built upon the familiar SQL Server engine, so you can migrate applications with ease and continue
to use the tools, languages, and resources you're familiar with. Your skills and experience transfer to the cloud,
so you can do even more with what you already have.
Learn how each product fits into Microsoft's Azure SQL data platform to match the right option for your
business requirements. Whether you prioritize cost savings or minimal administration, this article can help you
decide which approach delivers against the business requirements you care about most.
If you're new to Azure SQL, check out the What is Azure SQL video from our in-depth Azure SQL video series:

Overview
In today's data-driven world, driving digital transformation increasingly depends on our ability to manage
massive amounts of data and harness its potential. But today's data estates are increasingly complex, with data
hosted on-premises, in the cloud, or at the edge of the network. Developers who are building intelligent and
immersive applications can find themselves constrained by limitations that can ultimately impact their
experience. Limitations arising from incompatible platforms, inadequate data security, insufficient resources and
price-performance barriers create complexity that can inhibit app modernization and development.
One of the first things to understand in any discussion of Azure versus on-premises SQL Server databases is
that you can use it all. Microsoft's data platform leverages SQL Server technology and makes it available across
physical on-premises machines, private cloud environments, third-party hosted private cloud environments, and
the public cloud.
Fully managed and always up to date
Spend more time innovating and less time patching, updating, and backing up your databases. Azure is the only
cloud with evergreen SQL that automatically applies the latest updates and patches so that your databases are
always up to date—eliminating end-of-support hassle. Even complex tasks like performance tuning, high
availability, disaster recovery, and backups are automated, freeing you to focus on applications.
Protect your data with built-in intelligent security
Azure constantly monitors your data for threats. With Azure SQL, you can:
Remediate potential threats in real time with intelligent advanced threat detection and proactive vulnerability
assessment alerts.
Get industry-leading, multi-layered protection with built-in security controls including T-SQL, authentication,
networking, and key management.
Take advantage of the most comprehensive compliance coverage of any cloud database service.
Business motivations
There are several factors that can influence your decision to choose between the different data offerings:
Cost: Both platform as a service (PaaS) and infrastructure as a service (IaaS) options include a base price that
covers the underlying infrastructure and licensing. However, with the IaaS option you need to invest additional
time and resources to manage your database, while with PaaS these administration features are included in the
price. IaaS enables you to shut down resources while you are not using them to decrease the cost, while
PaaS resources are always running unless you drop and re-create them when they are needed.
Administration: PaaS options reduce the amount of time that you need to invest to administer the database.
However, they also limit the range of custom administration tasks and scripts that you can perform or run. For
example, the CLR is not supported with SQL Database, but is supported for an instance of SQL Managed
Instance. Also, no deployment options in PaaS support the use of trace flags.
Service-level agreement: Both IaaS and PaaS provide high, industry-standard SLAs. The PaaS options guarantee
a 99.99% SLA, while IaaS guarantees a 99.95% SLA for the infrastructure only, meaning that you need to
implement additional mechanisms to ensure the availability of your databases. You can attain a 99.99% SLA by
creating an additional SQL virtual machine and implementing the SQL Server Always On availability group
high availability solution.
Time to move to Azure: SQL Server on Azure VM is an exact match of your environment, so migration from
on-premises to an Azure VM is no different from moving databases from one on-premises server to
another. SQL Managed Instance also enables easy migration; however, there might be some changes that you
need to apply before your migration.

Service comparison

As seen in the diagram, each service offering can be characterized by the level of administration you have over
the infrastructure, and by the degree of cost efficiency.
In Azure, you can have your SQL Server workloads running as a hosted service (PaaS) or on a hosted infrastructure
(IaaS). Within PaaS, you have multiple product options, and service tiers within each option. The key question
that you need to ask when deciding between PaaS and IaaS is whether you want to manage your database, apply
patches, and take backups yourself, or delegate these operations to Azure.
Azure SQL Database
Azure SQL Database is a relational database-as-a-service (DBaaS) hosted in Azure that falls into the industry
category of Platform-as-a-Service (PaaS).
Best for modern cloud applications that want to use the latest stable SQL Server features and have time
constraints in development and marketing.
A fully managed SQL Server database engine, based on the latest stable Enterprise Edition of SQL Server.
SQL Database has two deployment options built on standardized hardware and software that is owned,
hosted, and maintained by Microsoft.
With SQL Server, you can use built-in features and functionality that require extensive configuration (either on-
premises or in an Azure virtual machine). When using SQL Database, you pay as you go, with options to scale
up or out for greater power with no interruption. SQL Database has some additional features that are not
available in SQL Server, such as built-in high availability, intelligence, and management.
Azure SQL Database offers the following deployment options:
As a single database with its own set of resources managed via a logical SQL server. A single database is
similar to a contained database in SQL Server. This option is optimized for modern application development
of new cloud-born applications. Hyperscale and serverless options are available.
An elastic pool, which is a collection of databases with a shared set of resources managed via a logical server.
Single databases can be moved into and out of an elastic pool. This option is optimized for modern
application development of new cloud-born applications using the multi-tenant SaaS application pattern.
Elastic pools provide a cost-effective solution for managing the performance of multiple databases that have
variable usage patterns.
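For readers who prefer scripting over the portal, a minimal Azure CLI sketch of the two deployment options follows. It assumes an existing resource group and logical server; the resource names and the service objectives are placeholders for illustration, not sizing recommendations.

```azurecli
# Create a single General Purpose database with 2 vCores (placeholder names).
az sql db create \
  --resource-group myResourceGroup \
  --server myserver \
  --name mySingleDatabase \
  --service-objective GP_Gen5_2

# Create an elastic pool, then add a database that shares the pool's resources.
az sql elastic-pool create \
  --resource-group myResourceGroup \
  --server myserver \
  --name myElasticPool \
  --edition GeneralPurpose \
  --family Gen5 \
  --capacity 4

az sql db create \
  --resource-group myResourceGroup \
  --server myserver \
  --name myPooledDatabase \
  --elastic-pool myElasticPool
```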
Azure SQL Managed Instance
Azure SQL Managed Instance falls into the industry category of Platform-as-a-Service (PaaS), and is best for
most migrations to the cloud. SQL Managed Instance is a collection of system and user databases with a shared
set of resources that is lift-and-shift ready.
Best for new applications or existing on-premises applications that want to use the latest stable SQL Server
features and that are migrated to the cloud with minimal changes. An instance of SQL Managed Instance is
similar to an instance of the Microsoft SQL Server database engine offering shared resources for databases
and additional instance-scoped features.
SQL Managed Instance supports database migration from on-premises with minimal to no database change.
This option provides all of the PaaS benefits of Azure SQL Database but adds capabilities that were
previously only available in SQL Server VMs. This includes a native virtual network and near 100%
compatibility with on-premises SQL Server. Instances of SQL Managed Instance provide full SQL Server
access and feature compatibility for migrating SQL Servers to Azure.
SQL Server on Azure VM
SQL Server on Azure VM falls into the industry category Infrastructure-as-a-Service (IaaS) and allows you to
run SQL Server inside a fully managed virtual machine (VM) in Azure.
SQL Server installed and hosted in the cloud runs on Windows Server or Linux virtual machines running on
Azure, also known as an infrastructure as a service (IaaS). SQL virtual machines are a good option for
migrating on-premises SQL Server databases and applications without any database change. All recent
versions and editions of SQL Server are available for installation in an IaaS virtual machine.
Best for migrations and applications requiring OS-level access. SQL virtual machines in Azure are lift-and-
shift ready for existing applications that require fast migration to the cloud with minimal changes or no
changes. SQL virtual machines offer full administrative control over the SQL Server instance and underlying
OS for migration to Azure.
The most significant difference from SQL Database and SQL Managed Instance is that SQL Server on Azure
Virtual Machines allows full control over the database engine. You can choose when to start
maintenance/patching, change the recovery model to simple or bulk-logged, pause or start the service when
needed, and you can fully customize the SQL Server database engine. With this additional control comes the
added responsibility to manage the virtual machine.
Rapid development and test scenarios when you do not want to buy on-premises non-production SQL
Server hardware. SQL virtual machines also run on standardized hardware that is owned, hosted, and
maintained by Microsoft. When using SQL virtual machines, you can either pay-as-you-go for a SQL Server
license already included in a SQL Server image or easily use an existing license. You can also stop or resume
the VM as needed.
Optimized for migrating existing applications to Azure or extending existing on-premises applications to the
cloud in hybrid deployments. In addition, you can use SQL Server in a virtual machine to develop and test
traditional SQL Server applications. With SQL virtual machines, you have the full administrative rights over a
dedicated SQL Server instance and a cloud-based VM. It is a perfect choice when an organization already has
IT resources available to maintain the virtual machines. These capabilities allow you to build a highly
customized system to address your application’s specific performance and availability requirements.
Comparison table
Additional differences are listed in the following table, but both SQL Database and SQL Managed Instance are
optimized to reduce overall management costs to a minimum for provisioning and managing many databases.
Ongoing administration costs are reduced since you do not have to manage any virtual machines, operating
system, or database software. You do not have to manage upgrades, high availability, or backups.
In general, SQL Database and SQL Managed Instance can dramatically increase the number of databases
managed by a single IT or development resource. Elastic pools also support SaaS multi-tenant application
architectures with features including tenant isolation and the ability to scale to reduce costs by sharing
resources across databases. SQL Managed Instance provides support for instance-scoped features enabling
easy migration of existing applications, as well as sharing resources among databases. In contrast, SQL Server on
Azure VMs provides DBAs with an experience most similar to the on-premises environment they're familiar with.

Azure SQL Database
Supports most on-premises database-level capabilities. The most commonly used SQL Server features are available.
99.995% availability guaranteed.
Built-in backups, patching, recovery.
Latest stable Database Engine version.
Ability to assign necessary resources (CPU/storage) to individual databases.
Built-in advanced intelligence and security.
Online change of resources (CPU/storage).
Migration from SQL Server might be challenging.
Some SQL Server features are not available.
Configurable maintenance windows.
Compatibility with the SQL Server version can be achieved only using database compatibility levels.
Private IP address support with Azure Private Link.
Databases of up to 100 TB.
On-premises applications can access data in Azure SQL Database.

Azure SQL Managed Instance
Supports almost all on-premises instance-level and database-level capabilities. High compatibility with SQL Server.
99.99% availability guaranteed.
Built-in backups, patching, recovery.
Latest stable Database Engine version.
Easy migration from SQL Server.
Private IP address within Azure Virtual Network.
Built-in advanced intelligence and security.
Online change of resources (CPU/storage).
There is still a minimal number of SQL Server features that are not available.
Configurable maintenance windows.
Compatibility with the SQL Server version can be achieved only using database compatibility levels.
Up to 16 TB.
Native virtual network implementation and connectivity to your on-premises environment using Azure ExpressRoute or VPN Gateway.

SQL Server on Azure VM
You have full control over the SQL Server engine. Supports all on-premises capabilities.
Up to 99.99% availability.
Full parity with the matching version of on-premises SQL Server.
Fixed, well-known Database Engine version.
Easy migration from SQL Server.
Private IP address within Azure Virtual Network.
You can deploy applications or services on the host where SQL Server is placed.
You may use manual or automated backups.
You need to implement your own high-availability solution.
There is downtime while changing resources (CPU/storage).
SQL Server instances with up to 256 TB of storage. The instance can support as many databases as needed.
With SQL virtual machines, you can have applications that run partly in the cloud and partly on-premises. For example, you can extend your on-premises network and Active Directory Domain to the cloud via Azure Virtual Network. For more information on hybrid cloud solutions, see Extending on-premises data solutions to the cloud.

Cost
Whether you're a startup that is strapped for cash, or a team in an established company that operates under
tight budget constraints, limited funding is often the primary driver when deciding how to host your databases.
In this section, you learn about the billing and licensing basics in Azure associated with the Azure SQL family of
services. You also learn about calculating the total application cost.
Billing and licensing basics
Currently, both SQL Database and SQL Managed Instance are sold as a service and are available with
several options and in several service tiers with different prices for resources, all of which are billed hourly at a
fixed rate based on the service tier and compute size you choose. For the latest information on the current
supported service tiers, compute sizes, and storage amounts, see DTU-based purchasing model for SQL
Database and vCore-based purchasing model for both SQL Database and SQL Managed Instance.
With SQL Database, you can choose a service tier that fits your needs from a wide range of prices starting
at $5/month for the Basic tier, and you can create elastic pools to share resources among databases to reduce
costs and accommodate usage spikes.
With SQL Managed Instance, you can also bring your own license. For more information on bring-your-own
licensing, see License Mobility through Software Assurance on Azure or use the Azure Hybrid Benefit
calculator to see how to save up to 40%.
In addition, you are billed for outgoing Internet traffic at regular data transfer rates. You can dynamically adjust
service tiers and compute sizes to match your application’s varied throughput needs.
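As a rough illustration of that flexibility, the Azure CLI sketch below scales an existing database and an existing elastic pool; the names and target sizes are placeholders rather than recommendations.

```azurecli
# Scale a single database to 8 vCores (placeholder names and sizes).
az sql db update \
  --resource-group myResourceGroup \
  --server myserver \
  --name mySingleDatabase \
  --service-objective GP_Gen5_8

# Scale an elastic pool to 8 vCores.
az sql elastic-pool update \
  --resource-group myResourceGroup \
  --server myserver \
  --name myElasticPool \
  --capacity 8
```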
With SQL Database and SQL Managed Instance, the database software is automatically configured, patched,
and upgraded by Azure, which reduces your administration costs. In addition, its built-in backup capabilities help
you achieve significant cost savings, especially when you have a large number of databases.
With SQL on Azure VMs, you can use any of the platform-provided SQL Server images (which include a
license) or bring your SQL Server license. All the supported SQL Server versions (2008R2, 2012, 2014, 2016,
2017, 2019) and editions (Developer, Express, Web, Standard, Enterprise) are available. In addition, Bring-Your-
Own-License versions (BYOL) of the images are available. When using the Azure provided images, the
operational cost depends on the VM size and the edition of SQL Server you choose. Regardless of VM size or
SQL Server edition, you pay per-minute licensing cost of SQL Server and the Windows or Linux Server, along
with the Azure Storage cost for the VM disks. The per-minute billing option allows you to use SQL Server for as
long as you need without buying additional SQL Server licenses. If you bring your own SQL Server license to
Azure, you are charged for server and storage costs only. For more information on bring-your-own licensing,
see License Mobility through Software Assurance on Azure. In addition, you are billed for outgoing Internet
traffic at regular data transfer rates.
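If you want to compare the pay-as-you-go image options from the command line, the sketch below lists the SQL Server marketplace images published by Microsoft and creates a VM from one of them. The image URN, VM name, and credentials shown are illustrative assumptions; confirm the exact URN from the image list before deploying.

```azurecli
# List SQL Server marketplace images; MicrosoftSQLServer is the publisher used for these images.
az vm image list --publisher MicrosoftSQLServer --all --output table

# Create a VM from one of the listed images (the URN below is only an example; replace it with one from the list).
az vm create \
  --resource-group myResourceGroup \
  --name mySqlVm \
  --image MicrosoftSQLServer:sql2019-ws2019:enterprise:latest \
  --admin-username azureuser \
  --admin-password '<strong-password>'
```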
Calculating the total application cost
When you start using a cloud platform, the cost of running your application includes the cost for new
development and ongoing administration costs, plus the public cloud platform service costs.
For more information on pricing, see the following resources:
SQL Database & SQL Managed Instance pricing
Virtual machine pricing for SQL and for Windows
Azure Pricing Calculator

Administration
For many businesses, the decision to transition to a cloud service is as much about offloading complexity of
administration as it is cost. With IaaS and PaaS, Azure administers the underlying infrastructure and
automatically replicates all data to provide disaster recovery, configures and upgrades the database software,
manages load balancing, and does transparent failover if there is a server failure within a data center.
With SQL Database and SQL Managed Instance, you can continue to administer your database, but you
no longer need to manage the database engine, the operating system, or the hardware. Examples of items
you can continue to administer include databases and logins, index and query tuning, and auditing and
security. Additionally, configuring high availability to another data center requires minimal configuration and
administration.
With SQL on Azure VM, you have full control over the operating system and SQL Server instance
configuration. With a VM, it's up to you to decide when to update/upgrade the operating system and
database software and when to install any additional software such as anti-virus. Some automated features
are provided to dramatically simplify patching, backup, and high availability. In addition, you can control the
size of the VM, the number of disks, and their storage configurations. Azure allows you to change the size of
a VM as needed. For information, see Virtual Machine and Cloud Service Sizes for Azure.

Service-level agreement (SLA)


For many IT departments, meeting up-time obligations of a service-level agreement (SLA) is a top priority. In
this section, we look at what SLA applies to each database hosting option.
For both Azure SQL Database and Azure SQL Managed Instance, Microsoft provides an availability SLA of
99.99%. For the latest information, see Service-level agreement.
For SQL on Azure VM, Microsoft provides an availability SLA of 99.95% that covers just the virtual machine.
This SLA does not cover the processes (such as SQL Server) running on the VM and requires that you host at
least two VM instances in an availability set. For the latest information, see the VM SLA. For database high
availability (HA) within VMs, you should configure one of the supported high availability options in SQL Server,
such as Always On availability groups. Using a supported high availability option doesn't provide an additional
SLA, but allows you to achieve >99.99% database availability.

Time to move to Azure


Azure SQL Database is the right solution for cloud-designed applications when developer productivity and
fast time-to-market for new solutions are critical. With programmatic DBA-like functionality, it is perfect for
cloud architects and developers as it lowers the need for managing the underlying operating system and
database.
Azure SQL Managed Instance greatly simplifies the migration of existing applications to Azure, enabling you
to bring migrated database applications to market in Azure quickly.
SQL on Azure VM is perfect if your existing or new applications require large databases or access to all
features in SQL Server or Windows/Linux, and you want to avoid the time and expense of acquiring new on-
premises hardware. It is also a good fit when you want to migrate existing on-premises applications and
databases to Azure as-is - in cases where SQL Database or SQL Managed Instance is not a good fit. Since you do
not need to change the presentation, application, and data layers, you save time and budget on re-architecting
your existing solution. Instead, you can focus on migrating all your solutions to Azure and on the
performance optimizations that may be required by the Azure platform. For more information, see Performance
Best Practices for SQL Server on Azure Virtual Machines.

Create and manage Azure SQL resources with the Azure portal
The Azure portal provides a single page where you can manage all of your Azure SQL resources including your
SQL Server on Azure virtual machines (VMs).
To access the Azure SQL page, from the Azure portal menu, select Azure SQL or search for and select Azure
SQL in any page.

NOTE
Azure SQL provides a quick and easy way to access all of your SQL resources in the Azure portal, including single and
pooled databases in Azure SQL Database as well as the logical server hosting them, SQL Managed Instances, and SQL
Server on Azure VMs. Azure SQL is not a service or resource, but rather a family of SQL-related services.

To manage existing resources, select the desired item in the list. To create new Azure SQL resources, select + Create.

After selecting + Create, view additional information about the different options by selecting Show details on
any tile.
For details, see:
Create a single database
Create an elastic pool
Create a managed instance
Create a SQL virtual machine

Next steps
See Your first Azure SQL Database to get started with SQL Database.
See Your first Azure SQL Managed Instance to get started with SQL Managed Instance.
See SQL Database pricing.
See Azure SQL Managed Instance pricing.
See Provision a SQL Server virtual machine in Azure to get started with SQL Server on Azure VMs.
Identify the right SQL Database or SQL Managed Instance SKU for your on-premises database.
vCore purchasing model overview - Azure SQL Database and Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This article provides a brief overview of the vCore purchasing model used by both Azure SQL Database and
Azure SQL Managed Instance. To learn more about the vCore model for each product, review Azure SQL
Database and Azure SQL Managed Instance.

Overview
A virtual core (vCore) represents a logical CPU and offers you the option to choose the physical characteristics
of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based
purchasing model gives you flexibility, control, transparency of individual resource consumption, and a
straightforward way to translate on-premises workload requirements to the cloud. This model optimizes price,
and allows you to choose compute, memory, and storage resources based on your workload needs.
In the vCore-based purchasing model, your costs depend on the choice and usage of:
Service tier
Hardware configuration
Compute resources (the number of vCores and the amount of memory)
Reserved database storage
Actual backup storage

IMPORTANT
In Azure SQL Database, compute resources (CPU and memory), I/O, and data and log storage are charged per database
or elastic pool. Backup storage is charged per database.

The vCore purchasing model provides transparency in database CPU, memory, and storage resource allocation,
hardware configuration, higher scaling granularity, and pricing discounts with the Azure Hybrid Benefit (AHB)
and Reserved Instance (RI).
In the case of Azure SQL Database, the vCore purchasing model provides higher compute, memory, I/O, and
storage limits than the DTU model.

Service tiers
Two vCore service tiers are available in both Azure SQL Database and Azure SQL Managed Instance:
General Purpose is a budget-friendly tier designed for most workloads with common performance and
availability requirements.
Business Critical is designed for performance-sensitive workloads with strict availability requirements.
The Hyperscale service tier is also available for single databases in Azure SQL Database. This service tier is
designed for most business workloads, providing highly scalable storage, read scale-out, fast scaling, and fast
database restore capabilities.
Resource limits
For more information on resource limits, see:
Azure SQL Database: logical server, single databases, pooled databases
Azure SQL Managed Instance

Compute cost
The vCore-based purchasing model has a provisioned compute tier for both Azure SQL Database and Azure
SQL Managed Instance, and a serverless compute tier for Azure SQL Database.
In the provisioned compute tier, the compute cost reflects the total compute capacity continuously provisioned
for the application independent of workload activity. Choose the resource allocation that best suits your
business needs based on vCore and memory requirements, then scale resources up and down as needed by
your workload.
In the serverless compute tier for Azure SQL Database, compute resources are auto-scaled based on workload
capacity and billed for the amount of compute used, per second.
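A minimal sketch of provisioning a serverless database with the Azure CLI follows; it assumes an existing logical server, and the auto-pause delay and capacity bounds are illustrative values only.

```azurecli
# Serverless General Purpose database that can auto-pause after 60 minutes of inactivity (placeholder names).
az sql db create \
  --resource-group myResourceGroup \
  --server myserver \
  --name myServerlessDb \
  --edition GeneralPurpose \
  --family Gen5 \
  --compute-model Serverless \
  --min-capacity 0.5 \
  --capacity 2 \
  --auto-pause-delay 60
```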
Since three additional replicas are automatically allocated in the Business Critical service tier, the price is
approximately 2.7 times higher than it is in the General Purpose service tier. Likewise, the higher storage price
per GB in the Business Critical service tier reflects the higher IO limits and lower latency of the local SSD storage.

Data and log storage


The following factors affect the amount of storage used for data and log files, and apply to General Purpose and
Business Critical tiers.
Each compute size supports a configurable maximum data size, with a default of 32 GB.
When you configure maximum data size, an additional 30 percent of billable storage is automatically added
for the log file.
In the General Purpose service tier, tempdb uses local SSD storage, and this storage cost is included in the
vCore price.
In the Business Critical service tier, tempdb shares local SSD storage with data and log files, and tempdb
storage cost is included in the vCore price.
In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured
for a database, elastic pool, or managed instance.
For SQL Database, you can select any maximum data size between 1 GB and the supported storage size
maximum, in 1 GB increments. For SQL Managed Instance, select data sizes in multiples of 32 GB up to the
supported storage size maximum.
To monitor the current allocated and used data storage size in SQL Database, use the allocated_data_storage and
storage Azure Monitor metrics respectively.
For both SQL Database and SQL Managed instance, to monitor the current allocated and used storage size of
individual data and log files in a database by using T-SQL, use the sys.database_files view and the
FILEPROPERTY(... , 'SpaceUsed') function.
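For example, a hedged Azure CLI sketch of the two operations described above: configuring the maximum data size (which drives billable storage) and reading the Azure Monitor metrics named in the text. The resource names and the 250 GB value are placeholders.

```azurecli
# Set the maximum data size for a database; you are billed on this configured maximum.
az sql db update \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb \
  --max-size 250GB

# Read the allocated and used data storage metrics for the database.
dbId=$(az sql db show -g myResourceGroup -s myserver -n mydb --query id -o tsv)
az monitor metrics list --resource "$dbId" --metric allocated_data_storage --output table
az monitor metrics list --resource "$dbId" --metric storage --output table
```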

TIP
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.
Backup storage
Storage for database backups is allocated to support the point-in-time restore (PITR) and long-term retention
(LTR) capabilities of SQL Database and SQL Managed Instance. This storage is separate from data and log file
storage, and is billed separately.
PITR: In General Purpose and Business Critical tiers, individual database backups are copied to Azure storage
automatically. The storage size increases dynamically as new backups are created. The storage is used by full,
differential, and transaction log backups. The storage consumption depends on the rate of change of the
database and the retention period configured for backups. You can configure a separate retention period for
each database between 1 and 35 days for SQL Database, and 0 to 35 days for SQL Managed Instance. A
backup storage amount equal to the configured maximum data size is provided at no extra charge.
LTR: You also have the option to configure long-term retention of full backups for up to 10 years. If you set
up an LTR policy, these backups are stored in Azure Blob storage automatically, but you can control how
often the backups are copied. To meet different compliance requirements, you can select different retention
periods for weekly, monthly, and/or yearly backups. The configuration you choose determines how much
storage will be used for LTR backups. For more information, see Long-term backup retention.
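As a hedged example of the LTR configuration described above, the Azure CLI sketch below sets weekly, monthly, and yearly retention for one database; the ISO 8601 durations shown are placeholders, not recommendations.

```azurecli
# Keep weekly backups for 4 weeks, monthly backups for 12 months, and the week-1 backup of each year for 5 years.
az sql db ltr-policy set \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb \
  --weekly-retention P4W \
  --monthly-retention P12M \
  --yearly-retention P5Y \
  --week-of-year 1

# Review the policy that is now in effect.
az sql db ltr-policy show \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb
```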

Next steps
To get started, see:
Creating a SQL Database using the Azure portal
Creating a SQL Managed Instance using the Azure portal
For pricing details, see:
Azure SQL Database pricing page
Azure SQL Managed Instance single instance pricing page
Azure SQL Managed Instance pools pricing page
For details about the specific compute and storage sizes available in the General Purpose and Business Critical
service tiers, see:
vCore-based resource limits for Azure SQL Database.
vCore-based resource limits for pooled Azure SQL Database.
vCore-based resource limits for Azure SQL Managed Instance.
Azure Hybrid Benefit - Azure SQL Database & SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure Hybrid Benefit allows you to exchange your existing licenses for discounted rates on Azure SQL Database
and Azure SQL Managed Instance. You can save up to 30 percent or more on SQL Database and SQL Managed
Instance by using your Software Assurance-enabled SQL Server licenses on Azure. The Azure Hybrid Benefit
page has a calculator to help determine savings.
Changing to Azure Hybrid Benefit does not require any downtime.

Overview

Diagram of the vCore pricing structure for SQL Database: 'License Included' pricing is made up of base compute and SQL license components, while Azure Hybrid Benefit pricing is made up of base compute and Software Assurance components.
With Azure Hybrid Benefit, you pay only for the underlying Azure infrastructure by using your existing SQL
Server license for the SQL Server database engine itself (Base Compute pricing). If you do not use Azure Hybrid
Benefit, you pay for both the underlying infrastructure and the SQL Server license (License-Included pricing).
For Azure SQL Database, Azure Hybrid Benefit is only available when using the provisioned compute tier of the
vCore-based purchasing model. Azure Hybrid Benefit doesn't apply to DTU-based purchasing models or the
serverless compute tier.

Enable Azure Hybrid Benefit


Azure SQL Database
You can choose or change your licensing model for Azure SQL Database using the Azure portal or the API of
your choice.
You can only apply the Azure Hybrid licensing model when you choose a vCore-based purchasing model and
the provisioned compute tier for your Azure SQL Database. Azure Hybrid Benefit isn't available for service tiers
under the DTU-based purchasing model or for the serverless compute tier.
Portal
PowerShell
Azure CLI
REST API

To set or update the license type using the Azure portal:


For new databases, during creation, select Configure database on the Basics tab and select the option to
Save money.
For existing databases, select Compute + storage in the Settings menu and select the option to Save
money.
If you don't see the Save money option in the Azure portal, verify that you selected a service tier using the
vCore-based purchasing model and the provisioned compute tier.
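If you manage databases with the Azure CLI instead of the portal, the same licensing switch can be made roughly as sketched below. The licenseType property name follows the SQL Database REST API, with BasePrice corresponding to Azure Hybrid Benefit and LicenseIncluded to the default pricing; treat the exact syntax as an assumption to verify against the CLI reference.

```azurecli
# Apply Azure Hybrid Benefit (base rate) to an existing vCore database; names are placeholders.
az sql db update \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb \
  --set licenseType=BasePrice

# Switch back to License Included pricing.
az sql db update \
  --resource-group myResourceGroup \
  --server myserver \
  --name mydb \
  --set licenseType=LicenseIncluded
```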
Azure SQL Managed Instance
You can choose or change your licensing model for Azure SQL Managed Instance using the Azure portal or the
API of your choice.
Portal
PowerShell
Azure CLI
REST API

To set or update the license type using the Azure portal:


For new managed instances, during creation, select Configure Managed Instance on the Basics tab and
select the option for Azure Hybrid Benefit.
For existing managed instances, select Compute + storage in the Settings menu and select the option for
Azure Hybrid Benefit.
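For managed instances, a comparable hedged Azure CLI sketch follows; the instance and resource group names are placeholders, and BasePrice/LicenseIncluded are the same license-type values used for the vCore model above.

```azurecli
# Apply Azure Hybrid Benefit to an existing managed instance (placeholder names).
az sql mi update \
  --resource-group myResourceGroup \
  --name myManagedInstance \
  --license-type BasePrice

# Revert to License Included pricing.
az sql mi update \
  --resource-group myResourceGroup \
  --name myManagedInstance \
  --license-type LicenseIncluded
```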

Frequently asked questions


Are there dual-use rights with Azure Hybrid Benefit for SQL Server?
You have 180 days of dual use rights of the license to ensure migrations are running seamlessly. After that 180-
day period, you can only use the SQL Server license on Azure. You no longer have dual use rights on-premises
and on Azure.
How does Azure Hybrid Benefit for SQL Server differ from license mobility?
We offer license mobility benefits to SQL Server customers with Software Assurance. License mobility allows
reassignment of their licenses to a partner's shared servers. You can use this benefit on Azure IaaS and AWS
EC2.
Azure Hybrid Benefit for SQL Server differs from license mobility in two key areas:
It provides economic benefits for moving highly virtualized workloads to Azure. SQL Server Enterprise
Edition customers can get four cores in Azure in the General Purpose SKU for every core they own on-
premises for highly virtualized applications. License mobility doesn't allow any special cost benefits for
moving virtualized workloads to the cloud.
It provides for a PaaS destination on Azure (SQL Managed Instance) that's highly compatible with SQL Server.
What are the specific rights of the Azure Hybrid Benefit for SQL Server?
SQL Database and SQL Managed Instance customers have the following rights associated with Azure Hybrid
Benefit for SQL Server:

SQL Server Enterprise Edition core customers with SA can pay the base rate on the Hyperscale, General Purpose, or Business Critical SKU:
One core on-premises = Four vCores in the Hyperscale SKU
One core on-premises = Four vCores in the General Purpose SKU
One core on-premises = One vCore in the Business Critical SKU

SQL Server Standard Edition core customers with SA can pay the base rate on the Hyperscale, General Purpose, or Business Critical SKU:
One core on-premises = One vCore in the Hyperscale SKU
One core on-premises = One vCore in the General Purpose SKU
Four cores on-premises = One vCore in the Business Critical SKU

Next steps
For help with choosing an Azure SQL deployment option, see Service comparison.
For a comparison of SQL Database and SQL Managed Instance features, see Features of SQL Database and
SQL Managed Instance.
Save costs for resources with reserved capacity - Azure SQL Database & SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Save money with Azure SQL Database and SQL Managed Instance by committing to a reservation for compute
resources compared to pay-as-you-go prices. With reserved capacity, you make a commitment for SQL
Database and/or SQL Managed Instance use for a period of one or three years to get a significant discount on
the compute costs. To purchase reserved capacity, you need to specify the Azure region, deployment type,
performance tier, and term.
You do not need to assign the reservation to a specific database or managed instance. Matching existing
deployments that are already running or ones that are newly deployed automatically get the benefit. Hence,
purchasing reserved capacity does not modify the infrastructure of existing resources, so no failover or
downtime is triggered on existing resources. By purchasing a reservation, you commit to usage for the
compute costs for a period of one or three years. As soon as you buy a reservation, the compute charges that
match the reservation attributes are no longer charged at the pay-as-you-go rates.
A reservation applies to both primary and billable secondary compute replicas, but does not cover software,
networking, or storage charges associated with the service. At the end of the reservation term, the billing benefit
expires and the database or managed instance is billed at the pay-as-you-go price. Reservations do not
automatically renew. For pricing information, see the reserved capacity offering.
You can buy reserved capacity in the Azure portal. Pay for the reservation up front or with monthly payments. To
buy reserved capacity:
You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
For Enterprise subscriptions, Add Reserved Instances must be enabled in the EA portal. Or, if that setting is
disabled, you must be an EA Admin on the subscription.
For more information about how enterprise customers and Pay-As-You-Go customers are charged for
reservation purchases, see Understand Azure reservation usage for your Enterprise enrollment and Understand
Azure reservation usage for your Pay-As-You-Go subscription.

NOTE
Purchasing reserved capacity does not pre-allocate or reserve specific infrastructure resources (virtual machines or nodes)
for your use.

Determine correct size before purchase


The size of reservation should be based on the total amount of compute used by the existing or soon-to-be-
deployed database or managed instance within a specific region and using the same performance tier and
hardware configuration.
For example, let's suppose that you are running one General Purpose, Gen5 – 16 vCore elastic pool and two
Business Critical Gen5 – 4 vCore single databases. Further, let's suppose that you plan to deploy, within the next
month, an additional General Purpose Gen5 – 16 vCore elastic pool and one Business Critical Gen5 – 32 vCore
elastic pool. Also, let's suppose that you know that you will need these resources for at least one year. In this case,
you should purchase a 32 (2x16) vCore one-year reservation for single database/elastic pool General Purpose –
Gen5 and a 40 (2x4 + 32) vCore one-year reservation for single database/elastic pool Business Critical – Gen5.

Buy reserved capacity


1. Sign in to the Azure portal.
2. Select All services > Reservations.
3. Select Add and then in the Purchase Reservations pane, select SQL Database to purchase a new
reservation for SQL Database.
4. Fill in the required fields. Existing databases in SQL Database and SQL Managed Instance that match the
attributes you select qualify to get the reserved capacity discount. The actual number of databases or
managed instances that get the discount depends on the scope and quantity selected.

The following table describes required fields.

Subscription: The subscription used to pay for the capacity reservation. The payment method on the subscription is charged the upfront costs for the reservation. The subscription type must be an enterprise agreement (offer number MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer number MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.

Scope: The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select Shared, the vCore reservation discount is applied to the database or managed instance running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator. If you select Single subscription, the vCore reservation discount is applied to the databases or managed instances in this subscription. If you select Single resource group, the reservation discount is applied to the instances of databases or managed instances in the selected subscription and the selected resource group within that subscription. If you select Management group, the reservation discount is applied to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.

Region: The Azure region that's covered by the capacity reservation.

Deployment Type: The SQL resource type that you want to buy the reservation for.

Performance Tier: The service tier for the databases or managed instances.

Term: One year or three years.

Quantity: The amount of compute resources being purchased within the capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and will get the billing discount. For example, if you run or plan to run multiple databases with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify the quantity as 16 to maximize the benefit for all the databases.

5. Review the cost of the capacity reservation in the Costs section.
6. Select Purchase.
7. Select View this Reservation to see the status of your purchase.

Cancel, exchange, or refund reservations


You can cancel, exchange, or refund reservations with certain limitations. For more information, see Self-service
exchanges and refunds for Azure Reservations.
vCore size flexibility
vCore size flexibility helps you scale up or down within a performance tier and region, without losing the
reserved capacity benefit. Reserved capacity also provides you with the flexibility to temporarily move your hot
databases in and out of elastic pools (within the same region and performance tier) as part of your normal
operations without losing the reserved capacity benefit. By keeping an unapplied buffer in your reservation, you
can effectively manage the performance spikes without exceeding your budget.

Limitation
You cannot reserve DTU-based (basic, standard, or premium) databases in SQL Database. Reserved capacity
pricing is only supported for features and products that are in General Availability state.

Need help? Contact us


If you have questions or need help, create a support request.

Next steps
The vCore reservation discount is applied automatically to the number of databases or managed instances that
match the capacity reservation scope and attributes. You can update the scope of the capacity reservation
through the Azure portal, PowerShell, Azure CLI, or the API.
For information on Azure SQL Database service tiers for the vCore model, see vCore model overview - Azure
SQL Database.
For information on Azure SQL Managed Instance service tiers for the vCore model, see vCore model
overview - Azure SQL Managed Instance.
To learn how to manage the capacity reservation, see manage reserved capacity.
To learn more about Azure Reservations, see the following articles:
What are Azure Reservations?
Manage Azure Reservations
Understand Azure Reservations discount
Understand reservation usage for your Pay-As-You-Go subscription
Understand reservation usage for your Enterprise enrollment
Azure Reservations in Partner Center Cloud Solution Provider (CSP) program
General Purpose service tier - Azure SQL Database
and Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and Azure SQL Managed Instance are based on the SQL Server database engine
architecture adapted for the cloud environment in order to ensure default availability even in cases of
infrastructure failures.
This article describes and compares the General Purpose service tier used by Azure SQL Database and Azure
SQL Managed Instance. The General Purpose service tier is best used for budget-oriented, balanced compute
and storage options.

Overview
The architectural model for the General Purpose service tier is based on a separation of compute and storage.
This architectural model relies on high availability and reliability of Azure Blob storage that transparently
replicates database files and guarantees no data loss if underlying infrastructure failure happens.
The following figure shows four nodes in the standard architectural model with the separated compute and
storage layers.

In the architectural model for the General Purpose service tier, there are two layers:
A stateless compute layer that is running the sqlservr.exe process and contains only transient and cached
data (for example, plan cache, buffer pool, and column store pool). This stateless node is operated by Azure
Service Fabric, which initializes the process, controls health of the node, and performs failover to another node if
necessary.
A stateful data layer with database files (.mdf/.ldf) that are stored in Azure Blob storage. Azure Blob storage
guarantees that there will be no data loss of any record that is placed in any database file. Azure Storage has
built-in data availability/redundancy that ensures that every record in the log file or page in the data file will be
preserved even if the process crashes.
Whenever the database engine or operating system is upgraded, some part of the underlying infrastructure fails, or
some critical issue is detected in the sqlservr.exe process, Azure Service Fabric will move the stateless
process to another stateless compute node. There is a set of spare nodes waiting to run the new compute
service if a failover of the primary node happens, in order to minimize failover time. Data in the Azure storage layer
is not affected, and data/log files are attached to the newly initialized process. This process guarantees 99.99%
availability by default and 99.995% availability when zone redundancy is enabled. There may be some
performance impact on heavy workloads that are running due to transition time and the fact that the new node
starts with a cold cache.

When to choose this service tier


The General Purpose service tier is the default service tier in Azure SQL Database and Azure SQL Managed
Instance and is designed for most generic workloads. If you need a fully managed database engine with a
default SLA and storage latency between 5 and 10 ms, the General Purpose tier is the option for you.

Compare General Purpose resource limits


Review the table in this section for a brief overview comparison of the resource limits for Azure SQL Database
and Azure SQL Managed Instance in the General Purpose service tier.
For comprehensive details about resource limits, review:
Azure SQL Database: vCore single database, vCore pooled database, Hyperscale, DTU single database and
DTU pooled databases
Azure SQL Managed Instance: vCore instance limits
To compare features between SQL Database and SQL Managed Instance, see the database engine features.
The following table shows resource limits for both Azure SQL Database and Azure SQL Managed Instance in the
General Purpose service tier:

Category | Azure SQL Database | Azure SQL Managed Instance
Compute size | 1 - 80 vCores | 4, 8, 16, 24, 32, 40, 64, 80 vCores
Storage type | Remote storage | Remote storage
Storage size | 1 GB - 4 TB | 2 GB - 16 TB
Tempdb size | 32 GB per vCore | 24 GB per vCore
Log write throughput | Single databases: 4.5 MB/s per vCore (max 50 MB/s). Elastic pools: 6 MB/s per vCore (max 62.5 MB/s) | General Purpose: 3 MB/s per vCore (max 120 MB/s). Business Critical: 4 MB/s per vCore (max 96 MB/s)
Availability | Default SLA. 99.995% SLA with zone redundancy | Default SLA
Backups | 1-35 days (7 days by default) | 1-35 days (7 days by default)
Read-only replicas | 0 built-in. 0 - 4 geo-replicas | 0 built-in. 0 - 1 geo-replicas using auto-failover groups
Pricing/Billing | vCore, reserved storage, backup storage, and geo-replicas are charged. IOPS is not charged. | vCore, reserved storage, backup storage, and geo-replicas are charged. IOPS is not charged.
Discount models | Reserved instances. Azure Hybrid Benefit (not available on dev/test subscriptions). Enterprise and Pay-As-You-Go Dev/Test subscriptions | Reserved instances. Azure Hybrid Benefit (not available on dev/test subscriptions). Enterprise and Pay-As-You-Go Dev/Test subscriptions

Next steps
Find resource characteristics (number of cores, I/O, memory) of the General Purpose/standard tier in SQL
Managed Instance, single database in vCore model or DTU model, or elastic pool in vCore model and DTU
model.
Learn about Business Critical and Hyperscale service tiers.
Learn about Service Fabric.
For more options for high availability and disaster recovery, see Business Continuity.
Business Critical tier - Azure SQL Database and
Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and Azure SQL Managed Instance are both based on the SQL Server database engine
architecture adjusted for the cloud environment in order to ensure default SLA availability even in cases of
infrastructure failures.
This article describes and compares the Business Critical service tier used by Azure SQL Database and Azure
SQL Managed Instance. The Business Critical service tier is best used for applications requiring a high transaction
rate, low I/O latency, and high I/O throughput. This service tier offers the highest resilience to failures and fast
failovers using multiple synchronously updated replicas.

Overview
The Business Critical service tier model is based on a cluster of database engine processes. This architectural
model relies on the fact that there's always a quorum of available database engine nodes and has minimal
performance impact on your workload even during maintenance activities.
Azure upgrades and patches the underlying operating system, drivers, and SQL Server database engine
transparently with minimal downtime for end users.
In the Business Critical model, compute and storage is integrated on each node. High availability is achieved by
replication of data between database engine processes on each node of a four node cluster, with each node
using locally attached SSD as data storage. This technology is similar to SQL Server Always On availability
groups.
Both the SQL Server database engine process and the underlying .mdf/.ldf files are placed on the same node with
locally attached SSD storage providing low latency to your workload. High availability is implemented using
technology similar to SQL Server Always On availability groups. Every database is a cluster of database nodes
with one primary database that is accessible for customer workloads, and three secondary processes
containing copies of the data. The primary node constantly pushes changes to the secondary nodes in order to
ensure that the data is available on secondary replicas if the primary node fails for any reason. Failover is
handled by the SQL Server database engine – one secondary replica becomes the primary node and a new
secondary replica is created to ensure there are enough nodes in the cluster. The workload is automatically
redirected to the new primary node.
In addition, the Business Critical cluster has a built-in Read Scale-Out capability that provides a free-of-charge
built-in read-only replica that can be used to run read-only queries (for example, reports) that shouldn't affect
the performance of your primary workload.

When to choose this service tier


The Business Critical service tier is designed for applications that require low-latency responses from the
underlying SSD storage (1-2 ms on average), fast recovery if the underlying infrastructure fails, or need to off-
load reports, analytics, and read-only queries to the free-of-charge readable secondary replica of the primary
database.
The key reasons why you should choose the Business Critical service tier instead of the General Purpose tier are:
Low I/O latency requirements – workloads that need a fast response from the storage layer (1-2
milliseconds on average) should use the Business Critical tier.
Workloads with reporting and analytic queries that can be redirected to the free-of-charge secondary
read-only replica.
Higher resiliency and faster recovery from failures. In case of a system failure, the database on the
primary instance is disabled and one of the secondary replicas immediately becomes the new read-
write primary database that is ready to process queries. The database engine doesn't need to analyze and
redo transactions from the log file and load all data into the memory buffer.
Advanced data corruption protection. The Business Critical tier leverages database replicas behind the
scenes for business continuity purposes, and so the service also leverages automatic page repair, which
is the same technology used for SQL Server database mirroring and availability groups. In the event that a
replica can't read a page due to a data integrity issue, a fresh copy of the page will be retrieved from another
replica, replacing the unreadable page without data loss or customer downtime. This functionality is
applicable in the General Purpose tier if the database has a geo-secondary replica.
Higher availability - The Business Critical tier in a multi-AZ configuration provides resiliency to zonal failures
and a higher availability SLA.
Fast geo-recovery - When active geo-replication is configured, the Business Critical tier has a guaranteed
Recovery Point Objective (RPO) of 5 seconds and Recovery Time Objective (RTO) of 30 seconds for 100% of
deployed hours.

Compare Business Critical resource limits


Review the table in this section for a brief overview comparison of the resource limits for Azure SQL Database
and Azure SQL Managed Instance in the Business Critical service tier.
For comprehensive details about resource limits, review:
Azure SQL Database: vCore single database, vCore pooled database, Hyperscale, DTU single database and
DTU pooled databases
Azure SQL Managed Instance: vCore instance limits
To compare features between SQL Database and SQL Managed Instance, see the database engine features.
The following table shows resource limits for both Azure SQL Database and Azure SQL Managed Instance in the
Business Critical service tier.

Category | Azure SQL Database | Azure SQL Managed Instance
Compute size | 1 to 128 vCores | 4, 8, 16, 24, 32, 40, 64, 80 vCores
Storage type | Local SSD storage | Local SSD storage
Storage size | 1 GB – 4 TB | 32 GB – 16 TB
Tempdb size | 32 GB per vCore | Up to 4 TB - limited by storage size
Log write throughput | Single databases: 12 MB/s per vCore (max 96 MB/s). Elastic pools: 15 MB/s per vCore (max 120 MB/s) | 4 MB/s per vCore (max 48 MB/s)
Availability | Default SLA. 99.995% SLA with zone redundancy | Default SLA
Backups | RA-GRS, 1-35 days (7 days by default) | RA-GRS, 1-35 days (7 days by default)
Read-only replicas | 1 built-in high availability replica is readable. 0 - 4 geo-replicas | 1 built-in high availability replica is readable. 0 - 1 geo-replicas using auto-failover groups
Pricing/Billing | vCore, reserved storage, backup storage, and geo-replicas are charged. High availability replicas aren't charged. IOPS isn't charged. | vCore, reserved storage, backup storage, and geo-replicas are charged. High availability replicas aren't charged. IOPS isn't charged.
Discount models | Reserved instances. Azure Hybrid Benefit (not available on dev/test subscriptions). Enterprise and Pay-As-You-Go Dev/Test subscriptions | Reserved instances. Azure Hybrid Benefit (not available on dev/test subscriptions). Enterprise and Pay-As-You-Go Dev/Test subscriptions

Next steps
Find resource characteristics (number of cores, I/O, memory) of Business Critical tier in SQL Managed
Instance, Single database in vCore model or DTU model, or Elastic pool in vCore model and DTU model.
Learn about General Purpose and Hyperscale service tiers.
Learn about Service Fabric.
For more options for high availability and disaster recovery, see Business Continuity.
Features comparison: Azure SQL Database and
Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and SQL Managed Instance share a common code base with the latest stable version of
SQL Server. Most of the standard SQL language, query processing, and database management features are
identical. The features that are common between SQL Server and SQL Database or SQL Managed Instance are:
Language features - Control of flow language keywords, Cursors, Data types, DML statements, Predicates,
Sequence numbers, Stored procedures, and Variables.
Database features - Automatic tuning (plan forcing), Change tracking, Database collation, Contained
databases, Contained users, Data compression, Database configuration settings, Online index operations,
Partitioning, and Temporal tables (see getting started guide).
Security features - Application roles, Dynamic data masking (see getting started guide), Row Level Security,
and Threat detection - see getting started guides for SQL Database and SQL Managed Instance.
Multi-model capabilities - Graph processing, JSON data (see getting started guide), OPENXML, Spatial,
OPENJSON, and XML indexes.
Azure manages your databases and guarantees their high availability. Some features that might affect high
availability or can't be used in a PaaS world have limited functionality in SQL Database and SQL Managed
Instance. These features are described in the tables below.
If you need more details about the differences, you can find them in the separate pages:
Azure SQL Database vs. SQL Server differences
Azure SQL Managed Instance vs. SQL Server differences

Features of SQL Database and SQL Managed Instance


The following table lists the major features of SQL Server and provides information about whether the feature is
partially or fully supported in Azure SQL Database and Azure SQL Managed Instance, with a link to more
information about the feature.

Feature | Azure SQL Database | Azure SQL Managed Instance

Always Encrypted Yes - see Cert store and Key vault Yes - see Cert store and Key vault

Always On Availability Groups 99.99-99.995% availability is 99.99% availability is guaranteed for


guaranteed for every database. every database and can't be managed
Disaster recovery is discussed in by user. Disaster recovery is discussed
Overview of business continuity with in Overview of business continuity
Azure SQL Database with Azure SQL Database. Use Auto-
failover groups to configure a
secondary SQL Managed Instance in
another region. SQL Server instances
and SQL Database can't be used as
secondaries for SQL Managed
Instance.

Attach a database No No

Auditing Yes Yes, with some differences

Azure Active Directory (Azure AD) Yes. Azure AD users only. Yes. Including server-level Azure AD
authentication logins.

BACKUP command No, only system-initiated automatic Yes, user initiated copy-only backups
backups - see Automated backups to Azure Blob storage (automatic
system backups can't be initiated by
user) - see Backup differences

Built-in functions Most - see individual functions Yes - see Stored procedures, functions,
triggers differences

BULK INSERT statement Yes, but just from Azure Blob storage Yes, but just from Azure Blob Storage
as a source. as a source - see differences.

Certificates and asymmetric keys Yes, without access to file system for Yes, without access to file system for
BACKUP and CREATE operations. BACKUP and CREATE operations -
see certificate differences.

Change data capture - CDC Yes, for S3 tier and above. Basic, S0, S1, Yes
S2 are not supported.

Collation - server/instance No, default server collation Yes, can be set when the instance is
SQL_Latin1_General_CP1_CI_AS is created and can't be updated later.
always used.

Columnstore indexes Yes - Premium tier, Standard tier - S3 Yes


and above, General Purpose tier,
Business Critical, and Hyperscale tiers

Common language runtime - CLR No Yes, but without access to file system
in CREATE ASSEMBLY statement - see
CLR differences

Credentials Yes, but only database scoped Yes, but only Azure Key Vault and
credentials. SHARED ACCESS SIGNATURE are
supported - see details

Cross-database/three-part name No - see Elastic queries Yes


queries

Cross-database transactions No Yes, within the instance. See Linked


server differences for cross-instance
queries.

Database mail - DbMail No Yes

Database mirroring No No

Database snapshots No No

DBCC statements Most - see individual statements Yes - see DBCC differences

DDL statements Most - see individual statements Yes - see T-SQL differences

DDL triggers Database only Yes

Distributed partition views No Yes

Distributed transactions - MS DTC No - see Elastic transactions No - see Elastic transactions

DML triggers Most - see individual statements Yes

DMVs Most - see individual DMVs Yes - see T-SQL differences

Elastic query (in public preview) Yes, with required RDBMS type. No, use native cross-DB queries and
Linked Server instead

Event notifications No - see Alerts No

Expressions Yes Yes

Extended events (XEvent) Some - see Extended events in SQL Yes - see Extended events differences
Database

Extended stored procedures No No

Files and file groups Primary file group only Yes. File paths are automatically
assigned and the file location can't be
specified in
ALTER DATABASE ADD FILE
statement.

Filestream No No

Full-text search (FTS) Yes, but third-party word breakers are Yes, but third-party word breakers are
not supported not supported

Functions Most - see individual functions Yes - see Stored procedures, functions,
triggers differences

In-memory optimization Yes in Premium and Business Critical Yes in Business Critical service tier
service tiers.
Limited support for non-persistent In-
Memory OLTP objects such as
memory-optimized table variables in
Hyperscale service tier.

Language elements Most - see individual elements Yes - see T-SQL differences

Ledger Yes No

Linked servers No - see Elastic query Yes. Only to SQL Server and SQL
Database without distributed
transactions.

Linked servers that read from files No. Use BULK INSERT or No. Use BULK INSERT or
(CSV, Excel) OPENROWSET as an alternative for OPENROWSET as an alternative for
CSV format. CSV format. Track these requests on
SQL Managed Instance feedback item

Log shipping High availability is included with every Natively built in as a part of Azure
database. Disaster recovery is Data Migration Service (DMS)
discussed in Overview of business migration process. Natively built for
continuity. custom data migration projects as an
external Log Replay Service (LRS).
Not available as High availability
solution, because other High
availability methods are included with
every database and it is not
recommended to use Log-shipping as
HA alternative. Disaster recovery is
discussed in Overview of business
continuity. Not available as a
replication mechanism between
databases - use secondary replicas on
Business Critical tier, auto-failover
groups, or transactional replication as
the alternatives.

Logins and users Yes, but CREATE and ALTER login Yes, with some differences. Windows
statements do not offer all the options logins are not supported and they
(no Windows and server-level Azure should be replaced with Azure Active
Active Directory logins). Directory logins.
EXECUTE AS LOGIN is not supported
- use EXECUTE AS USER instead.

Minimal logging in bulk import No, only Full Recovery model is No, only Full Recovery model is
supported. supported.

Modifying system data No Yes

OLE Automation No No

OPENDATASOURCE No Yes, only to SQL Database, SQL


Managed Instance and SQL Server. See
T-SQL differences

OPENQUERY No Yes, only to SQL Database, SQL


Managed Instance and SQL Server. See
T-SQL differences

OPENROWSET Yes, only to import from Azure Blob Yes, only to SQL Database, SQL
storage. Managed Instance and SQL Server,
and to import from Azure Blob
storage. See T-SQL differences

Operators Most - see individual operators Yes - see T-SQL differences



Polybase No. You can query data in the files No. You can query data in the files
placed on Azure Blob Storage using placed on Azure Blob Storage using
OPENROWSET function or use an OPENROWSET function, a linked server
external table that references a that references serverless SQL pool in
serverless SQL pool in Synapse Synapse Analytics, SQL Database, or
Analytics. SQL Server.

Query Notifications No Yes

Machine Learning Services (Formerly R No Yes, see Machine Learning Services in


Services) Azure SQL Managed Instance

Recovery models Only Full Recovery that guarantees Only Full Recovery that guarantees
high availability is supported. Simple high availability is supported. Simple
and Bulk Logged recovery models are and Bulk Logged recovery models are
not available. not available.

Resource governor No Yes

RESTORE statements No Yes, with mandatory FROM URL


options for the backups files placed on
Azure Blob Storage. See Restore
differences

Restore database from backup From automated backups only - see From automated backups - see SQL
SQL Database recovery Database recovery and from full
backups placed on Azure Blob Storage
- see Backup differences

Restore database to SQL Server No. Use BACPAC or BCP instead of No, because SQL Server database
native restore. engine used in SQL Managed Instance
has higher version than any RTM
version of SQL Server used on-
premises. Use BACPAC, BCP, or
Transactional replication instead.

Semantic search No No

Service Broker No Yes, but only within the instance. If you


are using remote Service Broker
routes, try to consolidate databases
from several distributed SQL Server
instances into one SQL Managed
Instance during migration and use
only local routes. See Service Broker
differences

Server configuration settings No Yes - see T-SQL differences

Set statements Most - see individual statements Yes - see T-SQL differences

SQL Server Agent No - see Elastic jobs (preview) Yes - see SQL Server Agent differences

SQL Server Auditing No - see SQL Database auditing Yes - see Auditing differences

System stored functions Most - see individual functions Yes - see Stored procedures, functions,
triggers differences

System stored procedures Some - see individual stored Yes - see Stored procedures, functions,
procedures triggers differences

System tables Some - see individual tables Yes - see T-SQL differences

System catalog views Some - see individual views Yes - see T-SQL differences

TempDB Yes. 32-GB size per core for every Yes. 24-GB size per vCore for entire GP
database. tier and limited by instance size on BC
tier

Temporary tables Local and database-scoped global Local and instance-scoped global
temporary tables temporary tables

Time zone choice No Yes, and it must be configured when


the SQL Managed Instance is created.

Trace flags No Yes, but only limited set of global trace


flags. See DBCC differences

Transactional Replication Yes, Transactional and snapshot Yes, in public preview. See the
replication subscriber only constraints here.

Transparent data encryption (TDE) Yes - General Purpose, Business Yes


Critical, and Hyperscale (in preview)
service tiers only

Windows authentication No Yes, see Windows Authentication for


Azure Active Directory principals
(Preview).

Windows Server Failover Clustering No. Other techniques that provide No. Other techniques that provide
high availability are included with high availability are included with
every database. Disaster recovery is every database. Disaster recovery is
discussed in Overview of business discussed in Overview of business
continuity with Azure SQL Database. continuity with Azure SQL Database.

Platform capabilities
The Azure platform provides a number of PaaS capabilities that are added as additional value to the standard
database features. There are a number of external services that can be used with Azure SQL Database.

Platform feature | Azure SQL Database | Azure SQL Managed Instance

Active geo-replication Yes - all service tiers. Public Preview in No, see Auto-failover groups as an
Hyperscale. alternative.

Auto-failover groups Yes - all service tiers. Public Preview in Yes, see Auto-failover groups.
Hyperscale.

Auto-scale Yes, but only in serverless model. In No, you need to choose reserved
the non-serverless model, the change compute and storage. The change of
of service tier (change of vCore, service tier (vCore or max storage) is
storage, or DTU) is fast and online. The online and requires minimal or no
service tier change requires minimal or downtime.
no downtime.

Automatic backups Yes. Full backups are taken every 7 Yes. Full backups are taken every 7
days, differential 12 hours, and log days, differential 12 hours, and log
backups every 5-10 min. backups every 5-10 min.

Automatic tuning (indexes) Yes No

Availability Zones Yes No

Azure Resource Health Yes No

Backup retention Yes. 7 days default, max 35 days. Yes. 7 days default, max 35 days.
Hyperscale backups are currently
limited to a 7 day retention period.

Data Migration Service (DMS) Yes Yes

Elastic jobs Yes - see Elastic jobs (preview) No (SQL Agent can be used instead).

File system access No. Use BULK INSERT or No. Use BULK INSERT or
OPENROWSET to access and load data OPENROWSET to access and load data
from Azure Blob Storage as an from Azure Blob Storage as an
alternative. alternative.

Geo-restore Yes Yes

Hyperscale architecture Yes No

Long-term backup retention - LTR Yes, keep automatically taken backups Yes, keep automatically taken backups
up to 10 years. Long-term retention up to 10 years.
policies are not yet supported for
Hyperscale databases.

Pause/resume Yes, in serverless model No

Policy-based management No No

Public IP address Yes. The access can be restricted using Yes. Needs to be explicitly enabled and
firewall or service endpoints. port 3342 must be enabled in NSG
rules. Public IP can be disabled if
needed. See Public endpoint for more
details.

Point in time database restore Yes - all service tiers. See SQL Yes - see SQL Database recovery
Database recovery

Resource pools Yes, as Elastic pools Yes. A single instance of SQL Managed
Instance can have multiple databases
that share the same pool of resources.
In addition, you can deploy multiple
instances of SQL Managed Instance in
instance pools (preview) that can share
the resources.

Scaling up or down (online) Yes, you can either change DTU or Yes, you can change reserved vCores
reserved vCores or max storage with or max storage with the minimal
the minimal downtime. downtime.

SQL Alias No, use DNS Alias No, use Cliconfg to set up alias on the
client machines.

SQL Analytics Yes Yes

SQL Data Sync Yes No

SQL Server Analysis Services (SSAS) No, Azure Analysis Services is a No, Azure Analysis Services is a
separate Azure cloud service. separate Azure cloud service.

SQL Server Integration Services (SSIS) Yes, with a managed SSIS in Azure Yes, with a managed SSIS in Azure
Data Factory (ADF) environment, Data Factory (ADF) environment,
where packages are stored in SSISDB where packages are stored in SSISDB
hosted by Azure SQL Database and hosted by SQL Managed Instance and
executed on Azure SSIS Integration executed on Azure SSIS Integration
Runtime (IR), see Create Azure-SSIS IR Runtime (IR), see Create Azure-SSIS IR
in ADF. in ADF.

To compare the SSIS features in SQL To compare the SSIS features in SQL
Database and SQL Managed Instance, Database and SQL Managed Instance,
see Compare SQL Database to SQL see Compare SQL Database to SQL
Managed Instance. Managed Instance.

SQL Server Reporting Services (SSRS) No - see Power BI No - use Power BI paginated reports
instead or host SSRS on an Azure VM.
While SQL Managed Instance cannot
run SSRS as a service, it can host SSRS
catalog databases for a reporting
server installed on Azure Virtual
Machine, using SQL Server
authentication.

Query Performance Insights (QPI) Yes No. Use built-in reports in SQL Server
Management Studio and Azure Data
Studio.

VNet Partial, it enables restricted access Yes, SQL Managed Instance is injected
using VNet Endpoints in customer's VNet. See subnet and
VNet

VNet Service endpoint Yes Yes

VNet Global peering Yes, using Private IP and service Yes, using Virtual network peering.
endpoints

Private connectivity Yes, using Private Link Yes, using VNet.

Tools
Azure SQL Database and Azure SQL Managed Instance support various data tools that can help you manage
your data.

Tool | Azure SQL Database | Azure SQL Managed Instance

Azure portal Yes Yes

Azure CLI Yes Yes

Azure Data Studio Yes Yes

Azure PowerShell Yes Yes

BACPAC file (export) Yes - see SQL Database export Yes - see SQL Managed Instance
export

BACPAC file (import) Yes - see SQL Database import Yes - see SQL Managed Instance
import

Data Quality Services (DQS) No No

Master Data Services (MDS) No No. Host MDS on an Azure VM. While
SQL Managed Instance cannot run
MDS as a service, it can host MDS
databases for a MDS service installed
on Azure Virtual Machine, using SQL
Server authentication.

SMO Yes Yes version 150

SQL Server Data Tools (SSDT) Yes Yes

SQL Server Management Studio Yes Yes version 18.0 and higher
(SSMS)

SQL Server PowerShell Yes Yes

SQL Server Profiler No - see Extended events Yes

System Center Operations Manager Yes Yes

Migration methods
You can use different migration methods to move your data between SQL Server, Azure SQL Database, and
Azure SQL Managed Instance. Some methods are online and pick up all changes that are made on the
source while the migration is running, while with offline methods you need to stop the workload that is
modifying data on the source while the migration is in progress.
Source | Azure SQL Database | Azure SQL Managed Instance

SQL Server (on-prem, AzureVM, Online: Transactional Replication Online: Data Migration Service (DMS),
Amazon RDS) Offline: Data Migration Service Transactional Replication
(DMS), BACPAC file (import), BCP Offline: Native backup/restore,
BACPAC file (import), BCP, Snapshot
replication

Single database Offline: BACPAC file (import), BCP Offline: BACPAC file (import), BCP

SQL Managed Instance Online: Transactional Replication Online: Transactional Replication


Offline: BACPAC file (import), BCP, Offline: Cross-instance point-in-time
Snapshot replication restore (Azure PowerShell or Azure
CLI), Native backup/restore, BACPAC
file (import), BCP, Snapshot replication

Next steps
Microsoft continues to add features to Azure SQL Database. Visit the Service Updates webpage for Azure for the
newest updates using these filters:
Filtered to Azure SQL Database.
Filtered to General Availability (GA) announcements for SQL Database features.
For more information about Azure SQL Database and Azure SQL Managed Instance, see:
What is Azure SQL Database?
What is Azure SQL Managed Instance?
What is an Azure SQL Managed Instance pool?
Multi-model capabilities of Azure SQL Database
and SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Multi-model databases enable you to store and work with data in multiple formats, such as relational data,
graph, JSON or XML documents, spatial data, and key-value pairs.
The Azure SQL family of products uses a relational model that provides the best performance for a variety of
general-purpose applications. However, Azure SQL products like Azure SQL Database and SQL Managed
Instance are not limited to relational data. They enable you to use non-relational formats that are tightly
integrated into the relational model.
Consider using the multi-model capabilities of Azure SQL in the following cases:
You have some information or structures that are a better fit for NoSQL models, and you don't want to use a
separate NoSQL database.
A majority of your data is suitable for a relational model, and you need to model some parts of your data in a
NoSQL style.
You want to use the Transact-SQL language to query and analyze both relational and NoSQL data, and then
integrate that data with tools and applications that can use the SQL language.
You want to apply database features such as in-memory technologies to improve the performance of your
analytics or the processing of your NoSQL data structures. You can use transactional replication or readable
replicas to create copies of your data and offload some analytic workloads from the primary database.
The following sections describe the most important multi-model capabilities of Azure SQL.

NOTE
You can use JSONPath expressions, XQuery/XPath expressions, spatial functions, and graph query expressions in the same
Transact-SQL query to access any data that you stored in the database. Any tool or programming language that can
execute Transact-SQL queries can also use that query interface to access multi-model data. This is the key difference from
multi-model databases such as Azure Cosmos DB, which provide specialized APIs for data models.

Graph features
Azure SQL products offer graph database capabilities to model many-to-many relationships in a database. A
graph is a collection of nodes (or vertices) and edges (or relationships). A node represents an entity (for
example, a person or an organization). An edge represents a relationship between the two nodes that it connects
(for example, likes or friends).
Here are some features that make a graph database unique:
Edges are first-class entities in a graph database. They can have attributes or properties associated with them.
A single edge can flexibly connect multiple nodes in a graph database.
You can express pattern matching and multi-hop navigation queries easily.
You can express transitive closure and polymorphic queries easily.
Graph relationships and graph query capabilities are integrated into Transact-SQL and receive the benefits of
using the SQL Server database engine as the foundational database management system. Graph features use
standard Transact-SQL queries enhanced with the graph MATCH operator to query the graph data.
A relational database can achieve anything that a graph database can. However, a graph database can make it
easier to express certain queries. Your decision to choose one over the other can be based on the following
factors:
You need to model hierarchical data where one node can have multiple parents, so you can't use the
hierarchyId data type.
Your application has complex many-to-many relationships. As the application evolves, new relationships are
added.
You need to analyze interconnected data and relationships.
You want to use graph-specific T-SQL search conditions such as SHORTEST_PATH.
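As a hedged sketch of the syntax (the Person and Likes tables and their columns are made up for this example, not taken from the article), node and edge tables are created with AS NODE and AS EDGE, and the MATCH operator queries the relationships:

-- Nodes and edges are regular tables created with AS NODE / AS EDGE.
CREATE TABLE dbo.Person (PersonId INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE dbo.Likes AS EDGE;

-- Connect two people with a "likes" relationship.
INSERT INTO dbo.Person VALUES (1, N'Ana'), (2, N'Borko');
INSERT INTO dbo.Likes ($from_id, $to_id)
SELECT p1.$node_id, p2.$node_id
FROM dbo.Person AS p1, dbo.Person AS p2
WHERE p1.PersonId = 1 AND p2.PersonId = 2;

-- Pattern matching with the MATCH operator: who likes whom?
SELECT p1.Name AS Fan, p2.Name AS LikedPerson
FROM dbo.Person AS p1, dbo.Likes AS l, dbo.Person AS p2
WHERE MATCH(p1-(l)->p2);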

JSON features
In Azure SQL products, you can parse and query data represented in JavaScript Object Notation (JSON) format,
and export your relational data as JSON text. JSON is a core feature of the SQL Server database engine.
JSON features enable you to put JSON documents in tables, transform relational data into JSON documents,
and transform JSON documents into relational data. You can use the standard Transact-SQL language enhanced
with JSON functions for parsing documents. You can also use non-clustered indexes, columnstore indexes, or
memory-optimized tables to optimize your queries.
JSON is a popular data format for exchanging data in modern web and mobile applications. JSON is also used
for storing semistructured data in log files or in NoSQL databases. Many REST web services return results
formatted as JSON text or accept data formatted as JSON.
Most Azure services have REST endpoints that return or consume JSON. These services include Azure Cognitive
Search, Azure Storage, and Azure Cosmos DB.
If you have JSON text, you can extract data from JSON or verify that JSON is properly formatted by using the
built-in functions JSON_VALUE, JSON_QUERY, and ISJSON. The other functions are:
JSON_MODIFY: Lets you update values inside JSON text.
OPENJSON: Can transform an array of JSON objects into a set of rows, for more advanced querying and
analysis. Any SQL query can be executed on the returned result set.
FOR JSON: Lets you format data stored in your relational tables as JSON text.

For more information, see How to work with JSON data.
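The following minimal sketch, with a made-up document and property names, shows these functions together: JSON_VALUE extracts a scalar, OPENJSON shreds an array into rows, and FOR JSON formats a result set as JSON text.

DECLARE @order NVARCHAR(MAX) =
    N'{"customer":"Contoso","items":[{"sku":"P-100","qty":2},{"sku":"P-200","qty":1}]}';

-- Extract a scalar value from the document.
SELECT JSON_VALUE(@order, '$.customer') AS CustomerName;

-- Shred the items array into relational rows.
SELECT j.sku, j.qty
FROM OPENJSON(@order, '$.items')
     WITH (sku NVARCHAR(20) '$.sku', qty INT '$.qty') AS j;

-- Format a relational result set as JSON text.
SELECT JSON_VALUE(@order, '$.customer') AS customer, 3 AS totalItems
FOR JSON PATH;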


You can use document models instead of the relational models in some specific scenarios:
High normalization of the schema doesn't bring significant benefits because you access all the fields of the
objects at once, or you never update normalized parts of the objects. However, the normalized model
increases the complexity of your queries because you need to join a large number of tables to get the data.
You're working with applications that natively use JSON documents for communication or data models, and
you don't want to introduce more layers that transform relational data into JSON and vice versa.
You need to simplify your data model by denormalizing child tables or Entity-Object-Value patterns.
You need to load or export data stored in JSON format without an additional tool that parses the data.

XML features
XML features enable you to store and index XML data in your database and use native XQuery/XPath operations
to work with XML data. Azure SQL products have a specialized, built-in XML data type and query functions that
process XML data.
The SQL Server database engine provides a powerful platform for developing applications to manage
semistructured data. Support for XML is integrated into all the components of the database engine and includes:
The ability to store XML values natively in an XML data-type column that can be typed according to a
collection of XML schemas or left untyped. You can index the XML column.
The ability to specify an XQuery query against XML data stored in columns and variables of the XML type.
You can use XQuery functionalities in any Transact-SQL query that accesses a data model that you use in
your database.
Automatic indexing of all elements in XML documents by using the primary XML index. Or you can specify
the exact paths that should be indexed by using the secondary XML index.
OPENROWSET, which allows the bulk loading of XML data.
The ability to transform relational data into XML format.
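As a minimal sketch (the ProductCatalog table and its content are illustrative assumptions), the xml data type stores a document natively, and the value() and query() XQuery methods extract scalars and fragments from it:

-- Store an XML document in a native xml column (untyped in this example).
CREATE TABLE dbo.ProductCatalog
(
    ProductId INT PRIMARY KEY,
    Details XML NOT NULL
);

INSERT INTO dbo.ProductCatalog
VALUES (1, N'<product><name>Widget</name><price currency="USD">9.99</price></product>');

-- Extract a scalar value and an XML fragment with XQuery/XPath.
SELECT ProductId,
       Details.value('(/product/name)[1]', 'nvarchar(100)') AS ProductName,
       Details.query('/product/price') AS PriceFragment
FROM dbo.ProductCatalog;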
You can use document models instead of the relational models in some specific scenarios:
High normalization of the schema doesn't bring significant benefits because you access all the fields of the
objects at once, or you never update normalized parts of the objects. However, the normalized model
increases the complexity of your queries because you need to join a large number of tables to get the data.
You're working with applications that natively use XML documents for communication or data models, and
you don't want to introduce more layers that transform relational data into XML and vice versa.
You need to simplify your data model by denormalizing child tables or Entity-Object-Value patterns.
You need to load or export data stored in XML format without an additional tool that parses the data.

Spatial features
Spatial data represents information about the physical location and shape of objects. These objects can be point
locations or more complex objects such as countries/regions, roads, or lakes.
Azure SQL supports two spatial data types:
The geometry type represents data in a Euclidean (flat) coordinate system.
The geography type represents data in a round-earth coordinate system.
Spatial features in Azure SQL enable you to store geometrical and geographical data. Spatial objects in Azure SQL
include Point, LineString, and Polygon. Azure SQL also provides specialized spatial indexes
that you can use to improve the performance of your spatial queries.
Spatial support is a core feature of the SQL Server database engine.
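A minimal sketch, assuming a hypothetical Stores table: two geography points are stored with the WGS 84 spatial reference (SRID 4326), and STDistance returns the distance between them in meters.

CREATE TABLE dbo.Stores
(
    StoreId INT PRIMARY KEY,
    Location GEOGRAPHY NOT NULL
);

-- geography::Point takes latitude, longitude, and an SRID (4326 = WGS 84).
INSERT INTO dbo.Stores VALUES
    (1, geography::Point(47.6062, -122.3321, 4326)),
    (2, geography::Point(45.5152, -122.6784, 4326));

-- Distance in meters between the two stores.
SELECT a.StoreId AS FromStore, b.StoreId AS ToStore,
       a.Location.STDistance(b.Location) AS DistanceInMeters
FROM dbo.Stores AS a
JOIN dbo.Stores AS b ON a.StoreId = 1 AND b.StoreId = 2;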
Key-value pairs
Azure SQL products don't have specialized types or structures that support key-value pairs, because key-value
structures can be natively represented as standard relational tables:

CREATE TABLE Collection (
    Id int identity primary key,
    Data nvarchar(max)
)

You can customize this key-value structure to fit your needs without any constraints. As an example, the value
can be an XML document instead of the nvarchar(max) type. If the value is a JSON document, you can use a
CHECK constraint that verifies the validity of JSON content. You can put any number of values related to one key
in the additional columns. For example:
Add computed columns and indexes to simplify and optimize data access.
Define the table as a memory-optimized, schema-only table to get better performance.
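For example, a hedged variation of the Collection table above (the column, constraint, and index names are illustrative) adds an ISJSON check and an indexed computed column extracted from the JSON value:

CREATE TABLE dbo.JsonCollection
(
    Id INT IDENTITY PRIMARY KEY,
    Data NVARCHAR(MAX) NOT NULL
        CONSTRAINT CK_JsonCollection_Data CHECK (ISJSON(Data) = 1),
    -- Computed column derived from the JSON value so it can be indexed.
    CustomerName AS JSON_VALUE(Data, '$.customer')
);

CREATE INDEX IX_JsonCollection_CustomerName ON dbo.JsonCollection (CustomerName);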
For an example of how a relational model can be effectively used as a key-value pair solution in practice, see
How bwin is using SQL Server 2016 In-Memory OLTP to achieve unprecedented performance and scale. In this
case study, bwin used a relational model for its ASP.NET caching solution to achieve 1.2 million batches per
second.

Next steps
Multi-model capabilities are core SQL Server database engine features that are shared among Azure SQL
products. To learn more about these features, see these articles:
Graph processing with SQL Server and Azure SQL Database
JSON data in SQL Server
Spatial data in SQL Server
XML data in SQL Server
Key-value store performance in Azure SQL Database
Optimize performance by using in-memory
technologies in Azure SQL Database and Azure
SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In-memory technologies enable you to improve performance of your application, and potentially reduce cost of
your database.

When to use in-memory technologies


By using in-memory technologies, you can achieve performance improvements with various workloads:
Transactional (online transactional processing (OLTP)) where most of the requests read or update smaller
set of data (for example, CRUD operations).
Analytic (online analytical processing (OLAP)) where most of the queries have complex calculations for the
reporting purposes, with a certain number of queries that load and append data to the existing tables (so
called bulk-load), or delete the data from the tables.
Mixed (hybrid transaction/analytical processing (HTAP)) where both OLTP and OLAP queries are executed on
the same set of data.
In-memory technologies can improve the performance of these workloads by keeping the data that should be
processed in memory, using native compilation of queries, or advanced processing such as batch
processing and SIMD instructions that are available on the underlying hardware.

Overview
Azure SQL Database and Azure SQL Managed Instance have the following in-memory technologies:
In-Memory OLTP increases number of transactions per second and reduces latency for transaction
processing. Scenarios that benefit from In-Memory OLTP are: high-throughput transaction processing such
as trading and gaming, data ingestion from events or IoT devices, caching, data load, and temporary table
and table variable scenarios.
Clustered columnstore indexes reduce your storage footprint (up to 10 times) and improve performance for
reporting and analytics queries. You can use it with fact tables in your data marts to fit more data in your
database and improve performance. Also, you can use it with historical data in your operational database to
archive and be able to query up to 10 times more data.
Nonclustered columnstore indexes for HTAP help you to gain real-time insights into your business through
querying the operational database directly, without the need to run an expensive extract, transform, and load
(ETL) process and wait for the data warehouse to be populated. Nonclustered columnstore indexes allow fast
execution of analytics queries on the OLTP database, while reducing the impact on the operational workload.
Memory-optimized clustered columnstore indexes for HTAP enable you to perform fast transaction
processing, and to concurrently run analytics queries very quickly on the same data.
Both columnstore indexes and In-Memory OLTP have been part of the SQL Server product since 2012 and
2014, respectively. Azure SQL Database, Azure SQL Managed Instance, and SQL Server share the same
implementation of in-memory technologies.
Benefits of in-memory technology
Because of the more efficient query and transaction processing, in-memory technologies also help you to
reduce cost. You typically don't need to upgrade the pricing tier of the database to achieve performance gains. In
some cases, you might even be able to reduce the pricing tier, while still seeing performance improvements with
in-memory technologies.
By using In-Memory OLTP, Quorum Business Solutions was able to double their workload while improving DTUs
by 70%. For more information, see the blog post: In-Memory OLTP.

NOTE
In-memory technologies are available in the Premium and Business Critical tiers.

This article describes aspects of In-Memory OLTP and columnstore indexes that are specific to Azure SQL
Database and Azure SQL Managed Instance, and also includes samples:
You'll see the impact of these technologies on storage and data size limits.
You'll see how to manage the movement of databases that use these technologies between the different
pricing tiers.
You'll see two samples that illustrate the use of In-Memory OLTP, as well as columnstore indexes.
For more information about in-memory in SQL Server, see:
In-Memory OLTP Overview and Usage Scenarios (includes references to customer case studies and
information to get started)
Documentation for In-Memory OLTP
Columnstore Indexes Guide
Hybrid transactional/analytical processing (HTAP), also known as real-time operational analytics

In-Memory OLTP
In-Memory OLTP technology provides extremely fast data access operations by keeping all data in memory. It
also uses specialized indexes, native compilation of queries, and latch-free data-access to improve performance
of the OLTP workload. There are two ways to organize your In-Memory OLTP data:
Memory-optimized rowstore format where every row is a separate memory object. This is a classic
In-Memory OLTP format optimized for high-performance OLTP workloads. There are two types of
memory-optimized tables that can be used in the memory-optimized rowstore format (see the sketch after this list):
Durable tables (SCHEMA_AND_DATA) where the rows placed in memory are preserved after server
restart. This type of table behaves like a traditional rowstore table with the additional benefits of in-
memory optimizations.
Non-durable tables (SCHEMA_ONLY) where the rows are not preserved after restart. This type of
table is designed for temporary data (for example, replacement of temp tables), or tables where you
need to quickly load data before you move it to some persisted table (so-called staging tables).
Memory-optimized columnstore format where data is organized in a columnar format. This structure
is designed for HTAP scenarios where you need to run analytic queries on the same data structure where
your OLTP workload is running.
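As a minimal sketch of the two rowstore durability options (the table and column names are made up for this example), a Premium or Business Critical database could define one durable and one non-durable memory-optimized table as follows; memory-optimized tables use nonclustered (or hash) indexes rather than clustered ones.

-- Durable memory-optimized table: rows survive a restart (SCHEMA_AND_DATA).
CREATE TABLE dbo.TradeOrders
(
    OrderId INT IDENTITY PRIMARY KEY NONCLUSTERED,
    Symbol NVARCHAR(10) NOT NULL,
    Quantity INT NOT NULL,
    PlacedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Non-durable table: the schema survives a restart, the rows do not (SCHEMA_ONLY).
-- Useful as a staging table or as a replacement for a temp table.
CREATE TABLE dbo.TradeOrdersStaging
(
    OrderId INT IDENTITY PRIMARY KEY NONCLUSTERED,
    Symbol NVARCHAR(10) NOT NULL,
    Quantity INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);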
NOTE
In-Memory OLTP technology is designed for data structures that can fully reside in memory. Since in-memory
data cannot be offloaded to disk, make sure that you are using a database that has enough memory. See Data size and
storage cap for In-Memory OLTP for more details.

A quick primer on In-Memory OLTP: Quickstart 1: In-Memory OLTP Technologies for Faster T-SQL
Performance.
There is a programmatic way to understand whether a given database supports In-Memory OLTP. You can
execute the following Transact-SQL query:

SELECT DatabasePropertyEx(DB_NAME(), 'IsXTPSupported');

If the query returns 1, In-Memory OLTP is supported in this database. The following queries identify all objects
that need to be removed before a database can be downgraded to General Purpose, Standard, or Basic:

SELECT * FROM sys.tables WHERE is_memory_optimized=1
SELECT * FROM sys.table_types WHERE is_memory_optimized=1
SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1

Data size and storage cap for In-Memory OLTP


In-Memory OLTP includes memory-optimized tables, which are used for storing user data. These tables are
required to fit in memory. Because you manage memory directly in SQL Database, we have the concept of a
quota for user data. This idea is referred to as In-Memory OLTP storage.
Each supported single database pricing tier and each elastic pool pricing tier includes a certain amount of In-
Memory OLTP storage.
DTU-based resource limits - single database
DTU-based resource limits - elastic pools
vCore-based resource limits - single databases
vCore-based resource limits - elastic pools
vCore-based resource limits - managed instance
The following items count toward your In-Memory OLTP storage cap:
Active user data rows in memory-optimized tables and table variables. Note that old row versions don't
count toward the cap.
Indexes on memory-optimized tables.
Operational overhead of ALTER TABLE operations.
If you hit the cap, you receive an out-of-quota error, and you are no longer able to insert or update data. To
mitigate this error, delete data or increase the pricing tier of the database or pool.
For details about monitoring In-Memory OLTP storage utilization and configuring alerts when you almost hit
the cap, see Monitor in-memory storage.
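As a hedged illustration (assuming the sys.dm_db_resource_stats view and its xtp_storage_percent column are available for your database), the following query shows recent In-Memory OLTP storage utilization as a percentage of the cap:

-- Recent In-Memory OLTP storage utilization, as a percentage of the cap.
SELECT TOP (5) end_time, xtp_storage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;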
About elastic pools
With elastic pools, the In-Memory OLTP storage is shared across all databases in the pool. Therefore, the usage
in one database can potentially affect other databases. Two mitigations for this are:
Configure a Max-eDTU or MaxvCore for databases that is lower than the eDTU or vCore count for the pool as
a whole. This maximum caps the In-Memory OLTP storage utilization, in any database in the pool, to the size
that corresponds to the eDTU count.
Configure a Min-eDTU or MinvCore that is greater than 0. This minimum guarantees that each database in
the pool has the amount of available In-Memory OLTP storage that corresponds to the configured Min-eDTU
or MinvCore.
Changing service tiers of databases that use In-Memory OLTP technologies
You can always upgrade your database or instance to a higher tier, such as from General Purpose to Business
Critical (or Standard to Premium). The available functionality and resources only increase.
But downgrading the tier can negatively impact your database. The impact is especially apparent when you
downgrade from Business Critical to General Purpose (or Premium to Standard or Basic) when your database
contains In-Memory OLTP objects. Memory-optimized tables are unavailable after the downgrade (even if they
remain visible). The same considerations apply when you're lowering the pricing tier of an elastic pool, or
moving a database with in-memory technologies, into a General Purpose, Standard, or Basic elastic pool.

IMPORTANT
In-Memory OLTP isn't supported in the General Purpose, Standard or Basic tier. Therefore, it isn't possible to move a
database that has any In-Memory OLTP objects to one of these tiers.

Before you downgrade the database to General Purpose, Standard, or Basic, remove all memory-optimized
tables and table types, as well as all natively compiled T-SQL modules.
Scaling-down resources in the Business Critical tier: Data in memory-optimized tables must fit within the In-
Memory OLTP storage that is associated with the tier of the database or the managed instance, or that is available
in the elastic pool. If you try to scale down the tier or move the database into a pool that doesn't have enough
available In-Memory OLTP storage, the operation fails.

In-memory columnstore
In-memory columnstore technology enables you to store and query large amounts of data in your tables.
Columnstore technology uses a column-based storage format and batch query processing to achieve gains of
up to 10 times the query performance of traditional row-oriented storage in OLAP workloads, and up to
10 times the data compression over the uncompressed data size. There are two columnstore models that you
can use to organize your data (an illustrative example follows this list):
Clustered columnstore, where all data in the table is organized in columnar format. In this model, all
rows in the table are stored in a highly compressed columnar format that enables you to execute fast
analytical queries and reports on the table. Depending on the nature of your data, its size might decrease
by 10x-100x. The clustered columnstore model also enables fast ingestion of large amounts of data
(bulk load), because large batches of more than 100K rows are compressed before they are stored on disk.
This model is a good choice for classic data warehouse scenarios.
Non-clustered columnstore, where the data is stored in a traditional rowstore table and an index in
columnstore format is used for analytical queries. This model enables Hybrid Transactional-Analytical
Processing (HTAP): the ability to run performant real-time analytics on a transactional workload. OLTP
queries run against the rowstore table, which is optimized for accessing a small set of rows, while OLAP
queries run against the columnstore index, which is a better choice for scans and analytics. The query
optimizer dynamically chooses the rowstore or columnstore format based on the query. Non-clustered
columnstore indexes don't decrease the size of the data, because the original data set is kept unchanged in
the rowstore table. However, the additional columnstore index should be an order of magnitude smaller
than an equivalent B-tree index.
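The following statements illustrate the two models. This is a minimal sketch; the table names dbo.FactSales and dbo.Orders and the column list are hypothetical placeholders for your own schema.

-- Clustered columnstore index: the entire table is stored in columnar format.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;

-- Nonclustered columnstore index: the rowstore table is kept as-is, and a
-- columnstore index is added for analytical (HTAP) queries.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders
    ON dbo.Orders (OrderDate, CustomerID, Quantity, UnitPrice);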
NOTE
In-memory columnstore technology keeps in memory only the data that is needed for processing; data that cannot fit in
memory is stored on disk. Therefore, the amount of data in in-memory columnstore structures can exceed the amount of
available memory.

In-depth video about the technology:


Columnstore Index: In-memory Analytics Videos from Ignite 2016
Data size and storage for columnstore indexes
Columnstore indexes aren't required to fit in memory. Therefore, the only cap on the size of the indexes is the
maximum overall database size, which is documented in the DTU-based purchasing model and vCore-based
purchasing model articles.
When you use clustered columnstore indexes, columnar compression is used for the base table storage. This
compression can significantly reduce the storage footprint of your user data, which means that you can fit more
data in the database. And the compression can be further increased with columnar archival compression. The
amount of compression that you can achieve depends on the nature of the data, but 10 times the compression
is not uncommon.
For example, if you have a database with a maximum size of 1 terabyte (TB) and you achieve 10 times the
compression by using columnstore indexes, you can fit a total of 10 TB of user data in the database.
When you use nonclustered columnstore indexes, the base table is still stored in the traditional rowstore format.
Therefore, the storage savings aren't as significant as with clustered columnstore indexes. However, if you're
replacing a number of traditional nonclustered indexes with a single columnstore index, you can still see an
overall savings in the storage footprint for the table.
Changing service tiers of databases containing Columnstore indexes
Downgrading a single database to the Basic or Standard tier might not be possible if your target tier is below S3.
Columnstore indexes are supported only on the Business Critical/Premium pricing tier and on the Standard tier,
S3 and above; they are not supported on the Basic tier. When you downgrade your database to an unsupported
tier or level, your columnstore index becomes unavailable. The system maintains your columnstore index, but it
never leverages the index. If you later upgrade back to a supported tier or level, your columnstore index is
immediately ready to be leveraged again.
If you have a clustered columnstore index, the whole table becomes unavailable after the downgrade.
Therefore we recommend that you drop all clustered columnstore indexes before you downgrade your database
to an unsupported tier or level.
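To find the columnstore indexes in a database before such a downgrade, you can query the sys.indexes catalog view. This is a minimal sketch that lists both clustered and nonclustered columnstore indexes.

SELECT OBJECT_NAME(object_id) AS table_name,
       name AS index_name,
       type_desc
FROM sys.indexes
WHERE type_desc IN ('CLUSTERED COLUMNSTORE', 'NONCLUSTERED COLUMNSTORE');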

NOTE
SQL Managed Instance supports Columnstore indexes in all tiers.

Next steps
Quickstart 1: In-Memory OLTP Technologies for faster T-SQL Performance
Use In-Memory OLTP in an existing Azure SQL application
Monitor In-Memory OLTP storage for In-Memory OLTP
Try in-memory features
Additional resources
Deeper information
Learn how Quorum doubles key database's workload while lowering DTU by 70% with In-Memory OLTP in
SQL Database
In-Memory OLTP Blog Post
Learn about In-Memory OLTP
Learn about columnstore indexes
Learn about real-time operational analytics
See Common Workload Patterns and Migration Considerations (which describes workload patterns where
In-Memory OLTP commonly provides significant performance gains)
Application design
In-Memory OLTP (in-memory optimization)
Use In-Memory OLTP in an existing Azure SQL application
Tools
Azure portal
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
Getting started with temporal tables in Azure SQL
Database and Azure SQL Managed Instance
7/12/2022 • 7 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Temporal tables are a programmability feature of Azure SQL Database and Azure SQL Managed Instance that
allows you to track and analyze the full history of changes in your data, without the need for custom coding.
Temporal tables keep data closely related to time context so that stored facts can be interpreted as valid only
within the specific period. This property of temporal tables allows for efficient time-based analysis and getting
insights from data evolution.

Temporal scenario
This article illustrates the steps to utilize temporal tables in an application scenario. Suppose that you want to
track user activity on a new website that is being developed from scratch, or on an existing website that you
want to extend with user activity analytics. In this simplified example, we assume that the number of visited web
pages during a period of time is an indicator that needs to be captured and monitored in the website database
that is hosted on Azure SQL Database or Azure SQL Managed Instance. The goal of the historical analysis of user
activity is to get inputs to redesign the website and provide a better experience for its visitors.
The database model for this scenario is very simple: the user activity metric is represented with a single integer
field, PagesVisited, and is captured along with basic information on the user profile. Additionally, for time-based
analysis, you would keep a series of rows for each user, where every row represents the number of pages a
particular user visited within a specific period of time.

Fortunately, you do not need to put any effort into your app to maintain this activity information. With temporal
tables, this process is automated, giving you full flexibility during website design and more time to focus on the
data analysis itself. The only thing you have to do is to ensure that the WebsiteUserInfo table is configured as
temporal system-versioned. The exact steps to utilize temporal tables in this scenario are described below.

Step 1: Configure tables as temporal


Depending on whether you are starting new development or upgrading an existing application, you will either
create temporal tables or modify existing ones by adding temporal attributes. In the general case, your scenario can
be a mix of these two options. Perform these actions using SQL Server Management Studio (SSMS), SQL Server
Data Tools (SSDT), Azure Data Studio, or any other Transact-SQL development tool.
IMPORTANT
It is recommended that you always use the latest version of Management Studio to remain synchronized with updates to
Azure SQL Database and Azure SQL Managed Instance. Update SQL Server Management Studio.

Create new table


Use the context menu item "New System-Versioned Table" in SSMS Object Explorer to open the query editor with a
temporal table template script, and then use "Specify Values for Template Parameters" (Ctrl+Shift+M) to
populate the template:

In SSDT, choose the "Temporal Table (System-Versioned)" template when adding new items to the database project.
That opens the table designer and enables you to easily specify the table layout:
You can also create a temporal table by specifying the Transact-SQL statements directly, as shown in the example
below. Note that the mandatory elements of every temporal table are the PERIOD definition and the
SYSTEM_VERSIONING clause with a reference to another user table that will store historical row versions:

CREATE TABLE WebsiteUserInfo


(
[UserID] int NOT NULL PRIMARY KEY CLUSTERED
, [UserName] nvarchar(100) NOT NULL
, [PagesVisited] int NOT NULL
, [ValidFrom] datetime2 (0) GENERATED ALWAYS AS ROW START
, [ValidTo] datetime2 (0) GENERATED ALWAYS AS ROW END
, PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));

When you create a system-versioned temporal table, the accompanying history table with the default
configuration is automatically created. The default history table contains a clustered B-tree index on the period
columns (end, start) with page compression enabled. This configuration is optimal for the majority of scenarios
in which temporal tables are used, especially for data auditing.
In this particular case, we aim to perform time-based trend analysis over a longer data history and with bigger
data sets, so the storage choice for the history table is a clustered columnstore index. A clustered columnstore
index provides very good compression and performance for analytical queries. Temporal tables give you the
flexibility to configure indexes on the current and history tables completely independently.

NOTE
Columnstore indexes are available in the Business Critical, General Purpose, and Premium tiers and in the Standard tier, S3
and above.

The following script shows how the default index on the history table can be changed to a clustered columnstore index:

CREATE CLUSTERED COLUMNSTORE INDEX IX_WebsiteUserInfoHistory


ON dbo.WebsiteUserInfoHistory
WITH (DROP_EXISTING = ON);

Temporal tables are represented in Object Explorer with a specific icon for easier identification, while the
history table is displayed as a child node.
Alter existing table to temporal
Let's cover the alternative scenario in which the WebsiteUserInfo table already exists, but was not designed to
keep a history of changes. In this case, you can simply extend the existing table to become temporal, as shown in
the following example:

ALTER TABLE WebsiteUserInfo


ADD
ValidFrom datetime2 (0) GENERATED ALWAYS AS ROW START HIDDEN
constraint DF_ValidFrom DEFAULT DATEADD(SECOND, -1, SYSUTCDATETIME())
, ValidTo datetime2 (0) GENERATED ALWAYS AS ROW END HIDDEN
constraint DF_ValidTo DEFAULT '9999.12.31 23:59:59.99'
, PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

ALTER TABLE WebsiteUserInfo


SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));
GO

CREATE CLUSTERED COLUMNSTORE INDEX IX_WebsiteUserInfoHistory


ON dbo.WebsiteUserInfoHistory
WITH (DROP_EXISTING = ON);

Step 2: Run your workload regularly


The main advantage of temporal tables is that you do not need to change or adjust your website in any way to
perform change tracking. Once created, temporal tables transparently persist previous row versions every time
you perform modifications on your data.
In order to leverage automatic change tracking for this particular scenario, let's just update the PagesVisited
column every time a user ends their session on the website:
UPDATE WebsiteUserInfo SET [PagesVisited] = 5
WHERE [UserID] = 1;

It is important to note that the update query doesn't need to know the exact time when the actual operation
occurred, nor how historical data will be preserved for future analysis. Both aspects are automatically handled by
Azure SQL Database and Azure SQL Managed Instance. The following diagram illustrates how history data is
generated on every update.

Step 3: Perform historical data analysis


Now that temporal system-versioning is enabled, historical data analysis is just one query away. In
this article, we provide a few examples that address common analysis scenarios; to learn all the details, explore
the various options introduced with the FOR SYSTEM_TIME clause.
To see the top 10 users ordered by the number of visited web pages as of an hour ago, run this query:

DECLARE @hourAgo datetime2 = DATEADD(HOUR, -1, SYSUTCDATETIME());


SELECT TOP 10 * FROM dbo.WebsiteUserInfo FOR SYSTEM_TIME AS OF @hourAgo
ORDER BY PagesVisited DESC

You can easily modify this query to analyze the site visits as of a day ago, a month ago or at any point in the past
you wish.
To perform basic statistical analysis for the previous day, use the following example:

DECLARE @twoDaysAgo datetime2 = DATEADD(DAY, -2, SYSUTCDATETIME());


DECLARE @aDayAgo datetime2 = DATEADD(DAY, -1, SYSUTCDATETIME());

SELECT UserID, SUM (PagesVisited) as TotalVisitedPages, AVG (PagesVisited) as AverageVisitedPages,


MAX (PagesVisited) AS MaxVisitedPages, MIN (PagesVisited) AS MinVisitedPages,
STDEV (PagesVisited) as StDevVisitedPages
FROM dbo.WebsiteUserInfo
FOR SYSTEM_TIME BETWEEN @twoDaysAgo AND @aDayAgo
GROUP BY UserId

To search for activities of a specific user, within a period of time, use the CONTAINED IN clause:
DECLARE @hourAgo datetime2 = DATEADD(HOUR, -1, SYSUTCDATETIME());
DECLARE @twoHoursAgo datetime2 = DATEADD(HOUR, -2, SYSUTCDATETIME());
SELECT * FROM dbo.WebsiteUserInfo
FOR SYSTEM_TIME CONTAINED IN (@twoHoursAgo, @hourAgo)
WHERE [UserID] = 1;

Graphic visualization is especially convenient for temporal queries because you can show trends and usage patterns
in an intuitive way very easily.

Evolving table schema


Typically, you will need to change the temporal table schema while you are doing app development. To do so,
simply run regular ALTER TABLE statements, and Azure SQL Database or Azure SQL Managed Instance
propagates the changes to the history table appropriately. The following script shows how you can add an
additional attribute for tracking:

/*Add new column for tracking source IP address*/


ALTER TABLE dbo.WebsiteUserInfo
ADD [IPAddress] varchar(128) NOT NULL CONSTRAINT DF_Address DEFAULT 'N/A';

Similarly, you can change column definition while your workload is active:

/*Increase the length of name column*/


ALTER TABLE dbo.WebsiteUserInfo
ALTER COLUMN UserName nvarchar(256) NOT NULL;

Finally, you can remove a column that you do not need anymore.

/*Drop unnecessary column */


ALTER TABLE dbo.WebsiteUserInfo
DROP COLUMN TemporaryColumn;
Alternatively, use the latest SSDT to change the temporal table schema while you are connected to the database (online
mode) or as part of the database project (offline mode).

Controlling retention of historical data


With system-versioned temporal tables, the history table may increase the database size more than regular
tables. A large and ever-growing history table can become an issue both due to pure storage costs and because it
imposes a performance tax on temporal querying. Hence, developing a data retention policy for managing data
in the history table is an important aspect of planning and managing the lifecycle of every temporal table. With
Azure SQL Database and Azure SQL Managed Instance, you have the following approaches for managing
historical data in the temporal table (a minimal cleanup sketch follows this list):
Table Partitioning
Custom Cleanup Script
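As an illustration of the custom cleanup approach, the following sketch removes history rows older than six months for the WebsiteUserInfo table used in this article. It assumes system versioning can be briefly turned off inside a transaction so that history rows can be deleted directly; the Custom Cleanup Script article linked above describes the fully supported pattern.

BEGIN TRANSACTION;
ALTER TABLE dbo.WebsiteUserInfo SET (SYSTEM_VERSIONING = OFF);
DELETE FROM dbo.WebsiteUserInfoHistory
WHERE ValidTo < DATEADD(MONTH, -6, SYSUTCDATETIME());
ALTER TABLE dbo.WebsiteUserInfo
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WebsiteUserInfoHistory));
COMMIT TRANSACTION;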

Next steps
For more information on temporal tables, check out Temporal Tables.
Dynamically scale database resources with minimal
downtime
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and SQL Managed Instance enable you to dynamically add more resources to your
database with minimal downtime; however, there is a switchover period during which connectivity to the
database is lost for a short amount of time, which can be mitigated by using retry logic.

Overview
When demand for your app grows from a handful of devices and customers to millions, Azure SQL Database
and SQL Managed Instance scale on the fly with minimal downtime. Scalability is one of the most important
characteristics of platform as a service (PaaS) that enables you to dynamically add more resources to your
service when needed. Azure SQL Database enables you to easily change resources (CPU power, memory, IO
throughput, and storage) allocated to your databases.
You can mitigate performance issues due to increased usage of your application that cannot be fixed using
indexing or query rewrite methods. Adding more resources enables you to quickly react when your database
hits the current resource limits and needs more power to handle the incoming workload. Azure SQL Database
also enables you to scale-down the resources when they are not needed to lower the cost.
You don't need to worry about purchasing hardware and changing underlying infrastructure. Scaling a database
can be easily done via the Azure portal using a slider.

Azure SQL Database offers the DTU-based purchasing model and the vCore-based purchasing model, while
Azure SQL Managed Instance offers just the vCore-based purchasing model.
The DTU-based purchasing model offers a blend of compute, memory, and I/O resources in three service
tiers to support lightweight to heavyweight database workloads: Basic, Standard, and Premium. Performance
levels within each tier provide a different mix of these resources, to which you can add additional storage
resources.
The vCore-based purchasing model lets you choose the number of vCores, the amount of memory, and the
amount and speed of storage. This purchasing model offers three service tiers: General Purpose, Business
Critical, and Hyperscale.
The service tier, compute tier, and resource limits for a database, elastic pool, or managed instance can be
changed at any time. For example, you can build your first app on a single database using the serverless
compute tier and then change its service tier manually or programmatically at any time, to the provisioned
compute tier, to meet the needs of your solution.
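For example, a service objective change can be requested directly with Transact-SQL, and its progress tracked from the master database. This is only a minimal sketch; the database name WebsiteDB and the target objective are placeholders for your own values.

-- Run while connected to the database being changed or to master:
ALTER DATABASE [WebsiteDB] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

-- Run in master to track the progress of the scaling operation:
SELECT operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;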

NOTE
Notable exceptions where you cannot change the service tier of a database are:
Databases using features which are only available in the Business Critical / Premium service tiers, cannot be changed
to use the General Purpose / Standard service tier.
Databases originally created in the Hyperscale service tier cannot be migrated to other service tiers. If you migrate an
existing database in Azure SQL Database to the Hyperscale service tier, you can reverse migrate to the General
Purpose service tier within 45 days of the original migration to Hyperscale. If you wish to migrate the database to
another service tier, such as Business Critical, first reverse migrate to the General Purpose service tier, then perform a
further migration. Learn more in How to reverse migrate from Hyperscale.

You can adjust the resources allocated to your database by changing service objective, or scaling, to meet
workload demands. This also enables you to only pay for the resources that you need, when you need them.
Please refer to the note on the potential impact that a scale operation might have on an application.

NOTE
Dynamic scalability is different from autoscale. Autoscale is when a service scales automatically based on criteria, whereas
dynamic scalability allows for manual scaling with a minimal downtime. Single databases in Azure SQL Database can be
scaled manually, or in the case of the Serverless tier, set to automatically scale the compute resources. Elastic pools, which
allow databases to share resources in a pool, can currently only be scaled manually.

Azure SQL Database offers the ability to dynamically scale your databases:
With a single database, you can use either the DTU or vCore model to define the maximum amount of resources
that will be assigned to each database.
Elastic pools enable you to define a maximum resource limit per group of databases in the pool.
Azure SQL Managed Instance allows you to scale as well:
SQL Managed Instance uses the vCore model and enables you to define the maximum number of CPU cores and the
maximum storage allocated to your instance. All databases within the managed instance share the resources
allocated to the instance.

Impact of scale up or scale down operations


Initiating a scale up, or scale down action, in any of the flavors mentioned above, restarts the database engine
process, and moves it to a different virtual machine if needed. Moving the database engine process to a new
virtual machine is an online process during which you can continue using your existing Azure SQL Database
service. Once the target database engine is ready to process queries, open connections to the current database
engine will be terminated, and uncommitted transactions will be rolled back. New connections will be made to
the target database engine.

NOTE
It is not recommended to scale your managed instance if a long-running transaction, such as data import, data
processing jobs, index rebuild, etc., is running, or if you have any active connections on the instance. To prevent the
scaling from taking longer than usual to complete, you should scale the instance only after all long-running
operations have completed.
NOTE
You can expect a short connection break when the scale up/scale down process is finished. If you have implemented retry
logic for standard transient errors, you will not notice the failover.

Alternative scale methods


Scaling resources is the easiest and most effective way to improve the performance of your database without
changing either the database or the application code. In some cases, even the highest service tiers, compute sizes,
and performance optimizations might not handle your workload in a successful and cost-effective way. In that
case you have these additional options to scale your database:
Read scale-out is an available feature that provides a read-only replica of your data where you can
execute demanding read-only queries such as reports. A read-only replica handles your read-only
workload without affecting resource usage on your primary database.
Database sharding is a set of techniques that enables you to split your data into several databases and scale
them independently.

Next steps
For information about improving database performance by changing database code, see Find and apply
performance recommendations.
For information about letting built-in database intelligence optimize your database, see Automatic tuning.
For information about read scale-out in Azure SQL Database, see how to use read-only replicas to load
balance read-only query workloads.
For information about a Database sharding, see Scaling out with Azure SQL Database.
For an example of using scripts to monitor and scale a single database, see Use PowerShell to monitor and
scale a single SQL Database.
Use read-only replicas to offload read-only query
workloads
7/12/2022 • 11 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


As part of High Availability architecture, each single database, elastic pool database, and managed instance in
the Premium and Business Critical service tier is automatically provisioned with a primary read-write replica
and several secondary read-only replicas. The secondary replicas are provisioned with the same compute size as
the primary replica. The read scale-out feature allows you to offload read-only workloads using the compute
capacity of one of the read-only replicas, instead of running them on the read-write replica. This way, some
read-only workloads can be isolated from the read-write workloads, and will not affect their performance. The
feature is intended for the applications that include logically separated read-only workloads, such as analytics. In
the Premium and Business Critical service tiers, applications could gain performance benefits using this
additional capacity at no extra cost.
The read scale-out feature is also available in the Hyperscale service tier when at least one secondary replica is
added. Hyperscale secondary named replicas provide independent scaling, access isolation, workload isolation,
support for a variety of read scale-out scenarios, and other benefits. Multiple secondary HA replicas can be used
for load-balancing read-only workloads that require more resources than available on one secondary HA
replica.
The High Availability architecture of Basic, Standard, and General Purpose service tiers does not include any
replicas. The read scale-out feature is not available in these service tiers. However, geo-replicas can provide
similar functionality in these service tiers.
The following diagram illustrates the feature for Premium and Business Critical databases and managed
instances.
The read scale-out feature is enabled by default on new Premium, Business Critical, and Hyperscale databases.

NOTE
Read scale-out is always enabled in the Business Critical service tier of Managed Instance, and for Hyperscale databases
with at least one secondary replica.

If your SQL connection string is configured with ApplicationIntent=ReadOnly , the application will be redirected
to a read-only replica of that database or managed instance. For information on how to use the
ApplicationIntent property, see Specifying Application Intent.

If you wish to ensure that the application connects to the primary replica regardless of the ApplicationIntent
setting in the SQL connection string, you must explicitly disable read scale-out when creating the database or
when altering its configuration. For example, if you upgrade your database from Standard or General Purpose
tier to Premium or Business Critical and want to make sure all your connections continue to go to the primary
replica, disable read scale-out. For details on how to disable it, see Enable and disable read scale-out.

NOTE
Query Store and SQL Profiler features are not supported on read-only replicas.

Data consistency
Data changes made on the primary replica are persisted on read-only replicas synchronously or asynchronously
depending on replica type. However, for all replica types, reads from a read-only replica are always
asynchronous with respect to the primary. Within a session connected to a read-only replica, reads are always
transactionally consistent. Because data propagation latency is variable, different replicas can return data at
slightly different points in time relative to the primary and each other. If a read-only replica becomes unavailable
and a session reconnects, it may connect to a replica that is at a different point in time than the original replica.
Likewise, if an application changes data using a read-write session on the primary and immediately reads it
using a read-only session on a read-only replica, it is possible that the latest changes will not be immediately
visible.
Typical data propagation latency between the primary replica and read-only replicas varies in the range from
tens of milliseconds to single-digit seconds. However, there is no fixed upper bound on data propagation latency.
Conditions such as high resource utilization on the replica can increase latency substantially. Applications that
require guaranteed data consistency across sessions, or require committed data to be readable immediately
should use the primary replica.

NOTE
To monitor data propagation latency, see Monitoring and troubleshooting read-only replica.

Connect to a read-only replica


When you enable read scale-out for a database, the ApplicationIntent option in the connection string provided
by the client dictates whether the connection is routed to the write replica or to a read-only replica. Specifically, if
the ApplicationIntent value is ReadWrite (the default value), the connection will be directed to the read-write
replica. This is identical to the behavior when ApplicationIntent is not included in the connection string. If the
ApplicationIntent value is ReadOnly , the connection is routed to a read-only replica.

For example, the following connection string connects the client to a read-only replica (replacing the items in the
angle brackets with the correct values for your environment and dropping the angle brackets):

Server=tcp:<server>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadOnly;User ID=
<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;

Either of the following connection strings connects the client to a read-write replica (replacing the items in the
angle brackets with the correct values for your environment and dropping the angle brackets):

Server=tcp:<server>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadWrite;User ID=
<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;

Server=tcp:<server>.database.windows.net;Database=<mydatabase>;User ID=<myLogin>;Password=
<myPassword>;Trusted_Connection=False; Encrypt=True;

Verify that a connection is to a read-only replica


You can verify whether you are connected to a read-only replica by running the following query in the context of
your database. It will return READ_ONLY when you are connected to a read-only replica.

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');

NOTE
In Premium and Business Critical service tiers, only one of the read-only replicas is accessible at any given time.
Hyperscale supports multiple read-only replicas.

Monitoring and troubleshooting read-only replicas


When connected to a read-only replica, Dynamic Management Views (DMVs) reflect the state of the replica, and
can be queried for monitoring and troubleshooting purposes. The database engine provides multiple views to
expose a wide variety of monitoring data.
The following views are commonly used for replica monitoring and troubleshooting:

sys.dm_db_resource_stats: Provides resource utilization metrics for the last hour, including CPU, data IO, and log write utilization relative to service objective limits.
sys.dm_os_wait_stats: Provides aggregate wait statistics for the database engine instance.
sys.dm_database_replica_states: Provides replica health state and synchronization statistics. Redo queue size and redo rate serve as indicators of data propagation latency on the read-only replica.
sys.dm_os_performance_counters: Provides database engine performance counters.
sys.dm_exec_query_stats: Provides per-query execution statistics such as number of executions, CPU time used, etc.
sys.dm_exec_query_plan(): Provides cached query plans.
sys.dm_exec_sql_text(): Provides query text for a cached query plan.
sys.dm_exec_query_profiles: Provides real-time query progress while queries are in execution.
sys.dm_exec_query_plan_stats(): Provides the last known actual execution plan, including runtime statistics, for a query.
sys.dm_io_virtual_file_stats(): Provides storage IOPS, throughput, and latency statistics for all database files.
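For example, when connected to a read-only replica, the following minimal query shows the replica's recent resource utilization relative to its service objective limits.

SELECT TOP (10) end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;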

NOTE
The sys.resource_stats and sys.elastic_pool_resource_stats DMVs in the logical master database return
resource utilization data of the primary replica.

Monitoring read-only replicas with Extended Events


An extended event session cannot be created when connected to a read-only replica. However, in Azure SQL
Database, the definitions of database-scoped Extended Event sessions created and altered on the primary replica
replicate to read-only replicas, including geo-replicas, and capture events on read-only replicas.
An extended event session on a read-only replica that is based on a session definition from the primary replica
can be started and stopped independently of the primary replica. When an extended event session is dropped
on the primary replica, it is also dropped on all read-only replicas.
Transaction isolation level on read-only replicas
Queries that run on read-only replicas are always mapped to the snapshot transaction isolation level. Snapshot
isolation uses row versioning to avoid blocking scenarios where readers block writers.
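You can confirm this from a session connected to a read-only replica with the following minimal query; a returned value of 5 corresponds to snapshot isolation.

SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;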
In rare cases, if a snapshot isolation transaction accesses object metadata that has been modified in another
concurrent transaction, it may receive error 3961, "Snapshot isolation transaction failed in database '%.*ls'
because the object accessed by the statement has been modified by a DDL statement in another concurrent
transaction since the start of this transaction. It is disallowed because the metadata is not versioned. A
concurrent update to metadata can lead to inconsistency if mixed with snapshot isolation."
Long-running queries on read-only replicas
Queries running on read-only replicas need to access metadata for the objects referenced in the query (tables,
indexes, statistics, etc.) In rare cases, if object metadata is modified on the primary replica while a query holds a
lock on the same object on the read-only replica, the query can block the process that applies changes from the
primary replica to the read-only replica. If such a query were to run for a long time, it would cause the read-only
replica to be significantly out of sync with the primary replica. For replicas that are potential failover targets
(secondary replicas in Premium and Business Critical service tiers, Hyperscale HA replicas, and all geo-replicas),
this would also delay database recovery if a failover were to occur, causing longer than expected downtime.
If a long-running query on a read-only replica directly or indirectly causes this kind of blocking, it may be
automatically terminated to avoid excessive data latency and potential database availability impact. The session
will receive error 1219, "Your session has been disconnected because of a high priority DDL operation", or error
3947, "The transaction was aborted because the secondary compute failed to catch up redo. Retry the
transaction."

NOTE
If you receive error 3961, 1219, or 3947 when running queries against a read-only replica, retry the query. Alternatively,
avoid operations that modify object metadata (schema changes, index maintenance, statistics updates, etc.) on the
primary replica while long-running queries execute on secondary replicas.

TIP
In Premium and Business Critical service tiers, when connected to a read-only replica, the redo_queue_size and
redo_rate columns in the sys.dm_database_replica_states DMV may be used to monitor data synchronization process,
serving as indicators of data propagation latency on the read-only replica.
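A minimal query along those lines, run while connected to the read-only replica, might look like the following; the column names are those called out in the tip above.

SELECT redo_queue_size, redo_rate
FROM sys.dm_database_replica_states;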

Enable and disable read scale-out


Read scale-out is enabled by default on Premium, Business Critical, and Hyperscale service tiers. Read scale-out
cannot be enabled in Basic, Standard, or General Purpose service tiers. Read scale-out is automatically disabled
on Hyperscale databases configured with zero secondary replicas.
You can disable and re-enable read scale-out on single databases and elastic pool databases in the Premium or
Business Critical service tiers using the following methods.

NOTE
For single databases and elastic pool databases, the ability to disable read scale-out is provided for backward
compatibility. Read scale-out cannot be disabled on Business Critical managed instances.

Azure portal
You can manage the read scale-out setting on the Configure database blade.
PowerShell
IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
The Azure Resource Manager module will continue to receive bug fixes until at least December 2020. The arguments for
the commands in the Az module and in the Azure Resource Manager modules are substantially identical. For more
information about their compatibility, see Introducing the new Azure PowerShell Az module.

Managing read scale-out in Azure PowerShell requires the December 2016 Azure PowerShell release or newer.
For the newest PowerShell release, see Azure PowerShell.
You can disable or re-enable read scale-out in Azure PowerShell by invoking the Set-AzSqlDatabase cmdlet and
passing in the desired value ( Enabled or Disabled ) for the -ReadScale parameter.
To disable read scale-out on an existing database (replacing the items in the angle brackets with the correct
values for your environment and dropping the angle brackets):

Set-AzSqlDatabase -ResourceGroupName <resourceGroupName> -ServerName <serverName> -DatabaseName <databaseName> -ReadScale Disabled

To disable read scale-out on a new database (replacing the items in the angle brackets with the correct values for
your environment and dropping the angle brackets):

New-AzSqlDatabase -ResourceGroupName <resourceGroupName> -ServerName <serverName> -DatabaseName <databaseName> -ReadScale Disabled -Edition Premium

To re-enable read scale-out on an existing database (replacing the items in the angle brackets with the correct
values for your environment and dropping the angle brackets):

Set-AzSqlDatabase -ResourceGroupName <resourceGroupName> -ServerName <serverName> -DatabaseName <databaseName> -ReadScale Enabled

REST API
To create a database with read scale-out disabled, or to change the setting for an existing database, use the
following method with the readScale property set to Enabled or Disabled , as in the following sample request.

Method: PUT
URL:
https://management.azure.com/subscriptions/{SubscriptionId}/resourceGroups/{GroupName}/providers/Microsoft.S
ql/servers/{ServerName}/databases/{DatabaseName}?api-version= 2014-04-01-preview
Body: {
"properties": {
"readScale":"Disabled"
}
}

For more information, see Databases - Create or update.

Using the tempdb database on a read-only replica


The tempdb database on the primary replica is not replicated to the read-only replicas. Each replica has its own
tempdb database that is created when the replica is created. This ensures that tempdb is updateable and can be
modified during your query execution. If your read-only workload depends on using tempdb objects, you
should create these objects as part of the same workload, while connected to a read-only replica.
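For example, a read-only session can stage intermediate results in a session-scoped temporary table, because tempdb on the replica is writable. The table dbo.SalesOrders below is hypothetical and only stands in for your own schema.

SELECT TOP (1000) *
INTO #RecentOrders
FROM dbo.SalesOrders
ORDER BY OrderDate DESC;

SELECT COUNT(*) FROM #RecentOrders;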
Using read scale-out with geo-replicated databases
Geo-replicated secondary databases have the same High Availability architecture as primary databases. If you're
connecting to the geo-replicated secondary database with read scale-out enabled, your sessions with
ApplicationIntent=ReadOnly will be routed to one of the high availability replicas in the same way they are
routed on the primary writeable database. The sessions without ApplicationIntent=ReadOnly will be routed to
the primary replica of the geo-replicated secondary, which is also read-only.
In this fashion, creating a geo-replica can provide multiple additional read-only replicas for a read-write primary
database. Each additional geo-replica provides another set of read-only replicas. Geo-replicas can be created in
any Azure region, including the region of the primary database.

NOTE
There is no automatic round-robin or any other load-balanced routing between the replicas of a geo-replicated secondary
database, with the exception of a Hyperscale geo-replica with more than one HA replica. In that case, sessions with read-
only intent are distributed over all HA replicas of a geo-replica.

Feature support on read-only replicas


A list of the behavior of some features on read-only replicas is below:
Auditing on read-only replicas is automatically enabled. For further details about the hierarchy of the storage
folders, naming conventions, and log format, see SQL Database Audit Log Format.
Query Performance Insight relies on data from the Query Store, which currently does not track activity on the
read-only replica. Query Performance Insight will not show queries which execute on the read-only replica.
Automatic tuning relies on the Query Store, as detailed in the Automatic tuning paper. Automatic tuning only
works for workloads running on the primary replica.

Next steps
For information about SQL Database Hyperscale offering, see Hyperscale service tier.
Distributed transactions across cloud databases
7/12/2022 • 12 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Elastic database transactions for Azure SQL Database and Azure SQL Managed Instance allow you to run
transactions that span several databases. Elastic database transactions are available for .NET applications using
ADO.NET and integrate with the familiar programming experience using the System.Transactions classes. To get
the library, see .NET Framework 4.6.1 (Web Installer). Additionally, for managed instances, distributed transactions
are available in Transact-SQL.
On premises, such a scenario usually requires running Microsoft Distributed Transaction Coordinator (MSDTC).
Since MSDTC isn't available for Platform-as-a-Service applications in Azure, the ability to coordinate distributed
transactions has been directly integrated into SQL Database and SQL Managed Instance. Applications can
connect to any database to launch distributed transactions, and one of the databases or servers will
transparently coordinate the distributed transaction, as shown in the following figure.
In this document, the terms "distributed transactions" and "elastic database transactions" are considered synonyms
and are used interchangeably.

Common scenarios
Elastic database transactions enable applications to make atomic changes to data stored in several different
databases. Both SQL Database and SQL Managed Instance support client-side development experiences in C#
and .NET. A server-side experience (code written in stored procedures or server-side scripts) using Transact-SQL
is available for SQL Managed Instance only.
IMPORTANT
Running elastic database transactions between Azure SQL Database and Azure SQL Managed Instance is not supported.
Elastic database transaction can only span across a set of databases in SQL Database or a set databases across managed
instances.

Elastic database transactions target the following scenarios:


Multi-database applications in Azure: With this scenario, data is vertically partitioned across several
databases in SQL Database or SQL Managed Instance such that different kinds of data reside on different
databases. Some operations require changes to data, which is kept in two or more databases. The application
uses elastic database transactions to coordinate the changes across databases and ensure atomicity.
Sharded database applications in Azure: With this scenario, the data tier uses the Elastic Database client
library or self-sharding to horizontally partition the data across many databases in SQL Database or SQL
Managed Instance. One prominent use case is the need to perform atomic changes for a sharded multi-
tenant application when changes span tenants. Think for instance of a transfer from one tenant to another,
both residing on different databases. A second case is fine-grained sharding to accommodate capacity needs
for a large tenant, which in turn typically implies that some atomic operations need to stretch across several
databases used for the same tenant. A third case is atomic updates to reference data that are replicated
across databases. Atomic, transacted, operations along these lines can now be coordinated across several
databases. Elastic database transactions use two phase commit to ensure transaction atomicity across
databases. It's a good fit for transactions that involve fewer than 100 databases at a time within a single
transaction. These limits aren't enforced, but one should expect performance and success rates for elastic
database transactions to suffer when exceeding these limits.

Installation and migration


The capabilities for elastic database transactions are provided through updates to the .NET libraries
System.Data.dll and System.Transactions.dll. The DLLs ensure that two-phase commit is used where necessary to
ensure atomicity. To start developing applications using elastic database transactions, install .NET Framework
4.6.1 or a later version. When running on an earlier version of the .NET framework, transactions will fail to
promote to a distributed transaction and an exception will be raised.
After installation, you can use the distributed transaction APIs in System.Transactions with connections to SQL
Database and SQL Managed Instance. If you have existing MSDTC applications using these APIs, rebuild your
existing applications for .NET 4.6 after installing the 4.6.1 Framework. If your projects target .NET 4.6, they'll
automatically use the updated DLLs from the new Framework version and distributed transaction API calls in
combination with connections to SQL Database or SQL Managed Instance will now succeed.
Remember that elastic database transactions don't require installing MSDTC. Instead, elastic database
transactions are directly managed by and within the service. This significantly simplifies cloud scenarios since a
deployment of MSDTC isn't necessary to use distributed transactions with SQL Database or SQL Managed
Instance. Section 4 explains in more detail how to deploy elastic database transactions and the required .NET
framework together with your cloud applications to Azure.

.NET installation for Azure Cloud Services


Azure provides several offerings to host .NET applications. A comparison of the different offerings is available in
Azure App Service, Cloud Services, and Virtual Machines comparison. If the guest OS of the offering is smaller
than .NET 4.6.1 required for elastic transactions, you need to upgrade the guest OS to 4.6.1.
For Azure App Service, upgrades to the guest OS are currently not supported. For Azure Virtual Machines,
simply log into the VM and run the installer for the latest .NET framework. For Azure Cloud Services, you need
to include the installation of a newer .NET version into the startup tasks of your deployment. The concepts and
steps are documented in Install .NET on a Cloud Service Role.
Note that the installer for .NET 4.6.1 may require more temporary storage during the bootstrapping process on
Azure cloud services than the installer for .NET 4.6. To ensure a successful installation, you need to increase
temporary storage for your Azure cloud service in your ServiceDefinition.csdef file in the LocalResources section
and the environment settings of your startup task, as shown in the following sample:

<LocalResources>
...
<LocalStorage name="TEMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
<LocalStorage name="TMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
</LocalResources>
<Startup>
<Task commandLine="install.cmd" executionContext="elevated" taskType="simple">
<Environment>
...
<Variable name="TEMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TEMP']/@path" />
</Variable>
<Variable name="TMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TMP']/@path" />
</Variable>
</Environment>
</Task>
</Startup>

.NET development experience


Multi-database applications
The following sample code uses the familiar programming experience with .NET System.Transactions. The
TransactionScope class establishes an ambient transaction in .NET. (An "ambient transaction" is one that lives in
the current thread.) All connections opened within the TransactionScope participate in the transaction. If
different databases participate, the transaction is automatically elevated to a distributed transaction. The
outcome of the transaction is controlled by setting the scope to complete to indicate a commit.

using (var scope = new TransactionScope())
{
    using (var conn1 = new SqlConnection(connStrDb1))
    {
        conn1.Open();
        SqlCommand cmd1 = conn1.CreateCommand();
        cmd1.CommandText = string.Format("insert into T1 values(1)");
        cmd1.ExecuteNonQuery();
    }
    using (var conn2 = new SqlConnection(connStrDb2))
    {
        conn2.Open();
        var cmd2 = conn2.CreateCommand();
        cmd2.CommandText = string.Format("insert into T2 values(2)");
        cmd2.ExecuteNonQuery();
    }
    scope.Complete();
}

Sharded database applications


Elastic database transactions for SQL Database and SQL Managed Instance also support coordinating
distributed transactions where you use the OpenConnectionForKey method of the elastic database client library
to open connections for a scaled out data tier. Consider cases where you need to guarantee transactional
consistency for changes across several different sharding key values. Connections to the shards hosting the
different sharding key values are brokered using OpenConnectionForKey. In the general case, the connections
can be to different shards such that ensuring transactional guarantees requires a distributed transaction. The
following code sample illustrates this approach. It assumes that a variable called shardmap is used to represent
a shard map from the elastic database client library:

using (var scope = new TransactionScope())
{
    using (var conn1 = shardmap.OpenConnectionForKey(tenantId1, credentialsStr))
    {
        SqlCommand cmd1 = conn1.CreateCommand();
        cmd1.CommandText = string.Format("insert into T1 values(1)");
        cmd1.ExecuteNonQuery();
    }
    using (var conn2 = shardmap.OpenConnectionForKey(tenantId2, credentialsStr))
    {
        var cmd2 = conn2.CreateCommand();
        cmd2.CommandText = string.Format("insert into T1 values(2)");
        cmd2.ExecuteNonQuery();
    }
    scope.Complete();
}

Transact-SQL development experience


Server-side distributed transactions using Transact-SQL are available only for Azure SQL Managed Instance.
Distributed transactions can be executed only between managed instances that belong to the same Server Trust
Group. In this scenario, the managed instances need to use a linked server to reference each other.
The following sample Transact-SQL code uses BEGIN DISTRIBUTED TRANSACTION to start a distributed
transaction.
-- Configure the Linked Server
-- Add one Azure SQL Managed Instance as Linked Server
EXEC sp_addlinkedserver
    @server='RemoteServer', -- Linked server name
    @srvproduct='',
    @provider='sqlncli', -- SQL Server Native Client
    @datasrc='managed-instance-server.46e7afd5bc81.database.windows.net' -- SQL Managed Instance endpoint

-- Add credentials and options to this Linked Server
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'RemoteServer', -- Linked server name
    @useself = 'false',
    @rmtuser = '<login_name>', -- login
    @rmtpassword = '<secure_password>' -- password

USE AdventureWorks2012;
GO
SET XACT_ABORT ON;
GO
BEGIN DISTRIBUTED TRANSACTION;
-- Delete candidate from local instance.
DELETE AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
-- Delete candidate from remote instance.
DELETE RemoteServer.AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
COMMIT TRANSACTION;
GO

Combining .NET and Transact-SQL development experience


.NET applications that use the System.Transactions classes can combine the TransactionScope class with the Transact-SQL
statement BEGIN DISTRIBUTED TRANSACTION. Within a TransactionScope, an inner transaction that executes BEGIN
DISTRIBUTED TRANSACTION is explicitly promoted to a distributed transaction. Also, when a second
SqlConnection is opened within the TransactionScope, it is implicitly promoted to a distributed transaction.
Once the distributed transaction is started, all subsequent transaction requests, whether they come from .NET
or Transact-SQL, join the parent distributed transaction. As a consequence, all nested transaction scopes
initiated by a BEGIN statement end up in the same transaction, and COMMIT/ROLLBACK statements have the
following effect on the overall outcome:
A COMMIT statement does not have any effect on a transaction scope initiated by a BEGIN statement; that is, no
results are committed before the Complete() method is invoked on the TransactionScope object. If the
TransactionScope object is destroyed before being completed, all changes done within the scope are rolled
back.
A ROLLBACK statement causes the entire TransactionScope to roll back. Any attempts to enlist new transactions
within the TransactionScope fail afterwards, as do attempts to invoke Complete() on the TransactionScope
object.
Here is an example where a transaction is explicitly promoted to a distributed transaction with Transact-SQL.
using (TransactionScope s = new TransactionScope())
{
    using (SqlConnection conn = new SqlConnection(DB0_ConnectionString))
    {
        conn.Open();

        // Transaction is here explicitly promoted to distributed by the BEGIN statement
        Helper.ExecuteNonQueryOnOpenConnection(conn, "BEGIN DISTRIBUTED TRAN");
        // ...
    }

    using (SqlConnection conn2 = new SqlConnection(DB1_ConnectionString))
    {
        conn2.Open();
        // ...
    }

    s.Complete();
}

The following example shows a transaction that is implicitly promoted to a distributed transaction once the second
SqlConnection is opened within the TransactionScope.

using (TransactionScope s = new TransactionScope())
{
    using (SqlConnection conn = new SqlConnection(DB0_ConnectionString))
    {
        conn.Open();
        // ...
    }

    using (SqlConnection conn2 = new SqlConnection(DB1_ConnectionString))
    {
        // Because this is the second SqlConnection within the TransactionScope,
        // the transaction is implicitly promoted to distributed here.
        conn2.Open();
        Helper.ExecuteNonQueryOnOpenConnection(conn2, "BEGIN DISTRIBUTED TRAN");
        Helper.ExecuteNonQueryOnOpenConnection(conn2, lsQuery);
        // ...
    }

    s.Complete();
}

Transactions for SQL Database


Elastic database transactions are supported across different servers in Azure SQL Database. When transactions
cross server boundaries, the participating servers first need to be entered into a mutual communication
relationship. Once the communication relationship has been established, any database in either of the two servers
can participate in elastic transactions with databases from the other server. With transactions spanning more
than two servers, a communication relationship needs to be in place for every pair of servers.
Use the following PowerShell cmdlets to manage cross-server communication relationships for elastic database
transactions:
New-AzSqlServerCommunicationLink: Use this cmdlet to create a new communication relationship
between two servers in Azure SQL Database. The relationship is symmetric, which means both servers can
initiate transactions with the other server.
Get-AzSqlServerCommunicationLink: Use this cmdlet to retrieve existing communication relationships
and their properties.
Remove-AzSqlServerCommunicationLink: Use this cmdlet to remove an existing communication
relationship.

Transactions for SQL Managed Instance


Distributed transactions are supported across databases within multiple instances. When transactions cross
managed instance boundaries, the participating instances need to be in a mutual security and communication
relationship. This is done by creating a Server Trust Group, by using the Azure portal, Azure PowerShell, or the
Azure CLI. If the instances are not in the same virtual network, you must configure virtual network peering, and
network security group inbound and outbound rules must allow ports 5024 and 11000-12000 on all
participating virtual networks.

The following diagram shows a Server Trust Group with managed instances that can execute distributed
transactions with .NET or Transact-SQL:
Monitoring transaction status
Use Dynamic Management Views (DMVs) to monitor status and progress of your ongoing elastic database
transactions. All DMVs related to transactions are relevant for distributed transactions in SQL Database and SQL
Managed Instance. You can find the corresponding list of DMVs here: Transaction Related Dynamic Management
Views and Functions (Transact-SQL).
These DMVs are particularly useful:
sys.dm_tran_active_transactions: Lists currently active transactions and their status. The UOW (Unit Of
Work) column can identify the different child transactions that belong to the same distributed transaction. All
transactions within the same distributed transaction carry the same UOW value (see the example after this
list). For more information, see the DMV documentation.
sys.dm_tran_database_transactions : Provides additional information about transactions, such as
placement of the transaction in the log. For more information, see the DMV documentation.
sys.dm_tran_locks : Provides information about the locks that are currently held by ongoing transactions.
For more information, see the DMV documentation.
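The following minimal query illustrates the first of these views; child transactions that belong to the same distributed transaction share the same transaction_uow value.

SELECT transaction_id,
       name,
       transaction_begin_time,
       transaction_type,
       transaction_state,
       transaction_uow
FROM sys.dm_tran_active_transactions;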

Limitations
The following limitations currently apply to elastic database transactions in SQL Database:
Only transactions across databases in SQL Database are supported. Other X/Open XA resource providers and
databases outside of SQL Database can't participate in elastic database transactions. That means that elastic
database transactions can't stretch across on-premises SQL Server and Azure SQL Database. For distributed
transactions on premises, continue to use MSDTC.
Only client-coordinated transactions from a .NET application are supported. Server-side support for T-SQL
such as BEGIN DISTRIBUTED TRANSACTION is planned, but not yet available.
Transactions across WCF services aren't supported. For example, if a WCF service method executes a
transaction, enclosing the call within a transaction scope fails with a
System.ServiceModel.ProtocolException.
The following limitations currently apply to distributed transactions in SQL Managed Instance:
Only transactions across databases in managed instances are supported. Other X/Open XA resource
providers and databases outside of Azure SQL Managed Instance can't participate in distributed transactions.
That means that distributed transactions can't stretch across on-premises SQL Server and Azure SQL
Managed Instance. For distributed transactions on premises, continue to use MSDTC.
Transactions across WCF services aren't supported. For example, if a WCF service method executes a
transaction, enclosing the call within a transaction scope fails with a
System.ServiceModel.ProtocolException.
Azure SQL Managed Instance must be part of a Server Trust Group in order to participate in distributed
transactions.
Limitations of Server Trust Groups affect distributed transactions.
Managed instances that participate in distributed transactions need to have connectivity over private
endpoints (using a private IP address from the virtual network where they are deployed) and need to be
mutually referenced using private FQDNs. Client applications can use distributed transactions on private
endpoints. Additionally, in cases when Transact-SQL leverages linked servers referencing private endpoints,
client applications can use distributed transactions on public endpoints as well.

Next steps
For questions, reach out to us on the Microsoft Q&A question page for SQL Database.
For feature requests, add them to the SQL Database feedback forum or SQL Managed Instance forum.
Maintenance window
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


The maintenance window feature allows you to configure a maintenance schedule for Azure SQL Database and
Azure SQL Managed Instance resources, making impactful maintenance events predictable and less disruptive
for your workload.

NOTE
The maintenance window feature only protects from planned impact from upgrades or scheduled maintenance. It does
not protect from all failover causes; exceptions that may cause short connection interruptions outside of a maintenance
window include hardware failures, cluster load balancing, and database reconfigurations due to events like a change in
database Service Level Objective.

Advance notifications (Preview) are available for databases configured to use a non-default maintenance
window. Advance notifications enable customers to configure notifications to be sent up to 24 hours in advance
of any planned event.

Overview
Azure periodically performs planned maintenance of SQL Database and SQL Managed Instance resources.
During an Azure SQL maintenance event, databases are fully available but can be subject to short reconfigurations
within the respective availability SLAs for SQL Database and SQL Managed Instance.
The maintenance window is intended for production workloads that are not resilient to database or instance
reconfigurations and cannot absorb short connection interruptions caused by planned maintenance events. By
choosing a maintenance window you prefer, you can minimize the impact of planned maintenance, as it will
occur outside of your peak business hours. Resilient workloads and non-production workloads may rely on
Azure SQL's default maintenance policy.
The maintenance window is free of charge and can be configured on creation or for existing Azure SQL
resources. It can be configured using the Azure portal, PowerShell, the Azure CLI, or the Azure API.

IMPORTANT
Configuring the maintenance window is a long-running asynchronous operation, similar to changing the service tier of
the Azure SQL resource. The resource is available during the operation, except for a short reconfiguration that happens
at the end of the operation and typically lasts up to 8 seconds, even in case of interrupted long-running transactions. To
minimize the impact of the reconfiguration, you should perform the operation outside of peak hours.

Gain more predictability with maintenance window


By default, the Azure SQL maintenance policy blocks most impactful updates during the period 8AM to 5PM local
time every day to avoid any disruptions during typical peak business hours. Local time is determined by the
location of the Azure region that hosts the resource and may observe daylight saving time in accordance with the
local time zone definition.
You can further adjust the maintenance updates to a time suitable to your Azure SQL resources by choosing
from two additional maintenance window slots:
Weekday window: 10:00 PM to 6:00 AM local time, Monday - Thursday
Weekend window: 10:00 PM to 6:00 AM local time, Friday - Sunday
Maintenance window days listed indicate the starting day of each eight-hour maintenance window. For example,
"10:00 PM to 6:00 AM local time, Monday – Thursday" means that the maintenance windows start at 10:00 PM
local time on each day (Monday through Thursday) and complete at 6:00 AM local time the following day
(Tuesday through Friday).
Once the maintenance window selection is made and service configuration completed, planned maintenance
will occur only during the window of your choice. While maintenance events typically complete within a single
window, some of them may span two or more adjacent windows.

IMPORTANT
In very rare circumstances where any postponement of action could cause serious impact, such as applying a critical
security patch, the configured maintenance window may be temporarily overridden.

Advance notifications
Maintenance notifications can be configured to alert you of upcoming planned maintenance events for your
Azure SQL Database and Azure SQL Managed Instance. The alerts arrive 24 hours in advance, at the time of
maintenance, and when the maintenance is complete. For more information, see Advance Notifications.

Feature availability
Supported subscription types
Configuring and using the maintenance window is available for the following offer types: Pay-As-You-Go, Cloud
Solution Provider (CSP), Microsoft Enterprise Agreement, and Microsoft Customer Agreement.
Offers restricted to dev/test usage only are not eligible (for example, Pay-As-You-Go Dev/Test or Enterprise
Dev/Test).

NOTE
An Azure offer is the type of the Azure subscription you have. For example, a subscription with pay-as-you-go rates,
Azure in Open, and Visual Studio Enterprise are all Azure offers. Each offer or plan has different terms and benefits. Your
offer or plan is shown on the subscription's Overview. For more information on switching your subscription to a different
offer, see Change your Azure subscription to a different offer.

Supported service level objectives


Choosing a maintenance window other than the default is available on all SLOs except for:
Instance pools
Legacy Gen4 vCore
Basic, S0 and S1
DC, Fsv2, M-series
Azure region support
Choosing a maintenance window other than the default is currently available in the following regions:
Azure region | SQL Managed Instance | SQL Database | SQL Database in an Azure availability zone

Australia Central 1 Yes

Australia Central 2 Yes

Australia East Yes Yes Yes

Australia Southeast Yes Yes

Brazil South Yes Yes

Brazil Southeast Yes Yes

Canada Central Yes Yes Yes

Canada East Yes Yes

Central India Yes Yes

Central US Yes Yes Yes

China East 2 Yes Yes

China North 2 Yes Yes

East US Yes Yes Yes

East US 2 Yes Yes Yes

East Asia Yes Yes

France Central Yes Yes

France South Yes Yes

Germany West Central Yes Yes

Germany North Yes

Japan East Yes Yes Yes

Japan West Yes Yes

Korea Central Yes

Korea South Yes

North Central US Yes Yes

North Europe Yes Yes Yes



Norway East Yes

Norway West Yes

South Africa North Yes

South Africa West Yes

South Central US Yes Yes Yes

South India Yes Yes

Southeast Asia Yes Yes Yes

Switzerland North Yes Yes

Switzerland West Yes

UAE Central Yes

UAE North Yes Yes

UK South Yes Yes Yes

UK West Yes Yes

US Gov Arizona Yes

US Gov Texas Yes Yes

US Gov Virginia Yes Yes

West Central US Yes Yes

West Europe Yes Yes Yes

West India Yes

West US Yes Yes

West US 2 Yes Yes Yes

West US 3 Yes

Gateway maintenance
To get the maximum benefit from maintenance windows, make sure your client applications are using the
redirect connection policy. Redirect is the recommended connection policy, where clients establish connections
directly to the node hosting the database, leading to reduced latency and improved throughput.
In Azure SQL Database, any connections using the proxy connection policy could be affected by both the
chosen maintenance window and a gateway node maintenance window. However, client connections
using the recommended redirect connection policy are unaffected by a gateway node maintenance
reconfiguration.
In Azure SQL Managed Instance, the gateway nodes are hosted within the virtual cluster and have the
same maintenance window as the managed instance, but using the redirect connection policy is still
recommended to minimize the number of disruptions during the maintenance event.
For more on the client connection policy in Azure SQL Database, see Azure SQL Database Connection policy.
For more on the client connection policy in Azure SQL Managed Instance, see Azure SQL Managed Instance
connection types.

Considerations for Azure SQL Managed Instance


Azure SQL Managed Instance consists of service components hosted on a dedicated set of isolated virtual
machines that run inside the customer's virtual network subnet. These virtual machines form virtual cluster(s)
that can host multiple managed instances. A maintenance window configured on instances of one subnet can
influence the number of virtual clusters within the subnet, the distribution of instances among virtual clusters, and
virtual cluster management operations. This may require consideration of a few effects.
Maintenance window configuration is a long running operation
All instances hosted in a virtual cluster share the maintenance window. By default, all managed instances are
hosted in a virtual cluster with the default maintenance window. Specifying another maintenance window for a
managed instance during its creation or afterwards means that it must be placed in a virtual cluster with the
corresponding maintenance window. If there is no such virtual cluster in the subnet, a new one must be created
first to accommodate the instance. Accommodating an additional instance in an existing virtual cluster may
require a cluster resize. Both operations contribute to the duration of configuring the maintenance window for a
managed instance. The expected duration of configuring the maintenance window on a managed instance can be
calculated using the estimated duration of instance management operations.

IMPORTANT
A short reconfiguration happens at the end of the operation of configuring maintenance window and typically lasts up to
8 seconds even in case of interrupted long-running transactions. To minimize the impact of the reconfiguration, initiate
the operation outside of the peak hours.

IP address space requirements


Each new virtual cluster in a subnet requires additional IP addresses according to the virtual cluster IP address
allocation. Changing the maintenance window for an existing managed instance also requires temporary additional
IP capacity, as in the vCores scaling scenario for the corresponding service tier.
IP address change
Configuring or changing the maintenance window causes the IP address of the instance to change, within the IP
address range of the subnet.

IMPORTANT
Make sure that NSG and firewall rules won't block data traffic after IP address change.

Serialization of virtual cluster management operations


Operations affecting the virtual cluster, like service upgrades and virtual cluster resizes (adding new or removing
unneeded compute nodes), are serialized. In other words, a new virtual cluster management operation cannot
start until the previous one is completed. If the maintenance window closes before the ongoing service
upgrade or maintenance operation is completed, any other virtual cluster management operations submitted in
the meantime are put on hold until the next maintenance window opens and the service upgrade or maintenance
operation completes. It is not common for a maintenance operation to take longer than a single window per
virtual cluster, but it can happen in the case of very complex maintenance operations.
The serialization of virtual cluster management operations is general behavior that applies to the default
maintenance policy as well. With a maintenance window schedule configured, the period between two adjacent
windows can be a few days long. Submitted operations can also be on hold for a few days if the maintenance
operation spans two windows. That is a very rare case, but creation of new instances or resize of existing
instances (if additional compute nodes are needed) may be blocked during this period.

Retrieving list of maintenance events


Azure Resource Graph is an Azure service designed to extend Azure Resource Management. The Azure Resource
Graph Explorer provides efficient and performant resource exploration with the ability to query at scale across a
given set of subscriptions so that you can effectively govern your environment.
You can use the Azure Resource Graph Explorer to query for maintenance events. For an introduction on how to
run these queries, see Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer.
To check for the maintenance events for all SQL databases in your subscription, use the following sample query
in Azure Resource Graph Explorer:

servicehealthresources
| where type =~ 'Microsoft.ResourceHealth/events'
| extend impact = properties.Impact
| extend impactedService = parse_json(impact[0]).ImpactedService
| where impactedService =~ 'SQL Database'
| extend eventType = properties.EventType, status = properties.Status, description = properties.Title,
    trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority,
    impactStartTime = todatetime(tolong(properties.ImpactStartTime)),
    impactMitigationTime = todatetime(tolong(properties.ImpactMitigationTime))
| where eventType == 'PlannedMaintenance'
| order by impactStartTime desc

To check for the maintenance events for all managed instances in your subscription, use the following sample
query in Azure Resource Graph Explorer:

servicehealthresources
| where type =~ 'Microsoft.ResourceHealth/events'
| extend impact = properties.Impact
| extend impactedService = parse_json(impact[0]).ImpactedService
| where impactedService =~ 'SQL Managed Instance'
| extend eventType = properties.EventType, status = properties.Status, description = properties.Title,
    trackingId = properties.TrackingId, summary = properties.Summary, priority = properties.Priority,
    impactStartTime = todatetime(tolong(properties.ImpactStartTime)),
    impactMitigationTime = todatetime(tolong(properties.ImpactMitigationTime))
| where eventType == 'PlannedMaintenance'
| order by impactStartTime desc

For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit
Azure Resource Graph sample queries for Azure Service Health.

Next steps
Configure maintenance window
Advance notifications

Learn more
Maintenance window FAQ
Azure SQL Database
Azure SQL Managed Instance
Plan for Azure maintenance events in Azure SQL Database and Azure SQL Managed Instance
Configure maintenance window
7/12/2022 • 11 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Configure the maintenance window for an Azure SQL database, elastic pool, or Azure SQL Managed Instance
database during resource creation, or anytime after a resource is created.
The System default maintenance window is 5PM to 8AM daily (local time of the Azure region where the resource is
located) to avoid peak business hours interruptions. If the System default maintenance window is not the best
time, select one of the other available maintenance windows.
The ability to change to a different maintenance window is not available for every service level or in every
region. For details on feature availability, see Maintenance window availability.

IMPORTANT
Configuring the maintenance window is a long-running asynchronous operation, similar to changing the service tier of
the Azure SQL resource. The resource is available during the operation, except for a short reconfiguration that happens
at the end of the operation and typically lasts up to 8 seconds, even in case of interrupted long-running transactions. To
minimize the impact of the reconfiguration, you should perform the operation outside of peak hours.

Configure maintenance window during database creation


Portal
PowerShell
CLI

To configure the maintenance window when you create a database, elastic pool, or managed instance, set the
desired Maintenance window on the Additional settings page.
Set the maintenance window while creating a single database or elastic pool
For step-by-step information on creating a new database or pool, see Create an Azure SQL Database single
database.
Set the maintenance window while creating a managed instance
For step-by-step information on creating a new managed instance, see Create an Azure SQL Managed Instance.
Configure maintenance window for existing databases
When applying a maintenance window selection to a database, a brief reconfiguration (several seconds) may be
experienced in some cases as Azure applies the required changes.

Portal
PowerShell
CLI

The following steps set the maintenance window on an existing database, elastic pool, or managed instance
using the Azure portal:
Set the maintenance window for an existing database or elastic pool
1. Navigate to the SQL database or elastic pool you want to set the maintenance window for.
2. In the Settings menu, select Maintenance, then select the desired maintenance window.

Set the maintenance window for an existing managed instance


1. Navigate to the managed instance you want to set the maintenance window for.
2. In the Settings menu, select Maintenance, then select the desired maintenance window.
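For the PowerShell path, a minimal sketch along the following lines can apply a different maintenance window to an existing database. It assumes the Az.Sql and Az.Maintenance modules; the resource group, server, database names, subscription ID, and the chosen maintenance configuration are placeholders, and the configuration must be one that is available in your resource's region.

# List the public maintenance configurations available (names such as SQL_EastUS2_DB_1 vary by region)
Get-AzMaintenancePublicConfiguration | Select-Object Name, Id

# Apply a maintenance configuration to an existing database
$config = "/subscriptions/<subscription-id>/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/SQL_EastUS2_DB_1"
Set-AzSqlDatabase -ResourceGroupName "ContosoRG" -ServerName "contoso-server" `
    -DatabaseName "ContosoDB" -MaintenanceConfigurationId $config

# For a managed instance, use Set-AzSqlInstance with the corresponding MI maintenance configuration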

Cleanup resources
Be sure to delete unneeded resources after you're finished with them to avoid unnecessary charges.
Portal
PowerShell
CLI

1. Navigate to the SQL database or elastic pool you no longer need.


2. On the Overview menu, select the option to delete the resource.

Next steps
To learn more about maintenance window, see Maintenance window.
For more information, see Maintenance window FAQ.
To learn about optimizing performance, see Monitoring and performance tuning in Azure SQL Database and
Azure SQL Managed Instance.
Advance notifications for planned maintenance
events (Preview)
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Advance notifications (Preview) are available for databases configured to use a non-default maintenance
window and managed instances with any configuration (including the default one). Advance notifications enable
customers to configure notifications to be sent up to 24 hours in advance of any planned event.
Notifications can be configured so you can get texts, emails, Azure push notifications, and voicemails when
planned maintenance is due to begin in the next 24 hours. Additional notifications are sent when maintenance
begins and when maintenance ends.

IMPORTANT
For Azure SQL Database, advance notifications cannot be configured for the System default maintenance window
option. Choose a maintenance window other than the System default to configure and enable Advance notifications.

NOTE
While maintenance windows are generally available, advance notifications for maintenance windows are in public preview
for Azure SQL Database and Azure SQL Managed Instance.

Create an advance notification


Advance notifications are available for Azure SQL databases that have their maintenance window configured.
Complete the following steps to enable a notification.
1. Go to the Planned maintenance page, select Health alerts, then Add service health alert.
2. In the Actions section, select Add action groups.

3. Complete the Create action group form, then select Next: Notifications.
4. On the Notifications tab, select the Notification type. The Email/SMS message/Push/Voice option
offers the most flexibility and is the recommended option. Select the pen to configure the notification.

a. Complete the Add or edit notification form that opens and select OK.
b. Actions and Tags are optional. Here you can configure additional actions to be triggered or use
tags to categorize and organize your Azure resources.
c. Check the details on the Review + create tab and select Create.
5. After selecting Create, the alert rule configuration screen opens and the action group will be selected. Give
a name to your new alert rule, then choose the resource group for it, and select Create alert rule.
6. Click the Health alerts menu item again, and the list of alerts now contains your new alert.
You're all set. Next time there's a planned Azure SQL maintenance event, you'll receive an advance notification.

Receiving notifications
The following table shows the general-information notifications you may receive:

STATUS | DESCRIPTION
Planned Deployment | Received 24 hours prior to the maintenance event. Maintenance is planned on DATE between 5pm - 8am (local time) for DB xyz.
In-Progress | Maintenance for database xyz is starting.
Complete | Maintenance of database xyz is complete.

The following table shows additional notifications that may be sent while maintenance is ongoing:

STATUS | DESCRIPTION
Extended | Maintenance is in progress but didn't complete for database xyz. Maintenance will continue at the next maintenance window.
Canceled | Maintenance for database xyz is canceled and will be rescheduled later.
Blocked | There was a problem during maintenance for database xyz. We'll notify you when we resume.
Resumed | The problem has been resolved and maintenance will continue at the next maintenance window.

Permissions
While Advance Notifications can be sent to any email address, Azure subscription RBAC (role-based access
control) policy determines who can access the links in the email. Querying resource graph is covered by Azure
RBAC access management. To enable read access, each recipient should have resource group level read access.
For more information, see Steps to assign an Azure role.
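As an illustration, read access at the resource group scope can be granted with Az PowerShell; the recipient sign-in name and resource group below are placeholders.

# Grant the Reader role on the resource group that contains the database resources
New-AzRoleAssignment -SignInName "recipient@contoso.com" `
    -RoleDefinitionName "Reader" -ResourceGroupName "ContosoRG"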

Retrieve the list of impacted resources


Azure Resource Graph is an Azure service designed to extend Azure Resource Management. The Azure Resource
Graph Explorer provides efficient and performant resource exploration with the ability to query at scale across a
given set of subscriptions so that you can effectively govern your environment.
You can use the Azure Resource Graph Explorer to query for maintenance events. For an introduction on how to
run these queries, see Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer.
When the advanced notification for planned maintenance is received, you will get a link that opens Azure
Resource Graph and executes the query for the exact event, similar to the following. Note that the
notificationId value is unique per maintenance event.
resources
| project resource = tolower(id)
| join kind=inner (
    maintenanceresources
    | where type == "microsoft.maintenance/updates"
    | extend p = parse_json(properties)
    | mvexpand d = p.value
    | where d has 'notificationId' and d.notificationId == 'LNPN-R9Z'
    | project resource = tolower(name), status = d.status, resourceGroup, location,
        startTimeUtc = d.startTimeUtc, endTimeUtc = d.endTimeUtc, impactType = d.impactType
) on resource
| project resource, status, resourceGroup, location, startTimeUtc, endTimeUtc, impactType

For the full reference of the sample queries and how to use them across tools like PowerShell or Azure CLI, visit
Azure Resource Graph sample queries for Azure Service Health.

Next steps
Maintenance window
Maintenance window FAQ
Overview of alerts in Microsoft Azure
Email Azure Resource Manager Role
An overview of Azure SQL Database and SQL
Managed Instance security capabilities
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article outlines the basics of securing the data tier of an application using Azure SQL Database, Azure SQL
Managed Instance, and Azure Synapse Analytics. The security strategy described follows the layered defense-in-
depth approach and moves from the outside in:

Network security
Microsoft Azure SQL Database, SQL Managed Instance, and Azure Synapse Analytics provide a relational
database service for cloud and enterprise applications. To help protect customer data, firewalls prevent network
access to the server until access is explicitly granted based on IP address or Azure Virtual network traffic origin.
IP firewall rules
IP firewall rules grant access to databases based on the originating IP address of each request. For more
information, see Overview of Azure SQL Database and Azure Synapse Analytics firewall rules.
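For instance, a server-level IP firewall rule can be created with Az PowerShell; the resource names and address range below are placeholders.

# Allow a client IP address range at the server level
New-AzSqlServerFirewallRule -ResourceGroupName "ContosoRG" -ServerName "contoso-server" `
    -FirewallRuleName "AllowClientRange" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.20"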
Virtual network firewall rules
Virtual network service endpoints extend your virtual network connectivity over the Azure backbone and enable
Azure SQL Database to identify the virtual network subnet that traffic originates from. To allow traffic to reach
Azure SQL Database, use the SQL service tags to allow outbound traffic through Network Security Groups.
Virtual network rules enable Azure SQL Database to only accept communications that are sent from selected
subnets inside a virtual network.
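A virtual network rule can be created in a similar way, as in this sketch with placeholder names; the subnet must already have the Microsoft.Sql service endpoint enabled.

# The subnet must already have the Microsoft.Sql service endpoint enabled
$subnetId = "/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/Microsoft.Network/virtualNetworks/contoso-vnet/subnets/app-subnet"
New-AzSqlServerVirtualNetworkRule -ResourceGroupName "ContosoRG" -ServerName "contoso-server" `
    -VirtualNetworkRuleName "AllowAppSubnet" -VirtualNetworkSubnetId $subnetId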
NOTE
Controlling access with firewall rules does not apply to SQL Managed Instance. For more information about the
networking configuration needed, see Connecting to a managed instance.

Access management
IMPORTANT
Managing databases and servers within Azure is controlled by your portal user account's role assignments. For more
information, see Azure role-based access control in the Azure portal.

Authentication
Authentication is the process of proving the user is who they claim to be. Azure SQL Database and SQL
Managed Instance support SQL authentication and Azure AD authentication. SQL Managed instance additionally
supports Windows Authentication for Azure AD principals.
SQL authentication:
SQL authentication refers to the authentication of a user when connecting to Azure SQL Database or
Azure SQL Managed Instance using username and password. A server admin login with a username
and password must be specified when the server is being created. Using these credentials, a server
admin can authenticate to any database on that server or instance as the database owner. After that,
additional SQL logins and users can be created by the server admin, which enable users to connect using
username and password.
Azure Active Directory authentication:
Azure Active Directory authentication is a mechanism of connecting to Azure SQL Database, Azure SQL
Managed Instance and Azure Synapse Analytics by using identities in Azure Active Directory (Azure AD).
Azure AD authentication allows administrators to centrally manage the identities and permissions of
database users along with other Azure services in one central location. This includes the minimization of
password storage and enables centralized password rotation policies.
A server admin called the Active Directory administrator must be created to use Azure AD
authentication with SQL Database. For more information, see Connecting to SQL Database By Using
Azure Active Directory Authentication. Azure AD authentication supports both managed and federated
accounts. The federated accounts support Windows users and groups for a customer domain federated
with Azure AD.
Additional Azure AD authentication options available are Active Directory Universal Authentication for
SQL Server Management Studio connections including multi-factor authentication and Conditional
Access.
Windows Authentication for Azure AD Principals (Preview):
Kerberos authentication for Azure AD Principals (Preview) enables Windows Authentication for Azure
SQL Managed Instance. Windows Authentication for managed instances empowers customers to move
existing services to the cloud while maintaining a seamless user experience and provides the basis for
infrastructure modernization.
To enable Windows Authentication for Azure Active Directory (Azure AD) principals, you will turn your
Azure AD tenant into an independent Kerberos realm and create an incoming trust in the customer
domain. Learn how Windows Authentication for Azure SQL Managed Instance is implemented with Azure
Active Directory and Kerberos.
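As a concrete sketch of the Azure AD authentication option described above, the Azure AD admin for a server can be set with Az PowerShell; the group display name, object ID, and resource names are placeholders.

# Set an Azure AD group as the Active Directory administrator for the server
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "ContosoRG" -ServerName "contoso-server" `
    -DisplayName "DBA-Admins" -ObjectId "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"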
IMPORTANT
Managing databases and servers within Azure is controlled by your portal user account's role assignments. For more
information, see Azure role-based access control in the Azure portal. Controlling access with firewall rules does not
apply to SQL Managed Instance. See the article on connecting to a managed instance for more information about
the networking configuration needed.

Authorization
Authorization refers to controlling access on resources and commands within a database. This is done by
assigning permissions to a user within a database in Azure SQL Database or Azure SQL Managed Instance.
Permissions are ideally managed by adding user accounts to database roles and assigning database-level
permissions to those roles. Alternatively an individual user can also be granted certain object-level permissions.
For more information, see Logins and users.
As a best practice, create custom roles when needed. Add users to the role with the least privileges required to
do their job function. Do not assign permissions directly to users. The server admin account is a member of the
built-in db_owner role, which has extensive permissions and should only be granted to a few users with
administrative duties. To further limit the scope of what a user can do, EXECUTE AS can be used to specify
the execution context of the called module. Following these best practices is also a fundamental step towards
Separation of Duties.
Row-level security
Row-Level Security enables customers to control access to rows in a database table based on the characteristics
of the user executing a query (for example, group membership or execution context). Row-Level Security can
also be used to implement custom Label-based security concepts. For more information, see Row-Level security.

Threat protection
SQL Database and SQL Managed Instance secure customer data by providing auditing and threat detection
capabilities.
SQL auditing in Azure Monitor logs and Event Hubs
SQL Database and SQL Managed Instance auditing tracks database activities and helps maintain compliance
with security standards by recording database events to an audit log in a customer-owned Azure storage
account. Auditing allows users to monitor ongoing database activities, as well as analyze and investigate
historical activity to identify potential threats or suspected abuse and security violations. For more information,
see Get started with SQL Database Auditing.
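For example, server-level auditing to a storage account can be enabled with Az PowerShell; the storage account resource ID, retention period, and other names below are placeholders.

# Enable blob storage auditing for all databases on the server
Set-AzSqlServerAudit -ResourceGroupName "ContosoRG" -ServerName "contoso-server" `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId "/subscriptions/<subscription-id>/resourceGroups/ContosoRG/providers/Microsoft.Storage/storageAccounts/contosoauditlogs" `
    -RetentionInDays 90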
Advanced Threat Protection
Advanced Threat Protection analyzes your logs to detect unusual behavior and potentially harmful attempts
to access or exploit databases. Alerts are created for suspicious activities such as SQL injection, potential data
exfiltration, and brute force attacks, or for anomalies in access patterns to catch privilege escalations and the
use of breached credentials. Alerts are viewed from Microsoft Defender for Cloud, where the details of the
suspicious activities are provided and recommendations for further investigation are given along with actions to
mitigate the threat. Advanced Threat Protection can be enabled per server for an additional fee. For more
information, see Get started with SQL Database Advanced Threat Protection.

Information protection and encryption


Transport Layer Security (Encryption-in-transit)
SQL Database, SQL Managed Instance, and Azure Synapse Analytics secure customer data by encrypting data in
motion with Transport Layer Security (TLS).
SQL Database, SQL Managed Instance, and Azure Synapse Analytics enforce encryption (SSL/TLS) at all times
for all connections. This ensures all data is encrypted "in transit" between the client and server irrespective of
the setting of Encrypt or TrustServerCertificate in the connection string.
As a best practice, we recommend that in the connection string used by the application, you specify an encrypted
connection and do not trust the server certificate. This forces your application to verify the server certificate and
thus prevents your application from being vulnerable to man-in-the-middle attacks.
For example, when using the ADO.NET driver this is accomplished via Encrypt=True and
TrustServerCertificate=False. If you obtain your connection string from the Azure portal, it will have the
correct settings.

IMPORTANT
Note that some non-Microsoft drivers may not use TLS by default or rely on an older version of TLS (<1.2) in order to
function. In this case the server still allows you to connect to your database. However, we recommend that you evaluate
the security risks of allowing such drivers and applications to connect to SQL Database, especially if you store sensitive
data.
For further information about TLS and connectivity, see TLS considerations.

Transparent Data Encryption (Encryption-at-rest)


Transparent data encryption (TDE) for SQL Database, SQL Managed Instance, and Azure Synapse Analytics adds
a layer of security to help protect data at rest from unauthorized or offline access to raw files or backups.
Common scenarios include data center theft or unsecured disposal of hardware or media such as disk drives
and backup tapes. TDE encrypts the entire database using an AES encryption algorithm, which doesn't require
application developers to make any changes to existing applications.
In Azure, all newly created databases are encrypted by default and the database encryption key is protected by a
built-in server certificate. Certificate maintenance and rotation are managed by the service and require no input
from the user. Customers who prefer to take control of the encryption keys can manage the keys in Azure Key
Vault.
Key management with Azure Key Vault
Bring Your Own Key (BYOK) support for Transparent Data Encryption (TDE) allows customers to take ownership
of key management and rotation using Azure Key Vault, Azure's cloud-based external key management system. If
the database's access to the key vault is revoked, a database cannot be decrypted and read into memory. Azure
Key Vault provides a central key management platform, leverages tightly monitored hardware security modules
(HSMs), and enables separation of duties between management of keys and data to help meet security
compliance requirements.
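A customer-managed TDE protector can be wired up with Az PowerShell along these lines; this is a sketch in which the key vault key ID and resource names are placeholders, and it assumes the server's identity already has the required wrap and unwrap permissions on the key.

# Add the Key Vault key to the server, then make it the TDE protector
$keyId = "https://contoso-vault.vault.azure.net/keys/tde-key/<key-version>"
Add-AzSqlServerKeyVaultKey -ResourceGroupName "ContosoRG" -ServerName "contoso-server" -KeyId $keyId
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "ContosoRG" `
    -ServerName "contoso-server" -Type AzureKeyVault -KeyId $keyId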
Always Encrypted (Encryption-in-use)

Always Encrypted is a feature designed to protect sensitive data stored in specific database columns from access
(for example, credit card numbers, national identification numbers, or data on a need to know basis). This
includes database administrators or other privileged users who are authorized to access the database to
perform management tasks, but have no business need to access the particular data in the encrypted columns.
The data is always encrypted, which means the encrypted data is decrypted only for processing by client
applications with access to the encryption key. The encryption key is never exposed to SQL Database or SQL
Managed Instance and can be stored either in the Windows Certificate Store or in Azure Key Vault.
Dynamic data masking

Dynamic data masking limits sensitive data exposure by masking it to non-privileged users. Dynamic data
masking automatically discovers potentially sensitive data in Azure SQL Database and SQL Managed Instance
and provides actionable recommendations to mask these fields, with minimal impact to the application layer. It
works by obfuscating the sensitive data in the result set of a query over designated database fields, while the
data in the database is not changed. For more information, see Get started with SQL Database and SQL
Managed Instance dynamic data masking.
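For example, a masking rule for an email column can be added with Az PowerShell; the schema, table, column, and resource names below are placeholders.

# Enable the masking policy and mask the Email column for non-privileged users
Set-AzSqlDatabaseDataMaskingPolicy -ResourceGroupName "ContosoRG" -ServerName "contoso-server" `
    -DatabaseName "ContosoDB" -DataMaskingState "Enabled"
New-AzSqlDatabaseDataMaskingRule -ResourceGroupName "ContosoRG" -ServerName "contoso-server" `
    -DatabaseName "ContosoDB" -SchemaName "dbo" -TableName "Customers" `
    -ColumnName "Email" -MaskingFunction "Email"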

Security management
Vulnerability assessment
Vulnerability assessment is an easy to configure service that can discover, track, and help remediate potential
database vulnerabilities with the goal to proactively improve overall database security. Vulnerability assessment
(VA) is part of the Microsoft Defender for SQL offering, which is a unified package for advanced SQL security
capabilities. Vulnerability assessment can be accessed and managed via the central Microsoft Defender for SQL
portal.
Data discovery and classification
Data discovery and classification (currently in preview) provides basic capabilities built into Azure SQL Database
and SQL Managed Instance for discovering, classifying and labeling the sensitive data in your databases.
Discovering and classifying your most sensitive data (business/financial, healthcare, personal data, etc.) can
play a pivotal role in your organizational information protection stature. It can serve as infrastructure for:
Various security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data.
Controlling access to, and hardening the security of, databases containing highly sensitive data.
Helping meet data privacy standards and regulatory compliance requirements.
For more information, see Get started with data discovery and classification.
Compliance
In addition to the above features and functionality that can help your application meet various security
requirements, Azure SQL Database also participates in regular audits, and has been certified against a number
of compliance standards. For more information, see the Microsoft Azure Trust Center where you can find the
most current list of SQL Database compliance certifications.

Next steps
For a discussion of the use of logins, user accounts, database roles, and permissions in SQL Database and
SQL Managed Instance, see Manage logins and user accounts.
For a discussion of database auditing, see auditing.
For a discussion of threat detection, see threat detection.
Playbook for addressing common security
requirements with Azure SQL Database and Azure
SQL Managed Instance
7/12/2022 • 39 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This article provides best practices on how to solve common security requirements. Not all requirements are
applicable to all environments, and you should consult your database and security team on which features to
implement.

Solving common security requirements


This document provides guidance on how to solve common security requirements for new or existing
applications using Azure SQL Database and Azure SQL Managed Instance. It's organized by high-level security
areas. For addressing specific threats, refer to the Common security threats and potential mitigations section.
Although some of the presented recommendations are applicable when migrating applications from on-
premises to Azure, migration scenarios are not the focus of this document.
Azure SQL Database deployment offers covered in this guide
Azure SQL Database: single databases and elastic pools in servers
Azure SQL Managed Instance
Deployment offers not covered in this guide
Azure Synapse Analytics
Azure SQL VMs (IaaS)
SQL Server
Audience
The intended audiences for this guide are customers facing questions on how to secure Azure SQL Database.
The roles interested in this best practice article include, but are not limited to:
Security Architects
Security Managers
Compliance Officers
Privacy Officers
Security Engineers
Using this guide
This document is intended as a companion to our existing Azure SQL Database security documentation.
Unless otherwise stated, we recommend you follow all best practices listed in each section to achieve the
respective goal or requirement. To meet specific security compliance standards or best practices, important
regulatory compliance controls are listed under the Requirements or Goals section wherever applicable. These
are the security standards and regulations that are referenced in this paper:
FedRAMP: AC-04, AC-06
SOC: CM-3, SDL-3
ISO/IEC 27001: Access Control, Cryptography
Microsoft Operational Security Assurance (OSA) practices: Practice #1-6 and #9
NIST Special Publication 800-53 Security Controls: AC-5, AC-6
PCI DSS: 6.3.2, 6.4.2
We plan on continuing to update the recommendations and best practices listed here. Provide input or any
corrections for this document using the Feedback link at the bottom of this article.

Authentication
Authentication is the process of proving the user is who they claim to be. Azure SQL Database and SQL
Managed Instance support two types of authentication:
SQL authentication
Azure Active Directory authentication

NOTE
Azure Active Directory authentication may not be supported for all tools and 3rd party applications.

Central management for identities


Central identity management offers the following benefits:
Manage group accounts and control user permissions without duplicating logins across servers, databases
and managed instances.
Simplified and flexible permission management.
Management of applications at scale.
How to implement :
Use Azure Active Directory (Azure AD) authentication for centralized identity management.
Best practices :
Create an Azure AD tenant and create users to represent human users and create service principals to
represent apps, services, and automation tools. Service principals are equivalent to service accounts in
Windows and Linux.
Assign access rights to resources to Azure AD principals via group assignment: Create Azure AD groups,
grant access to groups, and add individual members to the groups. In your database, create contained
database users that map your Azure AD groups. To assign permissions inside the database, put the users
that are associated with your Azure AD groups in database roles with the appropriate permissions.
See the articles, Configure and manage Azure Active Directory authentication with SQL and Use Azure
AD for authentication with SQL.

NOTE
In SQL Managed Instance, you can also create logins that map to Azure AD principals in the master database. See
CREATE LOGIN (Transact-SQL).

Using Azure AD groups simplifies permission management; both the group owner and the resource
owner can add or remove members from the group.
Create a separate group for Azure AD administrators for each server or managed instance.
See the article, Provision an Azure Active Directory administrator for your server.
Monitor Azure AD group membership changes using Azure AD audit activity reports.
For a managed instance, a separate step is required to create an Azure AD admin.
See the article, Provision an Azure Active Directory administrator for your managed instance.

NOTE
Azure AD authentication is recorded in Azure SQL audit logs, but not in Azure AD sign-in logs.
Azure RBAC permissions granted in Azure do not apply to Azure SQL Database or SQL Managed Instance permissions.
Such permissions must be created/mapped manually using existing SQL permissions.
On the client-side, Azure AD authentication needs access to the internet or via User Defined Route (UDR) to a virtual
network.
The Azure AD access token is cached on the client side and its lifetime depends on token configuration. See the article,
Configurable token lifetimes in Azure Active Directory
For guidance on troubleshooting Azure AD Authentication issues, see the following blog: Troubleshooting Azure AD.
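As a sketch of the contained-user pattern from the best practices above, the T-SQL below creates a database user for an Azure AD group and places it in a database role. The group, server, and database names are placeholders; the example assumes the Az.Accounts module for token acquisition and a recent SqlServer module version that supports the -AccessToken parameter.

# Acquire a token for Azure SQL and create a contained user for an Azure AD group
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token
$tsql = @"
CREATE USER [DB-Readers] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [DB-Readers];
"@
Invoke-Sqlcmd -ServerInstance "contoso-server.database.windows.net" -Database "ContosoDB" `
    -AccessToken $token -Query $tsql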

Azure AD Multi-Factor Authentication


Mentioned in: OSA Practice #2, ISO Access Control (AC)

Azure AD Multi-Factor Authentication helps provide additional security by requiring more than one form of
authentication.
How to implement :
Enable Multi-Factor Authentication in Azure AD using Conditional Access and use interactive
authentication.
The alternative is to enable Multi-Factor Authentication for the entire Azure AD or AD domain.
Best practices :
Activate Conditional Access in Azure AD (requires Premium subscription).
See the article, Conditional Access in Azure AD.
Create Azure AD group(s) and enable Multi-Factor Authentication policy for selected groups using Azure
AD Conditional Access.
See the article, Plan Conditional Access Deployment.
Multi-Factor Authentication can be enabled for the entire Azure AD or for the whole Active Directory
federated with Azure AD.
Use Azure AD Interactive authentication mode for Azure SQL Database and Azure SQL Managed Instance
where a password is requested interactively, followed by Multi-Factor Authentication:
Use Universal Authentication in SSMS. See the article, Using Multi-factor Azure AD authentication with
Azure SQL Database, SQL Managed Instance, Azure Synapse (SSMS support for Multi-Factor
Authentication).
Use Interactive Authentication supported in SQL Server Data Tools (SSDT). See the article, Azure Active
Directory support in SQL Server Data Tools (SSDT).
Use other SQL tools supporting Multi-Factor Authentication.
SSMS Wizard support for export/extract/deploy database
sqlpackage.exe: option '/ua'
sqlcmd Utility: option -G (interactive)
bcp Utility: option -G (interactive)
Implement your applications to connect to Azure SQL Database or Azure SQL Managed Instance using
interactive authentication with Multi-Factor Authentication support.
See the article, Connect to Azure SQL Database with Azure AD Multi-Factor Authentication.

NOTE
This authentication mode requires user-based identities. In cases where a trusted identity model is used that is
bypassing individual Azure AD user authentication (e.g. using managed identity for Azure resources), Multi-Factor
Authentication does not apply.

Minimize the use of password-based authentication for users


Mentioned in: OSA Practice #4, ISO Access Control (AC)

Password-based authentication methods are a weaker form of authentication. Credentials can be compromised
or mistakenly given away.
How to implement :
Use an Azure AD integrated authentication that eliminates the use of passwords.
Best practices :
Use single sign-on authentication using Windows credentials. Federate the on-premises AD domain with
Azure AD and use integrated Windows authentication (for domain-joined machines with Azure AD).
See the article, SSMS support for Azure AD Integrated authentication.
Minimize the use of password-based authentication for applications
Mentioned in: OSA Practice #4, ISO Access Control (AC)

How to implement :
Enable Azure Managed Identity. You can also use integrated or certificate-based authentication.
Best practices :
Use managed identities for Azure resources.
System-assigned managed identity
User-assigned managed identity
Use Azure SQL Database from Azure App Service with managed identity (without code changes)
Use cert-based authentication for an application.
See this code sample.
Use Azure AD authentication for integrated federated domain and domain-joined machine (see section
above).
See the sample application for integrated authentication.
Protect passwords and secrets
For cases when passwords aren't avoidable, make sure they're secured.
How to implement :
Use Azure Key Vault to store passwords and secrets. Whenever applicable, use Multi-Factor Authentication
for Azure SQL Database with Azure AD users.
Best practices :
If avoiding passwords or secrets isn't possible, store user passwords and application secrets in Azure
Key Vault and manage access through Key Vault access policies.
Various app development frameworks may also offer framework-specific mechanisms for protecting
secrets in the app. For example: ASP.NET core app.
Use SQL authentication for legacy applications
SQL authentication refers to the authentication of a user when connecting to Azure SQL Database or SQL
Managed Instance using username and password. A login will need to be created in each server or managed
instance, and a user created in each database.
How to implement :
Use SQL authentication.
Best practices :
As a server or instance admin, create logins and users. Unless using contained database users with
passwords, all passwords are stored in master database.
See the article, Controlling and granting database access to SQL Database, SQL Managed Instance and
Azure Synapse Analytics.

Access management
Access management (also called Authorization) is the process of controlling and managing authorized users'
access and privileges to Azure SQL Database or SQL Managed Instance.
Implement principle of least privilege
Mentioned in: FedRamp controls AC-06, NIST: AC-6, OSA Practice #3

The principle of least privilege states that users shouldn't have more privileges than needed to complete their
tasks. For more information, see the article Just enough administration.
How to implement :
Assign only the necessary permissions to complete the required tasks:
In SQL Databases:
Use granular permissions and user-defined database roles (or server-roles in SQL Managed Instance):
1. Create the required roles
CREATE ROLE
CREATE SERVER ROLE
2. Create required users
CREATE USER
3. Add users as members to roles
ALTER ROLE
ALTER SERVER ROLE
4. Then assign permissions to roles.
GRANT
Make sure to not assign users to unnecessary roles.
In Azure Resource Manager:
Use built-in roles if available or Azure custom roles and assign the necessary permissions.
Azure built-in roles
Azure custom roles
Best practices :
The following best practices are optional but will result in better manageability and supportability of your
security strategy:
If possible, start with the least possible set of permissions and start adding permissions one by one if
there's a real necessity (and justification) – as opposed to the opposite approach: taking permissions away
step by step.
Refrain from assigning permissions to individual users. Use roles (database or server roles) consistently
instead. Roles help greatly with reporting and troubleshooting permissions. (Azure RBAC only supports
permission assignment via roles.)
Create and use custom roles with the exact permissions needed. Typical roles that are used in practice:
Security deployment
Administrator
Developer
Support personnel
Auditor
Automated processes
End user
Use built-in roles only when the permissions of the roles match exactly the needed permissions for the
user. You can assign users to multiple roles.
Remember that permissions in the database engine can be applied within the following scopes (the
smaller the scope, the smaller the impact of the granted permissions):
Server (special roles in master database) in Azure
Database
Schema
It is a best practice to use schemas to grant permissions inside a database. (also see: Schema-
design: Recommendations for Schema design with security in mind)
Object (table, view, procedure, etc.)

NOTE
It is not recommended to apply permissions on the object level because this level adds unnecessary complexity to
the overall implementation. If you decide to use object-level permissions, those should be clearly documented. The
same applies to column-level-permissions, which are even less recommendable for the same reasons. Also be
aware that by default a table-level DENY does not override a column-level GRANT. This would require the
common criteria compliance Server Configuration to be activated.

Perform regular checks using Vulnerability Assessment (VA) to test for too many permissions.
Implement Separation of Duties
Mentioned in: FedRamp: AC-04, NIST: AC-5, ISO: A.6.1.2, PCI 6.4.2, SOC: CM-3, SDL-3

Separation of Duties, also called Segregation of Duties describes the requirement to split sensitive tasks into
multiple tasks that are assigned to different users. Separation of Duties helps prevent data breaches.
How to implement :
Identify the required level of Separation of Duties. Examples:
Between Development/Test and Production environments
Security-wise sensitive tasks vs Database Administrator (DBA) management level tasks vs developer
tasks.
Examples: Auditor, creation of security policy for Row-Level Security (RLS), implementing SQL
Database objects with DDL-permissions.
Identify a comprehensive hierarchy of users (and automated processes) that access the system.
Create roles according to the needed user-groups and assign permissions to roles.
For management-level tasks in Azure portal or via PowerShell-automation use Azure roles. Either find
a built-in role matching the requirement, or create an Azure custom role using the available
permissions
Create Server roles for server-wide tasks (creating new logins, databases) in a managed instance.
Create Database Roles for database-level tasks.
For certain sensitive tasks, consider creating special stored procedures signed by a certificate to execute
the tasks on behalf of the users. One important advantage of digitally signed stored procedures is that if
the procedure is changed, the permissions that were granted to the previous version of the procedure are
immediately removed.
Example: Tutorial: Signing Stored Procedures with a Certificate
Implement Transparent Data Encryption (TDE) with customer-managed keys in Azure Key Vault to enable
Separation of Duties between data owner and security owner.
See the article, Configure customer-managed keys for Azure Storage encryption from the Azure
portal.
To ensure that a DBA can't see data that is considered highly sensitive and can still do DBA tasks, you can
use Always Encrypted with role separation.
See the articles, Overview of Key Management for Always Encrypted, Key Provisioning with Role
Separation, and Column Master Key Rotation with Role Separation.
In cases where the use of Always Encrypted isn't feasible, or at least not without major costs and efforts
that may even render the system near unusable, compromises can be made and mitigated through the
use of compensating controls such as:
Human intervention in processes.
Audit trails – for more information on Auditing, see, Audit critical security events.
Best practices :
Make sure that different accounts are used for Development/Test and Production environments. Different
accounts help to comply with separation of Test and Production systems.
Refrain from assigning permissions to individual users. Use roles (database or server roles) consistently
instead. Having roles helps greatly with reporting and troubleshooting permissions.
Use built-in roles when the permissions match exactly the needed permissions – if the union of all
permissions from multiple built-in roles leads to a 100% match, you can assign multiple roles
concurrently as well.
Create and use user-defined roles when built-in roles grant too many permissions or insufficient
permissions.
Role assignments can also be done temporarily, also known as Dynamic Separation of Duties (DSD),
either within SQL Agent Job steps in T-SQL or using Azure PIM for Azure roles.
Make sure that DBAs don't have access to the encryption keys or key stores, and that Security
Administrators with access to the keys have no access to the database in turn. The use of Extensible Key
Management (EKM) can make this separation easier to achieve. Azure Key Vault can be used to
implement EKM.
Always make sure to have an Audit trail for security-related actions.
You can retrieve the definition of the Azure built-in roles to see the permissions used and create a custom
role based on excerpts and cumulations of these via PowerShell.
Because any member of the db_owner database role can change security settings like Transparent Data
Encryption (TDE), or change the SLO, this membership should be granted with care. However, there are
many tasks that require db_owner privileges, such as changing any database setting or DB option.
Auditing plays a key role in any solution.
It is not possible to restrict permissions of a db_owner, and therefore prevent an administrative account
from viewing user data. If there's highly sensitive data in a database, Always Encrypted can be used to
safely prevent db_owners or any other DBA from viewing it.
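As referenced above, granting through a user-defined role rather than to individuals can look like the following minimal sketch (all names are hypothetical):

```sql
-- Grant permissions to a user-defined role, then manage access by role membership.
CREATE ROLE ReportReaders;
GRANT SELECT ON SCHEMA::Sales TO ReportReaders;

ALTER ROLE ReportReaders ADD MEMBER [analyst1@contoso.com];
ALTER ROLE ReportReaders DROP MEMBER [analyst1@contoso.com];  -- revoke access by membership
```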

NOTE
Achieving Separation of Duties (SoD) is challenging for security-related or troubleshooting tasks. Other areas like
development and end-user roles are easier to segregate. Most compliance-related controls allow the use of alternate
control functions such as Auditing when other solutions aren't practical.

For readers who want to dive deeper into SoD, we recommend the following resources:
For Azure SQL Database and SQL Managed Instance:
Controlling and granting database access
Engine Separation of Duties for the Application Developer
Separation of Duties
Signing Stored Procedures
For Azure Resource Management:
Azure built-in roles
Azure custom roles
Using Azure AD Privileged Identity Management for elevated access
Perform regular code reviews
Mentioned in: PCI: 6.3.2, SOC: SDL-3

Separation of Duties is not limited to the data in a database, but includes application code. Malicious code can
potentially circumvent security controls. Before deploying custom code to production, it is essential to review
what's being deployed.
How to implement :
Use a database tool like Azure Data Studio that supports source control.
Implement a segregated code deployment process.
Before committing to the main branch, a person other than the author of the code has to inspect the
code for potential elevation-of-privilege risks as well as malicious data modifications, to protect against
fraud and rogue access. This can be done using source control mechanisms.
Best practices :
Standardization: It helps to implement a standard procedure that is to be followed for any code updates.
Vulnerability Assessment contains rules that check for excessive permissions, the use of old encryption
algorithms, and other security problems within a database schema.
Further checks can be done in a QA or test environment using Advanced Threat Protection that scans for
code that is vulnerable to SQL-injection.
Examples of what to look out for:
Creation of a user or changing security settings from within an automated SQL-code-update
deployment.
A stored procedure, which, depending on the parameters provided, updates a monetary value in a cell
in a non-conforming way.
Make sure the person conducting the review is an individual other than the originating code author and
knowledgeable in code-reviews and secure coding.
Be sure to know all sources of code-changes. Code can be in T-SQL Scripts. It can be ad-hoc commands
to be executed or be deployed in forms of Views, Functions, Triggers, and Stored Procedures. It can be
part of SQL Agent Job definitions (Steps). It can also be executed from within SSIS packages, Azure Data
Factory, and other services.

Data protection
Data protection is a set of capabilities for safeguarding important information from compromise by encryption
or obfuscation.

NOTE
Microsoft attests to Azure SQL Database and SQL Managed Instance as being FIPS 140-2 Level 1 compliant. This is done
after verifying the strict use of FIPS 140-2 Level 1 acceptable algorithms and FIPS 140-2 Level 1 validated instances of
those algorithms including consistency with required key lengths, key management, key generation, and key storage. This
attestation is meant to allow our customers to respond to the need or requirement for the use of FIPS 140-2 Level 1
validated instances in the processing of data or delivery of systems or applications. We define the terms "FIPS 140-2 Level
1 compliant" and "FIPS 140-2 Level 1 compliance" used in the above statement to demonstrate their intended
applicability to U.S. and Canadian government use of the different term "FIPS 140-2 Level 1 validated."

Encrypt data in transit


Mentioned in: OSA Practice #6, ISO Control Family: Cryptography

Encrypting data in transit protects your data while it moves between your client and the server. Refer to the Network security section.
Encrypt data at rest
Mentioned in: OSA Practice #6, ISO Control Family: Cryptography

Encryption at rest is the cryptographic protection of data when it is persisted in database, log, and backup files.
How to implement :
Transparent Data Encryption (TDE) with service-managed keys is enabled by default for any databases
created after 2017 in Azure SQL Database and SQL Managed Instance.
In a managed instance, if the database is created from a restore operation using an on-premises server, the
TDE setting of the original database will be honored. If the original database doesn't have TDE enabled, we
recommend that TDE be manually turned on for the managed instance.
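As a quick check, TDE status can also be verified and, if needed, enabled from T-SQL. A minimal sketch (the database name is hypothetical):

```sql
-- is_encrypted = 1 means the database is encrypted at rest with TDE.
SELECT [name], is_encrypted
FROM sys.databases;

-- Turn TDE on for a database that was restored without it (for example, on a managed instance).
ALTER DATABASE [ContosoRestoredDb] SET ENCRYPTION ON;
```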
Best practices :
Don't store data that requires encryption-at-rest in the master database. The master database can't be
encrypted with TDE.
Use customer-managed keys in Azure Key Vault if you need increased transparency and granular control
over the TDE protection. Azure Key Vault lets you revoke permissions at any time to render
the database inaccessible. You can centrally manage TDE protectors along with other keys, or rotate the
TDE protector on your own schedule using Azure Key Vault.
If you're using customer-managed keys in Azure Key Vault, follow the articles, Guidelines for configuring
TDE with Azure Key Vault and How to configure Geo-DR with Azure Key Vault.

NOTE
Some items considered customer content, such as table names, object names, and index names, may be transmitted in
log files for support and troubleshooting by Microsoft.

Protect sensitive data in use from high-privileged, unauthorized users


Data in use is the data stored in memory of the database system during the execution of SQL queries. If your
database stores sensitive data, your organization may be required to ensure that high-privileged users are
prevented from viewing sensitive data in your database. High-privilege users, such as Microsoft operators or
DBAs in your organization, should be able to manage the database but be prevented from viewing, and potentially
exfiltrating, sensitive data from the memory of the SQL process or by querying the database.
The policies that determine which data is sensitive, and whether that data must be encrypted in memory
and kept inaccessible to administrators in plaintext, are specific to your organization and the compliance regulations
you need to adhere to. See the related requirement: Identify and tag sensitive data.
How to implement :
Use Always Encrypted to ensure sensitive data isn't exposed in plaintext in Azure SQL Database or SQL
Managed Instance, even in memory/in use. Always Encrypted protects the data from Database
Administrators (DBAs) and cloud admins (or bad actors who can impersonate high-privileged but
unauthorized users) and gives you more control over who can access your data.
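For illustration, a minimal sketch of an Always Encrypted column definition, assuming a column encryption key named CEK1 has already been provisioned (for example, with its column master key stored in Azure Key Vault); table and column names are hypothetical:

```sql
-- Deterministic encryption supports equality lookups; randomized encryption is stronger for
-- columns that don't need to be searched. Encrypted character columns must use a BIN2 collation.
CREATE TABLE dbo.Patients
(
    PatientId  int IDENTITY(1,1) PRIMARY KEY,
    SSN        char(11) COLLATE Latin1_General_BIN2
               ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                               ENCRYPTION_TYPE = DETERMINISTIC,
                               ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    BirthDate  date
               ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                               ENCRYPTION_TYPE = RANDOMIZED,
                               ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```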
Best practices :
Always Encrypted isn't a substitute for encrypting data at rest (TDE) or in transit (SSL/TLS). Always Encrypted
shouldn't be used for non-sensitive data, to minimize performance and functionality impact. Using Always
Encrypted in conjunction with TDE and Transport Layer Security (TLS) is recommended for
comprehensive protection of data at rest, in transit, and in use.
Assess the impact of encrypting the identified sensitive data columns before you deploy Always
Encrypted in a production database. In general, Always Encrypted reduces the functionality of queries on
encrypted columns and has other limitations, listed in Always Encrypted - Feature Details. Therefore, you
may need to rearchitect your application to reimplement, on the client side, functionality that a query
doesn't support, and/or refactor your database schema, including the definitions of stored procedures,
functions, views, and triggers. Existing applications may not work with encrypted columns if they don't
adhere to the restrictions and limitations of Always Encrypted. While the ecosystem of Microsoft tools,
products, and services supporting Always Encrypted is growing, a number of them don't work with
encrypted columns. Encrypting a column may also impact query performance, depending on the
characteristics of your workload.
Manage Always Encrypted keys with role separation if you're using Always Encrypted to protect data
from malicious DBAs. With role separation, a security admin creates the physical keys. The DBA creates
the column master key and column encryption key metadata objects describing the physical keys in the
database. During this process, the security admin doesn't need access to the database, and the DBA
doesn't need access to the physical keys in plaintext.
See the article, Managing Keys with Role Separation for details.
Store your column master keys in Azure Key Vault for ease of management. Avoid using the Windows
Certificate Store (and, in general, distributed key store solutions as opposed to central key management
solutions), which makes key management hard.
Think carefully through the tradeoffs of using multiple keys (column master key or column encryption
keys). Keep the number of keys small to reduce key management cost. One column master key and one
column encryption key per database is typically sufficient in steady-state environments (not in the middle
of a key rotation). You may need additional keys if you have different user groups, each using different
keys and accessing different data.
Rotate column master keys per your compliance requirements. If you also need to rotate column
encryption keys, consider using online encryption to minimize application downtime.
See the article, Performance and Availability Considerations.
Use deterministic encryption if computations (equality) on data need to be supported. Otherwise, use
randomized encryption. Avoid using deterministic encryption for low-entropy data sets, or data sets with
publicly known distribution.
If you're concerned about third parties accessing your data legally without your consent, ensure that all
application and tools that have access to the keys and data in plaintext run outside of Microsoft Azure
Cloud. Without access to the keys, the third party will have no way of decrypting the data unless they
bypass the encryption.
Always Encrypted doesn't easily support granting temporary access to the keys (and the protected data),
for example, if you need to share the keys with a DBA to allow the DBA to do some cleansing operations
on sensitive, encrypted data. The only way to reliably revoke the DBA's access to the data afterward is
to rotate both the column encryption keys and the column master keys protecting the data, which is
an expensive operation.
To access the plaintext values in encrypted columns, a user needs to have access to the Column Master
Key (CMK) that protects columns, which is configured in the key store holding the CMK. The user also
needs to have the VIEW ANY COLUMN MASTER KEY DEFINITION and VIEW ANY COLUMN
ENCRYPTION KEY DEFINITION database permissions.
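A minimal sketch of those database-side grants for a hypothetical application user (access to the key store itself, such as Azure Key Vault, is controlled separately):

```sql
-- Database permissions needed by clients to retrieve Always Encrypted key metadata.
GRANT VIEW ANY COLUMN MASTER KEY DEFINITION TO [AppUser];
GRANT VIEW ANY COLUMN ENCRYPTION KEY DEFINITION TO [AppUser];
```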
Control access of application users to sensitive data through encryption
Encryption can be used as a way to ensure that only specific application users who have access to cryptographic
keys can view or update the data.
How to implement :
Use Cell-level Encryption (CLE). See the article, Encrypt a Column of Data for details.
Use Always Encrypted, but be aware of its limitations, which are listed below.
Best practices :
When using CLE (a minimal sketch follows these points):
Control access to keys through SQL permissions and roles.
Use AES (AES 256 recommended) for data encryption. Algorithms such as RC4, DES, and TripleDES are
deprecated and shouldn't be used because of known vulnerabilities.
Protect symmetric keys with asymmetric keys/certificates (not passwords) to avoid using 3DES.
Be careful when migrating a database using Cell-Level Encryption via export/import (bacpac files).
See the article, Recommendations for using Cell Level Encryption in Azure SQL Database on how to
prevent losing keys when migrating data, and for other best practice guidance.
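A minimal CLE sketch that follows these points (an AES-256 symmetric key protected by a certificate). All names are hypothetical, and a database master key is assumed to already exist:

```sql
CREATE CERTIFICATE CLECert WITH SUBJECT = 'Protects the CLE symmetric key';
CREATE SYMMETRIC KEY CLEKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CLECert;
GO
OPEN SYMMETRIC KEY CLEKey DECRYPTION BY CERTIFICATE CLECert;

-- Encrypt on write into a varbinary column...
UPDATE dbo.Customers
SET TaxId_Encrypted = ENCRYPTBYKEY(KEY_GUID('CLEKey'), TaxId);

-- ...and decrypt on read; only principals allowed to open the key see plaintext.
SELECT CONVERT(nvarchar(20), DECRYPTBYKEY(TaxId_Encrypted)) AS TaxId
FROM dbo.Customers;

CLOSE SYMMETRIC KEY CLEKey;
```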
Keep in mind that Always Encrypted is primarily designed to protect sensitive data in use from high-privilege
users of Azure SQL Database (cloud operators, DBAs) - see Protect sensitive data in use from high-privileged,
unauthorized users. Be aware of the following challenges when using Always Encrypted to protect data from
application users:
By default, all Microsoft client drivers supporting Always Encrypted maintain a global (one per application)
cache of column encryption keys. Once a client driver acquires a plaintext column encryption key by
contacting a key store holding a column master key, the plaintext column encryption key is cached. This
makes isolating data from users of a multi-user application challenging. If your application impersonates end
users when interacting with a key store (such as Azure Key Vault), after a user's query populates the cache
with a column encryption key, a subsequent query that requires the same key but is triggered by another
user will use the cached key. The driver won't call the key store and it won't check if the second user has a
permission to access the column encryption key. As a result, the user can see the encrypted data even if the
user doesn't have access to the keys. To achieve the isolation of users within a multi-user application, you can
disable column encryption key caching. Disabling caching will cause additional performance overheads, as
the driver will need to contact the key store for each data encryption or decryption operation.
Protect data against unauthorized viewing by application users while preserving data format
Another technique for preventing unauthorized users from viewing data is to obfuscate or mask the data while
preserving data types and formats, so that user applications can continue to handle and display the data.
How to implement :
Use Dynamic Data Masking to obfuscate table columns.
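A minimal sketch, using hypothetical table and column names and the built-in masking functions:

```sql
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

ALTER TABLE dbo.Customers
    ALTER COLUMN CreditCardNo ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');

-- Grant UNMASK only to the principals that genuinely need plaintext values.
GRANT UNMASK TO [ComplianceAnalyst];
```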

NOTE
Always Encrypted does not work with Dynamic Data Masking. It is not possible to encrypt and mask the same column,
which implies that you need to prioritize protecting data in use vs. masking the data for your app users via Dynamic Data
Masking.

Best practices :

NOTE
Dynamic Data Masking cannot be used to protect data from high-privilege users. Masking policies do not apply to users
with administrative access like db_owner.

Don't permit app users to run ad-hoc queries (as they may be able to work around Dynamic Data
Masking).
See the article, Bypassing masking using inference or brute-force techniques for details.
Use a proper access control policy (via SQL permissions, roles, and RLS) to limit user permissions to
update the masked columns. Creating a mask on a column doesn't prevent updates to that column:
users who receive masked data when querying the masked column can still update the data if they have
write permissions.
Dynamic Data Masking doesn't preserve the statistical properties of the masked values. This may impact
query results (for example, queries containing filtering predicates or joins on the masked data).

Network security
Network security refers to access controls and best practices to secure your data in transit to Azure SQL
Database.
Configure my client to connect securely to SQL Database/SQL Managed Instance
Best practices on how to prevent client machines and applications with well-known vulnerabilities (for example,
using older TLS protocols and cipher suites) from connecting to Azure SQL Database and SQL Managed
Instance.
How to implement :
Ensure that client machines connecting to Azure SQL Database and SQL Managed Instance are using the
latest Transport Layer Security (TLS) version.
Best practices :
Enforce a minimal TLS version at the SQL Database server or SQL Managed Instance level using the
minimal TLS version setting. We recommend setting the minimal TLS version to 1.2, after testing to
confirm that your applications support it. TLS 1.2 includes fixes for vulnerabilities found in previous versions.
Configure all your apps and tools to connect to SQL Database with encryption enabled:
Encrypt = On, TrustServerCertificate = Off (or the equivalent with non-Microsoft drivers).
If your app uses a driver that doesn't support TLS or supports an older version of TLS, replace the driver,
if possible. If not possible, carefully evaluate the security risks.
Reduce attack vectors via vulnerabilities in SSL 2.0, SSL 3.0, TLS 1.0, and TLS 1.1 by disabling them on
client machines connecting to Azure SQL Database per Transport Layer Security (TLS) registry
settings.
Check cipher suites available on the client: Cipher Suites in TLS/SSL (Schannel SSP). Specifically,
disable 3DES per Configuring TLS Cipher Suite Order.
Minimize attack surface
Minimize the number of features that can be attacked by a malicious user. Implement network access controls
for Azure SQL Database.

Mentioned in: OSA Practice #5

How to implement :
In SQL Database:
Set Allow Access to Azure services to OFF at the server-level
Use VNet Service endpoints and VNet Firewall Rules.
Use Private Link.
In SQL Managed Instance:
Follow the guidelines in Network requirements.
Best practices :
Restricting access to Azure SQL Database and SQL Managed Instance by connecting on a private
endpoint (for example, using a private data path):
A managed instance can be isolated inside a virtual network to prevent external access. Applications
and tools that are in the same or peered virtual network in the same region could access it directly.
Applications and tools that are in a different region could use a virtual-network-to-virtual-network
connection or ExpressRoute circuit peering to establish the connection. Customers should use Network
Security Groups (NSG) to restrict access over port 1433 only to resources that require access to a
managed instance.
For a SQL Database, use the Private Link feature that provides a dedicated private IP for the server
inside your virtual network. You can also use Virtual network service endpoints with virtual network
firewall rules to restrict access to your servers.
Mobile users should use point-to-site VPN connections to connect over the data path.
Users connected to their on-premises network should use site-to-site VPN connection or
ExpressRoute to connect over the data path.
You can access Azure SQL Database and SQL Managed Instance by connecting to a public endpoint (for
example, using a public data path). The following best practices should be considered:
For a server in SQL Database, use IP firewall rules to restrict access to only authorized IP addresses (see the T-SQL sketch after this list).
For SQL Managed Instance, use Network Security Groups (NSG) to restrict access over port 3342 only
to required resources. For more information, see Use a managed instance securely with public
endpoints.
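For the IP firewall rules mentioned above, server-level rules can also be managed from T-SQL in the master database of the logical server. A minimal sketch (the rule name and addresses are hypothetical):

```sql
-- Run in the master database of the logical server.
EXECUTE sp_set_firewall_rule
    @name = N'AllowedClientRange',
    @start_ip_address = '203.0.113.10',
    @end_ip_address   = '203.0.113.20';

-- Remove the rule when the address range no longer needs access.
EXECUTE sp_delete_firewall_rule @name = N'AllowedClientRange';
```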

NOTE
The SQL Managed Instance public endpoint is not enabled by default and must be explicitly enabled. If company
policy disallows the use of public endpoints, use Azure Policy to prevent enabling public endpoints in the first place.

Set up Azure Networking components:


Follow Azure best practices for network security.
Plan the virtual network configuration per the best practices outlined in Azure Virtual Network frequently
asked questions (FAQ).
Segment a virtual network into multiple subnets and assign resources with a similar role to the same
subnet (for example, front-end vs. back-end resources).
Use Network Security Groups (NSGs) to control traffic between subnets inside the Azure virtual
network boundary.
Enable Azure Network Watcher for your subscription to monitor inbound and outbound network
traffic.
Configure Power BI for secure connections to SQL Database/SQL Managed Instance
Best practices :
For Power BI Desktop, use private data path whenever possible.
Ensure that Power BI Desktop is connecting using TLS 1.2 by setting the registry key on the client machine
as per Transport Layer Security (TLS) registry settings.
Restrict data access for specific users via Row-level security (RLS) with Power BI.
For Power BI Service, use the on-premises data gateway, keeping in mind Limitations and Considerations.
Configure App Service for secure connections to SQL Database/SQL Managed Instance
Best practices :
For a simple Web App, connecting over a public endpoint requires setting Allow Azure Services to ON.
Integrate your app with an Azure Virtual Network for private data path connectivity to a managed
instance. Optionally, you can also deploy a Web App with App Service Environments (ASE).
For a Web App with ASE, or a virtual network integrated Web App, connecting to a database in SQL Database,
you can use virtual network service endpoints and virtual network firewall rules to limit access from a
specific virtual network and subnet. Then set Allow Azure Services to OFF. You can also connect ASE to
a managed instance in SQL Managed Instance over a private data path.
Ensure that your Web App is configured per the article, Best practices for securing platform as a service
(PaaS) web and mobile applications using Azure App Service.
Install Web Application Firewall (WAF) to protect your web app from common exploits and vulnerabilities.
Configure Azure virtual machine hosting for secure connections to SQL Database/SQL Managed Instance
Best practices :
Use a combination of Allow and Deny rules on the NSGs of Azure virtual machines to control which
regions can be accessed from the VM.
Ensure that your VM is configured per the article, Security best practices for IaaS workloads in Azure.
Ensure that all VMs are associated with a specific virtual network and subnet.
Evaluate whether you need the default route 0.0.0.0/Internet per the guidance on forced tunneling.
If yes – for example, for a front-end subnet – then keep the default route.
If no – for example, for a middle-tier or back-end subnet – then enable forced tunneling so no traffic goes
over the Internet to reach on-premises (also known as cross-premises).
Implement optional default routes if you're using peering or connecting to on-premises.
Implement User Defined Routes if you need to send all traffic in the virtual network to a Network Virtual
Appliance for packet inspection.
Use virtual network service endpoints for secure access to PaaS services like Azure Storage via the Azure
backbone network.
Protect against Distributed Denial of Service (DDoS) attacks
Distributed Denial of Service (DDoS) attacks are attempts by a malicious user to send a flood of network traffic
to Azure SQL Database with the aim of overwhelming the Azure infrastructure and causing it to reject valid
logins and workload.

Mentioned in: OSA Practice #9

How to implement :
DDoS protection is automatically enabled as part of the Azure Platform. It includes always-on traffic monitoring
and real-time mitigation of network-level attacks on public endpoints.
Use Azure DDoS Protection to monitor public IP addresses associated to resources deployed in virtual
networks.
Use Advanced Threat Protection for Azure SQL Database to detect Denial of Service (DoS) attacks against
databases.
Best practices :
Following the practices described in Minimize attack surface helps minimize DDoS attack threats.
The Advanced Threat Protection Brute force SQL credentials alert helps to detect brute force attacks.
In some cases, the alert can even distinguish penetration testing workloads.
For Azure VM hosting applications connecting to SQL Database:
Follow recommendation to Restrict access through Internet-facing endpoints in Microsoft Defender
for Cloud.
Use virtual machine scale sets to run multiple instances of your application on Azure VMs.
Disable RDP and SSH access from the Internet to prevent brute-force attacks.

Monitoring, Logging, and Auditing


This section refers to capabilities to help you detect anomalous activities indicating unusual and potentially
harmful attempts to access or exploit databases. It also describes best practices to configure database auditing
to track and capture database events.
Protect databases against attacks
Advanced threat protection enables you to detect and respond to potential threats as they occur by providing
security alerts on anomalous activities.
How to implement :
Use Advanced Threat Protection for SQL to detect unusual and potentially harmful attempts to access or
exploit databases, including:
SQL injection attack.
Credentials theft/leak.
Privilege abuse.
Data exfiltration.
Best practices :
Configure Microsoft Defender for SQL for a specific server or a managed instance. You can also configure
Microsoft Defender for SQL for all servers and managed instances in a subscription by enabling
Microsoft Defender for Cloud.
For a full investigation experience, it's recommended to enable SQL Database Auditing. With auditing, you
can track database events and write them to an audit log in an Azure Storage account or an Azure Log
Analytics workspace.
Audit critical security events
Tracking of database events helps you understand database activity. You can gain insight into discrepancies and
anomalies that could indicate business concerns or suspected security violations. It also enables and facilitates
adherence to compliance standards.
How to implement :
Enable SQL Database Auditing or Managed Instance Auditing to track database events and write them to
an audit log in your Azure Storage account, Log Analytics workspace (preview), or Event Hubs (preview).
Audit logs can be written to an Azure Storage account, to a Log Analytics workspace for consumption by
Azure Monitor logs, or to an event hub for consumption using Event Hubs. You can configure any
combination of these options, and audit logs will be written to each.
Best practices :
When you configure SQL Database Auditing on your server, or Managed Instance Auditing, all
existing and newly created databases on that server will be audited.
By default, the auditing policy includes all actions (queries, stored procedures, and successful and failed logins)
against the databases, which may result in a high volume of audit logs. It's recommended that customers
configure auditing for different types of actions and action groups using PowerShell. Configuring this will
help control the number of audited actions and minimize the risk of event loss. Custom audit configurations
allow customers to capture only the audit data that is needed.
Audit logs can be consumed directly in the Azure portal, or from the storage location that was configured.

NOTE
Enabling auditing to Log Analytics will incur cost based on ingestion rates. Please be aware of the associated cost with
using this option, or consider storing the audit logs in an Azure storage account.

Further resources :


SQL Database Auditing
SQL Server Auditing
Secure audit logs
Restrict access to the storage account to support Separation of Duties and to separate DBA from Auditors.
How to implement :
When saving audit logs to Azure Storage, make sure that access to the storage account is restricted to the
minimal set of security principals. Control who has access to the storage account.
For more information, see Authorizing access to Azure Storage.
Best practices :
Controlling Access to the Audit Target is a key concept in separating DBA from Auditors.
When auditing access to sensitive data, consider securing the data with data encryption to avoid
information leakage to the Auditor. For more information, see the section Protect sensitive data in use
from high-privileged, unauthorized users.

Security Management
This section describes the different aspects and best practices for managing your database security posture. It
includes best practices for ensuring your databases are configured to meet security standards, for discovering
and classifying potentially sensitive data, and for tracking access to that data in your databases.
Ensure that the databases are configured to meet security best practices
Proactively improve your database security by discovering and remediating potential database vulnerabilities.
How to implement :
Enable SQL Vulnerability Assessment (VA) to scan your databases for security issues, and configure it to run
automatically on a periodic basis.
Best practices :
Initially, run VA on your databases and iterate by remediating failing checks that oppose security best
practices. Set up baselines for acceptable configurations until the scan comes out clean, or all checks have
passed.
Configure periodic recurring scans to run once a week and configure the relevant person to receive
summary emails.
Review the VA summary following each weekly scan. For any vulnerabilities found, evaluate the drift from
the previous scan result and determine if the check should be resolved. Review if there's a legitimate
reason for the change in configuration.
Resolve checks and update baselines where relevant. Create ticket items for resolving actions and track
these until they're resolved.
Further resources :
SQL Vulnerability Assessment
SQL Vulnerability Assessment service helps you identify database vulnerabilities
Identify and tag sensitive data
Discover columns that potentially contain sensitive data. What is considered sensitive data heavily depends on
the customer, compliance regulation, etc., and needs to be evaluated by the users in charge of that data. Classify
the columns to use advanced sensitivity-based auditing and protection scenarios.
How to implement :
Use SQL Data Discovery and Classification to discover, classify, label, and protect the sensitive data in your
databases.
View the classification recommendations that are created by the automated discovery in the SQL Data
Discovery and Classification dashboard. Accept the relevant classifications, such that your sensitive
data is persistently tagged with classification labels.
Manually add classifications for any additional sensitive data fields that were not discovered by the
automated mechanism.
For more information, see SQL Data Discovery and Classification.
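Classifications can also be applied in T-SQL, which keeps labels under source control when databases are deployed from scripts. A minimal sketch with a hypothetical column:

```sql
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info', RANK = MEDIUM);

-- Review the classifications currently applied in the database.
SELECT * FROM sys.sensitivity_classifications;
```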
Best practices :
Monitor the classification dashboard on a regular basis for an accurate assessment of the database's
classification state. A report on the database classification state can be exported or printed to share for
compliance and auditing purposes.
Continuously monitor the status of recommended sensitive data in SQL Vulnerability Assessment. Track
the sensitive data discovery rule and identify any drift in the recommended columns for classification.
Use classification in a way that is tailored to the specific needs of your organization. Customize your
Information Protection policy (sensitivity labels, information types, discovery logic) in the SQL
Information Protection policy in Microsoft Defender for Cloud.
Track access to sensitive data
Monitor who accesses sensitive data and capture queries on sensitive data in audit logs.
How to implement :
Use SQL Audit and Data Classification in combination.
In your SQL Database Audit log, you can track access specifically to sensitive data. You can also view
information such as the data that was accessed, as well as its sensitivity label. For more information,
see Data Discovery and Classification and Auditing access to sensitive data.
Best practices :
See best practices for the Auditing and Data Classification sections:
Audit critical security events
Identify and tag sensitive data
Visualize security and compliance status
Use a unified infrastructure security management system that strengthens the security posture of your data
centers (including databases in SQL Database). View a list of recommendations concerning the security of your
databases and compliance status.
How to implement :
Monitor SQL-related security recommendations and active threats in Microsoft Defender for Cloud.

Common security threats and potential mitigations


This section helps you find security measures to protect against certain attack vectors. It's expected that most
mitigations can be achieved by following one or more of the security guidelines above.
Security threat: Data exfiltration
Data exfiltration is the unauthorized copying, transfer, or retrieval of data from a computer or server. See a
definition for data exfiltration on Wikipedia.
Connecting to a server over a public endpoint presents a data exfiltration risk, because it requires customers to open their
firewalls to public IPs.
Scenario 1 : An application on an Azure VM connects to a database in Azure SQL Database. A rogue actor gets
access to the VM and compromises it. In this scenario, data exfiltration means that an external entity using the
rogue VM connects to the database, copies personal data, and stores it in a blob storage or a different SQL
Database in a different subscription.
Scenario 2 : A rogue DBA. This scenario is often raised by security-sensitive customers from regulated
industries. In this scenario, a high privilege user might copy data from Azure SQL Database to another
subscription not controlled by the data owner.
Potential mitigations :
Today, Azure SQL Database and SQL Managed Instance offer the following techniques for mitigating data
exfiltration threats:
Use a combination of Allow and Deny rules on the NSGs of Azure VMs to control which regions can be
accessed from the VM.
If using a server in SQL Database, set the following options:
Allow Azure Services to OFF.
Only allow traffic from the subnet containing your Azure VM by setting up a VNet Firewall rule.
Use Private Link
For SQL Managed Instance, using private IP access by default addresses the first data exfiltration concern of a
rogue VM. Turn on the subnet delegation feature on a subnet to automatically set the most restrictive policy
on a SQL Managed Instance subnet.
The Rogue DBA concern is more exposed with SQL Managed Instance as it has a larger surface area and
networking requirements are visible to customers. The best mitigation for this is applying all of the practices
in this security guide to prevent the Rogue DBA scenario in the first place (not only for data exfiltration).
Always Encrypted is one method to protect sensitive data by encrypting it and keeping the key inaccessible
for the DBA.

Security aspects of business continuity and availability


Most security standards address data availability in terms of operational continuity, achieved by implementing
redundancy and failover capabilities to avoid single points of failure. For disaster scenarios, it's a common
practice to keep backups of data and log files. The following section provides a high-level overview of the
capabilities that are built into Azure. It also provides additional options that can be configured to meet specific
needs:
Azure offers built-in high-availability: High-availability with SQL Database and SQL Managed Instance
The Business Critical tier includes failover groups, full and differential log backups, and point-in-time
restore backups enabled by default:
Automated backups
Recover a database using automated database backups - Point-in-time restore
Additional business continuity features such as the zone redundant configuration and auto-failover
groups across different Azure geos can be configured:
High-availability - Zone redundant configuration for Premium & Business Critical service tiers
High-availability - Zone redundant configuration for General Purpose service tier
Overview of business continuity

Next steps
See An overview of Azure SQL Database security capabilities
Azure Policy Regulatory Compliance controls for
Azure SQL Database & SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Regulatory Compliance in Azure Policy provides Microsoft created and managed initiative definitions, known as
built-ins, for the compliance domains and security controls related to different compliance standards. This
page lists the compliance domains and security controls for Azure SQL Database and SQL Managed
Instance. You can assign the built-ins for a security control individually to help make your Azure resources
compliant with the specific standard.
The title of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the
Policy Version column to view the source on the Azure Policy GitHub repo.

IMPORTANT
Each control below is associated with one or more Azure Policy definitions. These policies may help you assess compliance
with the control; however, there often is not a one-to-one or complete match between a control and one or more policies.
As such, Compliant in Azure Policy refers only to the policies themselves; this doesn't ensure you're fully compliant with
all requirements of a control. In addition, the compliance standard includes controls that aren't addressed by any Azure
Policy definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your overall compliance status.
The associations between controls and Azure Policy Regulatory Compliance definitions for these compliance standards
may change over time.

Australian Government ISM PROTECTED


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - Australian Government ISM PROTECTED. For more information about this
compliance standard, see Australian Government ISM PROTECTED.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
|---|---|---|---|---|
| Guidelines for System Management - System patching | 940 | When to patch security vulnerabilities - 940 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Guidelines for System Management - System patching | 940 | When to patch security vulnerabilities - 940 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Guidelines for System Management - System patching | 940 | When to patch security vulnerabilities - 940 | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Guidelines for System Management - System patching | 940 | When to patch security vulnerabilities - 940 | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Guidelines for System Management - System patching | 1144 | When to patch security vulnerabilities - 1144 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Guidelines for System Management - System patching | 1144 | When to patch security vulnerabilities - 1144 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Guidelines for System Management - System patching | 1144 | When to patch security vulnerabilities - 1144 | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Guidelines for System Management - System patching | 1144 | When to patch security vulnerabilities - 1144 | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Guidelines for Database Systems - Database management system software | 1260 | Database administrator accounts - 1260 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Guidelines for Database Systems - Database management system software | 1261 | Database administrator accounts - 1261 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Guidelines for Database Systems - Database management system software | 1262 | Database administrator accounts - 1262 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Guidelines for Database Systems - Database management system software | 1263 | Database administrator accounts - 1263 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Guidelines for Database Systems - Database management system software | 1264 | Database administrator accounts - 1264 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Guidelines for Database Systems - Database servers | 1425 | Protecting database server contents - 1425 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Guidelines for System Management - System patching | 1472 | When to patch security vulnerabilities - 1472 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Guidelines for System Management - System patching | 1472 | When to patch security vulnerabilities - 1472 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Guidelines for System Management - System patching | 1472 | When to patch security vulnerabilities - 1472 | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Guidelines for System Management - System patching | 1472 | When to patch security vulnerabilities - 1472 | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Guidelines for System Management - System patching | 1494 | When to patch security vulnerabilities - 1494 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Guidelines for System Management - System patching | 1494 | When to patch security vulnerabilities - 1494 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Guidelines for System Management - System patching | 1494 | When to patch security vulnerabilities - 1494 | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Guidelines for System Management - System patching | 1494 | When to patch security vulnerabilities - 1494 | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Guidelines for System Management - System patching | 1495 | When to patch security vulnerabilities - 1495 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Guidelines for System Management - System patching | 1495 | When to patch security vulnerabilities - 1495 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Guidelines for System Management - System patching | 1495 | When to patch security vulnerabilities - 1495 | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Guidelines for System Management - System patching | 1495 | When to patch security vulnerabilities - 1495 | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Guidelines for System Management - System patching | 1496 | When to patch security vulnerabilities - 1496 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Guidelines for System Management - System patching | 1496 | When to patch security vulnerabilities - 1496 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Guidelines for System Management - System patching | 1496 | When to patch security vulnerabilities - 1496 | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Guidelines for System Management - System patching | 1496 | When to patch security vulnerabilities - 1496 | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Guidelines for System Monitoring - Event logging and auditing | 1537 | Events to be logged - 1537 | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Guidelines for System Monitoring - Event logging and auditing | 1537 | Events to be logged - 1537 | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

Azure Security Benchmark


The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on
Azure. To see how this service completely maps to the Azure Security Benchmark, see the Azure Security
Benchmark mapping files.
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - Azure Security Benchmark.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
|---|---|---|---|---|
| Network Security | NS-2 | Secure cloud services with network controls | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Network Security | NS-2 | Secure cloud services with network controls | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Identity Management | IM-1 | Use centralized identity and authentication system | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Data Protection | DP-2 | Monitor anomalies and threats targeting sensitive data | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Data Protection | DP-4 | Enable data at rest encryption by default | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Data Protection | DP-5 | Use customer-managed key option in data at rest encryption when required | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Data Protection | DP-5 | Use customer-managed key option in data at rest encryption when required | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| Logging and Threat Detection | LT-1 | Enable threat detection capabilities | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Logging and Threat Detection | LT-1 | Enable threat detection capabilities | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Logging and Threat Detection | LT-2 | Enable threat detection for identity and access management | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Logging and Threat Detection | LT-2 | Enable threat detection for identity and access management | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Logging and Threat Detection | LT-3 | Enable logging for security investigation | Auditing on SQL server should be enabled | 2.0.0 |
| Logging and Threat Detection | LT-6 | Configure log storage retention | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0 |
| Incident Response | IR-3 | Detection and analysis - create incidents based on high-quality alerts | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-3 | Detection and analysis - create incidents based on high-quality alerts | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Incident Response | IR-5 | Detection and analysis - prioritize incidents | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-5 | Detection and analysis - prioritize incidents | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Posture and Vulnerability Management | PV-5 | Perform vulnerability assessments | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Posture and Vulnerability Management | PV-5 | Perform vulnerability assessments | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Posture and Vulnerability Management | PV-6 | Rapidly and automatically remediate vulnerabilities | SQL databases should have vulnerability findings resolved | 4.0.0 |

Azure Security Benchmark v1


The Azure Security Benchmark provides recommendations on how you can secure your cloud solutions on
Azure. To see how this service completely maps to the Azure Security Benchmark, see the Azure Security
Benchmark mapping files.
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - Azure Security Benchmark.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
|---|---|---|---|---|
| Network Security | 1.1 | Protect resources using Network Security Groups or Azure Firewall on your Virtual Network | SQL Server should use a virtual network service endpoint | 1.0.0 |
| Logging and Monitoring | 2.3 | Enable audit logging for Azure resources | Auditing on SQL server should be enabled | 2.0.0 |
| Logging and Monitoring | 2.3 | Enable audit logging for Azure resources | SQL Auditing settings should have Action-Groups configured to capture critical activities | 1.0.0 |
| Logging and Monitoring | 2.5 | Configure security log storage retention | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0 |
| Logging and Monitoring | 2.7 | Enable alerts for anomalous activity | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Logging and Monitoring | 2.7 | Enable alerts for anomalous activity | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Identity and Access Control | 3.9 | Use Azure Active Directory | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Data Protection | 4.5 | Use an active discovery tool to identify sensitive data | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Data Protection | 4.5 | Use an active discovery tool to identify sensitive data | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Data Protection | 4.8 | Encrypt sensitive information at rest | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Data Protection | 4.8 | Encrypt sensitive information at rest | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| Data Protection | 4.8 | Encrypt sensitive information at rest | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Vulnerability Management | 5.1 | Run automated vulnerability scanning tools | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Vulnerability Management | 5.1 | Run automated vulnerability scanning tools | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Vulnerability Management | 5.5 | Use a risk-rating process to prioritize the remediation of discovered vulnerabilities | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Data Recovery | 9.1 | Ensure regular automated back ups | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Data Recovery | 9.2 | Perform complete system backups and backup any customer managed keys | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |

Canada Federal PBMM


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - Canada Federal PBMM. For more information about this compliance
standard, see Canada Federal PBMM.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
|---|---|---|---|---|
| Access Control | AC-2(7) | Account Management \| Role-Based Schemes | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Audit and Accountability | AU-5 | Response to Audit Processing Failures | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-5 | Response to Audit Processing Failures | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-5 | Response to Audit Processing Failures | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-12 | Audit Generation | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-12 | Audit Generation | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-12 | Audit Generation | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Scanning | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Communications Protection | SC-28 | Protection of Information at Rest | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Communications Protection | SC-28 | Protection of Information at Rest | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| System and Communications Protection | SC-28 | Protection of Information at Rest | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Information Integrity | SI-2 | Flaw Remediation | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Information Integrity | SI-4 | Information System Monitoring | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Information Integrity | SI-4 | Information System Monitoring | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

CIS Microsoft Azure Foundations Benchmark 1.1.0


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - CIS Microsoft Azure Foundations Benchmark 1.1.0. For more information
about this compliance standard, see CIS Microsoft Azure Foundations Benchmark.
| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
|---|---|---|---|---|
| Security Center | CIS Microsoft Azure Foundations Benchmark recommendation 2.14 | Ensure ASC Default policy setting "Monitor SQL Auditing" is not "Disabled" | Auditing on SQL server should be enabled | 2.0.0 |
| Security Center | CIS Microsoft Azure Foundations Benchmark recommendation 2.15 | Ensure ASC Default policy setting "Monitor SQL Encryption" is not "Disabled" | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1 | Ensure that 'Auditing' is set to 'On' | Auditing on SQL server should be enabled | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.10 | Ensure SQL server's TDE protector is encrypted with BYOK (Use your own key) | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.10 | Ensure SQL server's TDE protector is encrypted with BYOK (Use your own key) | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2 | Ensure that 'AuditActionGroups' in 'auditing' policy for a SQL server is set properly | SQL Auditing settings should have Action-Groups configured to capture critical activities | 1.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.3 | Ensure that 'Auditing' Retention is 'greater than 90 days' | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.4 | Ensure that 'Advanced Data Security' on a SQL server is set to 'On' | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.4 | Ensure that 'Advanced Data Security' on a SQL server is set to 'On' | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.8 | Ensure that Azure Active Directory Admin is configured | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.9 | Ensure that 'Data encryption' is set to 'On' on a SQL Database | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |

CIS Microsoft Azure Foundations Benchmark 1.3.0


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - CIS Microsoft Azure Foundations Benchmark 1.3.0. For more information
about this compliance standard, see CIS Microsoft Azure Foundations Benchmark.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1.1 | Ensure that 'Auditing' is set to 'On' | Auditing on SQL server should be enabled | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1.2 | Ensure that 'Data encryption' is set to 'On' on a SQL Database | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.1.3 | Ensure that 'Auditing' Retention is 'greater than 90 days' | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2.1 | Ensure that Advanced Threat Protection (ATP) on a SQL server is set to 'Enabled' | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2.1 | Ensure that Advanced Threat Protection (ATP) on a SQL server is set to 'Enabled' | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2.2 | Ensure that Vulnerability Assessment (VA) is enabled on a SQL server by setting a Storage Account | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2.2 | Ensure that Vulnerability Assessment (VA) is enabled on a SQL server by setting a Storage Account | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.2.4 | Ensure that VA setting Send scan reports to is configured for a SQL server | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.4 | Ensure that Azure Active Directory Admin is configured | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.5 | Ensure SQL server's TDE protector is encrypted with Customer-managed key | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Database Services | CIS Microsoft Azure Foundations Benchmark recommendation 4.5 | Ensure SQL server's TDE protector is encrypted with Customer-managed key | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |

CMMC Level 3
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - CMMC Level 3. For more information about this compliance standard, see
Cybersecurity Maturity Model Certification (CMMC).

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Access Control | AC.1.001 | Limit information system access to authorized users, processes acting on behalf of authorized users, and devices (including other information systems). | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Access Control | AC.1.002 | Limit information system access to the types of transactions and functions that authorized users are permitted to execute. | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Access Control | AC.2.016 | Control the flow of CUI in accordance with approved authorizations. | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Audit and Accountability | AU.2.041 | Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions. | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU.2.041 | Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU.2.041 | Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU.2.042 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU.2.042 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU.2.042 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU.3.046 | Alert in the event of an audit logging process failure. | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU.3.046 | Alert in the event of an audit logging process failure. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU.3.046 | Alert in the event of an audit logging process failure. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Security Assessment | CA.2.158 | Periodically assess the security controls in organizational systems to determine if the controls are effective in their application. | Auditing on SQL server should be enabled | 2.0.0 |
| Security Assessment | CA.2.158 | Periodically assess the security controls in organizational systems to determine if the controls are effective in their application. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Security Assessment | CA.2.158 | Periodically assess the security controls in organizational systems to determine if the controls are effective in their application. | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Security Assessment | CA.3.161 | Monitor security controls on an ongoing basis to ensure the continued effectiveness of the controls. | Auditing on SQL server should be enabled | 2.0.0 |
| Security Assessment | CA.3.161 | Monitor security controls on an ongoing basis to ensure the continued effectiveness of the controls. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Security Assessment | CA.3.161 | Monitor security controls on an ongoing basis to ensure the continued effectiveness of the controls. | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Configuration Management | CM.2.064 | Establish and enforce security configuration settings for information technology products employed in organizational systems. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Configuration Management | CM.2.064 | Establish and enforce security configuration settings for information technology products employed in organizational systems. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Configuration Management | CM.3.068 | Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services. | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Incident Response | IR.2.092 | Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities. | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Recovery | RE.2.137 | Regularly perform and test data back-ups. | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Recovery | RE.3.139 | Regularly perform complete, comprehensive and resilient data backups as organizationally-defined. | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Risk Assessment | RM.2.141 | Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | RM.2.141 | Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RM.2.141 | Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI. | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Risk Assessment | RM.2.141 | Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Risk Assessment | RM.2.141 | Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI. | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Risk Assessment | RM.2.142 | Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | RM.2.142 | Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RM.2.142 | Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Risk Assessment | RM.2.142 | Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Risk Assessment | RM.2.142 | Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified. | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Risk Assessment | RM.2.143 | Remediate vulnerabilities in accordance with risk assessments. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | RM.2.143 | Remediate vulnerabilities in accordance with risk assessments. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RM.2.143 | Remediate vulnerabilities in accordance with risk assessments. | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Risk Assessment | RM.2.143 | Remediate vulnerabilities in accordance with risk assessments. | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Risk Assessment | RM.2.143 | Remediate vulnerabilities in accordance with risk assessments. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Risk Assessment | RM.2.143 | Remediate vulnerabilities in accordance with risk assessments. | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Risk Management | RM.3.144 | Periodically perform risk assessments to identify and prioritize risks according to the defined risk categories, risk sources and risk measurement criteria. | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| System and Communications Protection | SC.1.175 | Monitor, control, and protect communications (i.e., information transmitted or received by organizational systems) at the external boundaries and key internal boundaries of organizational systems. | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC.3.177 | Employ FIPS-validated cryptography when used to protect the confidentiality of CUI. | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| System and Communications Protection | SC.3.177 | Employ FIPS-validated cryptography when used to protect the confidentiality of CUI. | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| System and Communications Protection | SC.3.177 | Employ FIPS-validated cryptography when used to protect the confidentiality of CUI. | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Communications Protection | SC.3.181 | Separate user functionality from system management functionality. | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| System and Communications Protection | SC.3.183 | Deny network communications traffic by default and allow network communications traffic by exception (i.e., deny all, permit by exception). | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC.3.191 | Protect the confidentiality of CUI at rest. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Communications Protection | SC.3.191 | Protect the confidentiality of CUI at rest. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| System and Communications Protection | SC.3.191 | Protect the confidentiality of CUI at rest. | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Information Integrity | SI.1.210 | Identify, report, and correct information and information system flaws in a timely manner. | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Information Integrity | SI.2.216 | Monitor organizational systems, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Information Integrity | SI.2.216 | Monitor organizational systems, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| System and Information Integrity | SI.2.217 | Identify unauthorized use of organizational systems. | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Information Integrity | SI.2.217 | Identify unauthorized use of organizational systems. | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

FedRAMP High
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - FedRAMP High. For more information about this compliance standard,
see FedRAMP High.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Access Control | AC-2 | Account Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (1) | Automated System Account Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (7) | Role-based Schemes | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (12) | Account Monitoring / Atypical Usage | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Access Control | AC-3 | Access Enforcement | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-4 | Information Flow Enforcement | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Access Control | AC-4 | Information Flow Enforcement | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Access Control | AC-17 | Remote Access | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Access Control | AC-17 (1) | Automated Monitoring / Control | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Audit and Accountability | AU-6 | Audit Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-6 | Audit Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-6 (4) | Central Review and Analysis | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-6 (5) | Integration / Scanning and Monitoring Capabilities | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-6 (5) | Integration / Scanning and Monitoring Capabilities | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-6 (5) | Integration / Scanning and Monitoring Capabilities | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-11 | Audit Record Retention | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0 |
| Audit and Accountability | AU-12 | Audit Generation | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-12 | Audit Generation | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-12 | Audit Generation | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-12 (1) | System-wide / Time-correlated Audit Trail | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-12 (1) | System-wide / Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-12 (1) | System-wide / Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Contingency Planning | CP-6 | Alternate Storage Site | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Contingency Planning | CP-6 (1) | Separation from Primary Site | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Identification and Authentication | IA-2 | Identification and Authentication (organizational Users) | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Identification and Authentication | IA-4 | Identifier Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Incident Response | IR-4 | Incident Handling | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-4 | Incident Handling | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Incident Response | IR-5 | Incident Monitoring | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-5 | Incident Monitoring | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Scanning | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| System and Communications Protection | SC-7 | Boundary Protection | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| System and Communications Protection | SC-7 | Boundary Protection | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC-7 (3) | Access Points | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| System and Communications Protection | SC-7 (3) | Access Points | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC-12 | Cryptographic Key Establishment and Management | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| System and Communications Protection | SC-12 | Cryptographic Key Establishment and Management | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| System and Communications Protection | SC-28 | Protection of Information at Rest | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Communications Protection | SC-28 (1) | Cryptographic Protection | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Information Integrity | SI-2 | Flaw Remediation | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Information Integrity | SI-4 | Information System Monitoring | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Information Integrity | SI-4 | Information System Monitoring | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

FedRAMP Moderate
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - FedRAMP Moderate. For more information about this compliance
standard, see FedRAMP Moderate.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Access Control | AC-2 | Account Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (1) | Automated System Account Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (7) | Role-based Schemes | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (12) | Account Monitoring / Atypical Usage | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Access Control | AC-3 | Access Enforcement | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-4 | Information Flow Enforcement | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Access Control | AC-4 | Information Flow Enforcement | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Access Control | AC-17 | Remote Access | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Access Control | AC-17 (1) | Automated Monitoring / Control | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Audit and Accountability | AU-6 | Audit Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-6 | Audit Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-11 | Audit Record Retention | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0 |
| Audit and Accountability | AU-12 | Audit Generation | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-12 | Audit Generation | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-12 | Audit Generation | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Contingency Planning | CP-6 | Alternate Storage Site | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Contingency Planning | CP-6 (1) | Separation from Primary Site | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Identification and Authentication | IA-2 | Identification and Authentication (organizational Users) | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Identification and Authentication | IA-4 | Identifier Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Incident Response | IR-4 | Incident Handling | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-4 | Incident Handling | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Incident Response | IR-5 | Incident Monitoring | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-5 | Incident Monitoring | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Scanning | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Risk Assessment | RA-5 | Vulnerability Scanning | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| System and Communications Protection | SC-7 | Boundary Protection | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| System and Communications Protection | SC-7 | Boundary Protection | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC-7 (3) | Access Points | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| System and Communications Protection | SC-7 (3) | Access Points | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC-12 | Cryptographic Key Establishment and Management | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| System and Communications Protection | SC-12 | Cryptographic Key Establishment and Management | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| System and Communications Protection | SC-28 | Protection of Information at Rest | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Communications Protection | SC-28 (1) | Cryptographic Protection | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Information Integrity | SI-2 | Flaw Remediation | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Information Integrity | SI-4 | Information System Monitoring | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Information Integrity | SI-4 | Information System Monitoring | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

HIPAA HITRUST 9.2


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - HIPAA HITRUST 9.2. For more information about this compliance
standard, see HIPAA HITRUST 9.2.
| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Segregation in Networks | 0805.01m1Organizational.12 - 01.m | The organization's security gateways (e.g. firewalls) enforce security policies and are configured to filter traffic between domains, block unauthorized access, and are used to maintain segregation between internal wired, internal wireless, and external network segments (e.g., the Internet) including DMZs and enforce access control policies for each of the domains. | SQL Server should use a virtual network service endpoint | 1.0.0 |
| Segregation in Networks | 0806.01m2Organizational.12356 - 01.m | The organizations network is logically and physically segmented with a defined security perimeter and a graduated set of controls, including subnetworks for publicly accessible system components that are logically separated from the internal network, based on organizational requirements; and traffic is controlled based on functionality required and classification of the data/systems based on a risk assessment and their respective security requirements. | SQL Server should use a virtual network service endpoint | 1.0.0 |
| Segregation in Networks | 0894.01m2Organizational.7 - 01.m | Networks are segregated from production-level networks when migrating physical servers, applications or data to virtualized servers. | SQL Server should use a virtual network service endpoint | 1.0.0 |
| Audit Logging | 1211.09aa3System.4 - 09.aa | The organization verifies every ninety (90) days for each extract of covered information recorded that the data is erased or its use is still required. | Auditing on SQL server should be enabled | 2.0.0 |
| Back-up | 1616.09l1Organizational.16 - 09.l | Backup copies of information and software are made and tests of the media and restoration procedures are regularly performed at appropriate intervals. | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Back-up | 1621.09l2Organizational.1 - 09.l | Automated tools are used to track all backups. | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Network Controls | 0862.09m2Organizational.8 - 09.m | The organization ensures information systems protect the confidentiality and integrity of transmitted information, including during preparation for transmission and during reception. | SQL Server should use a virtual network service endpoint | 1.0.0 |
| Management of Removable Media | 0301.09o1Organizational.123 - 09.o | The organization, based on the data classification level, registers media (including laptops) prior to use, places reasonable restrictions on how such media be used, and provides an appropriate level of physical and logical protection (including encryption) for media containing covered information until properly destroyed or sanitized. | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Management of Removable Media | 0304.09o3Organizational.1 - 09.o | The organization restricts the use of writable removable media and personally-owned removable media in organizational systems. | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Management of Removable Media | 0304.09o3Organizational.1 - 09.o | The organization restricts the use of writable removable media and personally-owned removable media in organizational systems. | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| Control of Technical Vulnerabilities | 0709.10m1Organizational.1 - 10.m | Technical vulnerabilities are identified, evaluated for risk and corrected in a timely manner. | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Control of Technical Vulnerabilities | 0709.10m1Organizational.1 - 10.m | Technical vulnerabilities are identified, evaluated for risk and corrected in a timely manner. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Control of Technical Vulnerabilities | 0709.10m1Organizational.1 - 10.m | Technical vulnerabilities are identified, evaluated for risk and corrected in a timely manner. | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Control of Technical Vulnerabilities | 0710.10m2Organizational.1 - 10.m | A hardened configuration standard exists for all system and network components. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Control of Technical Vulnerabilities | 0716.10m3Organizational.1 - 10.m | The organization conducts an enterprise security posture review as needed but no less than once within every three-hundred-sixty-five (365) days, in accordance with organizational IS procedures. | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Control of Technical Vulnerabilities | 0719.10m3Organizational.5 - 10.m | The organization updates the list of information system vulnerabilities scanned within every thirty (30) days or when new vulnerabilities are identified and reported. | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |

IRS 1075 September 2016


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - IRS 1075 September 2016. For more information about this compliance
standard, see IRS 1075 September 2016.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Access Control | 9.3.1.2 | Account Management (AC-2) | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Risk Assessment | 9.3.14.3 | Vulnerability Scanning (RA-5) | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | 9.3.14.3 | Vulnerability Scanning (RA-5) | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | 9.3.14.3 | Vulnerability Scanning (RA-5) | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Communications Protection | 9.3.16.15 | Protection of Information at Rest (SC-28) | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Communications Protection | 9.3.16.15 | Protection of Information at Rest (SC-28) | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| System and Communications Protection | 9.3.16.15 | Protection of Information at Rest (SC-28) | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Information Integrity | 9.3.17.2 | Flaw Remediation (SI-2) | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Information Integrity | 9.3.17.4 | Information System Monitoring (SI-4) | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Information Integrity | 9.3.17.4 | Information System Monitoring (SI-4) | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Awareness and Training | 9.3.3.11 | Audit Generation (AU-12) | Auditing on SQL server should be enabled | 2.0.0 |
| Awareness and Training | 9.3.3.11 | Audit Generation (AU-12) | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Awareness and Training | 9.3.3.11 | Audit Generation (AU-12) | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Awareness and Training | 9.3.3.5 | Response to Audit Processing Failures (AU-5) | Auditing on SQL server should be enabled | 2.0.0 |
| Awareness and Training | 9.3.3.5 | Response to Audit Processing Failures (AU-5) | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Awareness and Training | 9.3.3.5 | Response to Audit Processing Failures (AU-5) | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

ISO 27001:2013
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - ISO 27001:2013. For more information about this compliance standard,
see ISO 27001:2013.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Cryptography | 10.1.1 | Policy on the use of cryptographic controls | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Operations security | 12.4.1 | Event Logging | Auditing on SQL server should be enabled | 2.0.0 |
| Operations security | 12.4.3 | Administrator and operator logs | Auditing on SQL server should be enabled | 2.0.0 |
| Operations security | 12.4.4 | Clock Synchronization | Auditing on SQL server should be enabled | 2.0.0 |
| Operations security | 12.6.1 | Management of technical vulnerabilities | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Asset management | 8.2.1 | Classification of information | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Access control | 9.2.3 | Management of privileged access rights | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |

New Zealand ISM Restricted


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - New Zealand ISM Restricted. For more information about this compliance
standard, see New Zealand ISM Restricted.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Information security monitoring | ISM-3 | 6.2.5 Conducting vulnerability assessments | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Information security monitoring | ISM-3 | 6.2.5 Conducting vulnerability assessments | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Information security monitoring | ISM-4 | 6.2.6 Resolving vulnerabilities | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Information security monitoring | ISM-4 | 6.2.6 Resolving vulnerabilities | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Infrastructure | INF-9 | 10.8.35 Security Architecture | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Access Control and Passwords | AC-11 | 16.4.30 Privileged Access Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control and Passwords | AC-17 | 16.6.9 Events to be logged | Auditing on SQL server should be enabled | 2.0.0 |
| Cryptography | CR-3 | 17.1.46 Reducing storage and physical transfer requirements | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Cryptography | CR-3 | 17.1.46 Reducing storage and physical transfer requirements | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| Cryptography | CR-3 | 17.1.46 Reducing storage and physical transfer requirements | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Gateway security | GS-2 | 19.1.11 Using Gateways | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Data management | DM-6 | 20.4.4 Database files | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Data management | DM-6 | 20.4.4 Database files | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

NIST SP 800-53 Rev. 5


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - NIST SP 800-53 Rev. 5. For more information about this compliance
standard, see NIST SP 800-53 Rev. 5.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Access Control | AC-2 | Account Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (1) | Automated System Account Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (7) | Privileged User Accounts | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-2 (12) | Account Monitoring for Atypical Usage | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Access Control | AC-3 | Access Enforcement | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Access Control | AC-4 | Information Flow Enforcement | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Access Control | AC-4 | Information Flow Enforcement | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Access Control | AC-16 | Security and Privacy Attributes | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Access Control | AC-16 | Security and Privacy Attributes | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Access Control | AC-17 | Remote Access | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Access Control | AC-17 (1) | Monitoring and Control | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Audit and Accountability | AU-6 | Audit Record Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-6 | Audit Record Review, Analysis, and Reporting | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-6 (4) | Central Review and Analysis | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-6 (4) | Central Review and Analysis | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-6 (5) | Integrated Analysis of Audit Records | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-6 (5) | Integrated Analysis of Audit Records | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-6 (5) | Integrated Analysis of Audit Records | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-11 | Audit Record Retention | SQL servers with auditing to storage account destination should be configured with 90 days retention or higher | 3.0.0 |
| Audit and Accountability | AU-12 | Audit Record Generation | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-12 | Audit Record Generation | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-12 | Audit Record Generation | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Audit and Accountability | AU-12 (1) | System-wide and Time-correlated Audit Trail | Auditing on SQL server should be enabled | 2.0.0 |
| Audit and Accountability | AU-12 (1) | System-wide and Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Audit and Accountability | AU-12 (1) | System-wide and Time-correlated Audit Trail | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Contingency Planning | CP-6 | Alternate Storage Site | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Contingency Planning | CP-6 (1) | Separation from Primary Site | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Identification and Authentication | IA-2 | Identification and Authentication (organizational Users) | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Identification and Authentication | IA-4 | Identifier Management | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Incident Response | IR-4 | Incident Handling | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-4 | Incident Handling | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Incident Response | IR-5 | Incident Monitoring | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Incident Response | IR-5 | Incident Monitoring | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Monitoring and Scanning | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| Risk Assessment | RA-5 | Vulnerability Monitoring and Scanning | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |
| Risk Assessment | RA-5 | Vulnerability Monitoring and Scanning | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Risk Assessment | RA-5 | Vulnerability Monitoring and Scanning | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Risk Assessment | RA-5 | Vulnerability Monitoring and Scanning | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| System and Communications Protection | SC-7 | Boundary Protection | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| System and Communications Protection | SC-7 | Boundary Protection | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC-7 (3) | Access Points | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| System and Communications Protection | SC-7 (3) | Access Points | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| System and Communications Protection | SC-12 | Cryptographic Key Establishment and Management | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| System and Communications Protection | SC-12 | Cryptographic Key Establishment and Management | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| System and Communications Protection | SC-28 | Protection of Information at Rest | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Communications Protection | SC-28 (1) | Cryptographic Protection | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| System and Information Integrity | SI-2 | Flaw Remediation | SQL databases should have vulnerability findings resolved | 4.0.0 |
| System and Information Integrity | SI-4 | System Monitoring | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1 |
| System and Information Integrity | SI-4 | System Monitoring | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2 |

PCI DSS 3.2.1


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see PCI
DSS 3.2.1. For more information about this compliance standard, see PCI DSS 3.2.1.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Requirement 1 | PCI DSS v3.2.1 1.3.4 | PCI DSS requirement 1.3.4 | Auditing on SQL server should be enabled | 2.0.0 |
| Requirement 10 | PCI DSS v3.2.1 10.5.4 | PCI DSS requirement 10.5.4 | Auditing on SQL server should be enabled | 2.0.0 |
| Requirement 11 | PCI DSS v3.2.1 11.2.1 | PCI DSS requirement 11.2.1 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Requirement 3 | PCI DSS v3.2.1 3.2 | PCI DSS requirement 3.2 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Requirement 3 | PCI DSS v3.2.1 3.4 | PCI DSS requirement 3.4 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Requirement 4 | PCI DSS v3.2.1 4.1 | PCI DSS requirement 4.1 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Requirement 5 | PCI DSS v3.2.1 5.1 | PCI DSS requirement 5.1 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Requirement 6 | PCI DSS v3.2.1 6.2 | PCI DSS requirement 6.2 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Requirement 6 | PCI DSS v3.2.1 6.5.3 | PCI DSS requirement 6.5.3 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Requirement 6 | PCI DSS v3.2.1 6.6 | PCI DSS requirement 6.6 | SQL databases should have vulnerability findings resolved | 4.0.0 |
| Requirement 7 | PCI DSS v3.2.1 7.2.1 | PCI DSS requirement 7.2.1 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Requirement 8 | PCI DSS v3.2.1 8.3.1 | PCI DSS requirement 8.3.1 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |

RMIT Malaysia
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - RMIT Malaysia. For more information about this compliance standard, see
RMIT Malaysia.

| Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub) |
| --- | --- | --- | --- | --- |
| Cryptography | RMiT 10.16 | Cryptography - 10.16 | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Cryptography | RMiT 10.16 | Cryptography - 10.16 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Cryptography | RMiT 10.19 | Cryptography - 10.19 | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| Network Resilience | RMiT 10.33 | Network Resilience - 10.33 | Configure Azure SQL Server to disable public network access | 1.0.0 |
| Network Resilience | RMiT 10.33 | Network Resilience - 10.33 | Configure Azure SQL Server to enable private endpoint connections | 1.0.0 |
| Network Resilience | RMiT 10.33 | Network Resilience - 10.33 | Private endpoint connections on Azure SQL Database should be enabled | 1.1.0 |
| Network Resilience | RMiT 10.39 | Network Resilience - 10.39 | SQL Server should use a virtual network service endpoint | 1.0.0 |
| Cloud Services | RMiT 10.49 | Cloud Services - 10.49 | SQL Database should avoid using GRS backup redundancy | 2.0.0 |
| Cloud Services | RMiT 10.49 | Cloud Services - 10.49 | SQL Managed Instances should avoid using GRS backup redundancy | 2.0.0 |
| Cloud Services | RMiT 10.51 | Cloud Services - 10.51 | Long-term geo-redundant backup should be enabled for Azure SQL Databases | 2.0.0 |
| Cloud Services | RMiT 10.53 | Cloud Services - 10.53 | SQL servers should use customer-managed keys to encrypt data at rest | 2.0.1 |
| Access Control | RMiT 10.54 | Access Control - 10.54 | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0 |
| Security of Digital Services | RMiT 10.66 | Security of Digital Services - 10.66 | Deploy - Configure diagnostic settings for SQL Databases to Log Analytics workspace | 3.0.0 |
| Data Loss Prevention (DLP) | RMiT 11.15 | Data Loss Prevention (DLP) - 11.15 | Configure Azure SQL Server to disable public network access | 1.0.0 |
| Data Loss Prevention (DLP) | RMiT 11.15 | Data Loss Prevention (DLP) - 11.15 | SQL managed instances should use customer-managed keys to encrypt data at rest | 2.0.0 |
| Data Loss Prevention (DLP) | RMiT 11.15 | Data Loss Prevention (DLP) - 11.15 | Transparent Data Encryption on SQL databases should be enabled | 2.0.0 |
| Security Operations Centre (SOC) | RMiT 11.17 | Security Operations Centre (SOC) - 11.17 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Security Operations Centre (SOC) | RMiT 11.17 | Security Operations Centre (SOC) - 11.17 | Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports | 2.0.0 |
| Security Operations Centre (SOC) | RMiT 11.18 | Security Operations Centre (SOC) - 11.18 | Auditing on SQL server should be enabled | 2.0.0 |
| Security Operations Centre (SOC) | RMiT 11.18 | Security Operations Centre (SOC) - 11.18 | Auditing on SQL server should be enabled | 2.0.0 |
| Security Operations Centre (SOC) | RMiT 11.18 | Security Operations Centre (SOC) - 11.18 | SQL Auditing settings should have Action-Groups configured to capture critical activities | 1.0.0 |
| Security Operations Centre (SOC) | RMiT 11.18 | Security Operations Centre (SOC) - 11.18 | SQL Auditing settings should have Action-Groups configured to capture critical activities | 1.0.0 |
| Cybersecurity Operations | RMiT 11.8 | Cybersecurity Operations - 11.8 | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1 |
| Cybersecurity Operations | RMiT 11.8 | Cybersecurity Operations - 11.8 | Vulnerability assessment should be enabled on your SQL servers | 2.0.0 |
| Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | Azure SQL Database should be running TLS version 1.2 or newer | 2.0.0 |
| Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | Public network access on Azure SQL Database should be disabled | 1.1.0 |
| Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | SQL Managed Instance should have the minimal TLS version of 1.2 | 1.0.1 |
| Control Measures on Cybersecurity | RMiT Appendix 5.6 | Control Measures on Cybersecurity - Appendix 5.6 | Virtual network firewall rule on Azure SQL Database should be enabled to allow traffic from the specified subnet | 1.0.0 |
| Control Measures on Cybersecurity | RMiT Appendix 5.7 | Control Measures on Cybersecurity - Appendix 5.7 | Configure Azure SQL Server to enable private endpoint connections | 1.0.0 |

UK OFFICIAL and UK NHS


To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see
Azure Policy Regulatory Compliance - UK OFFICIAL and UK NHS. For more information about this compliance
standard, see UK OFFICIAL.

Domain | Control ID | Control title | Policy (Azure portal) | Policy version (GitHub)
Identity and authentication | 10 | Identity and authentication | An Azure Active Directory administrator should be provisioned for SQL servers | 1.0.0
Audit information for users | 13 | Audit information for users | Auditing on SQL server should be enabled | 2.0.0
Audit information for users | 13 | Audit information for users | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Asset protection and resilience | 2.3 | Data at rest protection | Transparent Data Encryption on SQL databases should be enabled | 2.0.0
Operational security | 5.2 | Vulnerability management | Azure Defender for SQL should be enabled for unprotected Azure SQL servers | 2.0.1
Operational security | 5.2 | Vulnerability management | Azure Defender for SQL should be enabled for unprotected SQL Managed Instances | 1.0.2
Operational security | 5.2 | Vulnerability management | SQL databases should have vulnerability findings resolved | 4.0.0
Operational security | 5.2 | Vulnerability management | Vulnerability assessment should be enabled on SQL Managed Instance | 1.0.1
Operational security | 5.2 | Vulnerability management | Vulnerability assessment should be enabled on your SQL servers | 2.0.0

Next steps
Learn more about Azure Policy Regulatory Compliance.
See the built-ins on the Azure Policy GitHub repo.
Microsoft Defender for SQL
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Microsoft Defender for SQL is a Defender plan in Microsoft Defender for Cloud. Microsoft Defender for SQL
includes functionality for surfacing and mitigating potential database vulnerabilities, and detecting anomalous
activities that could indicate a threat to your database. It provides a single go-to location for enabling and
managing these capabilities.

What are the benefits of Microsoft Defender for SQL?


Microsoft Defender for SQL provides a set of advanced SQL security capabilities, including SQL Vulnerability
Assessment and Advanced Threat Protection.
Vulnerability Assessment is an easy-to-configure service that can discover, track, and help you remediate
potential database vulnerabilities. It provides visibility into your security state, and it includes actionable steps
to resolve security issues and enhance your database fortifications.
Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts
to access or exploit your database. It continuously monitors your database for suspicious activities, and it
provides immediate security alerts on potential vulnerabilities, SQL injection attacks, and anomalous
database access patterns. Advanced Threat Protection alerts provide details of the suspicious activity and
recommend action on how to investigate and mitigate the threat.
Enable Microsoft Defender for SQL once to enable all these included features. With one click, you can enable
Microsoft Defender for all databases on your server in Azure or in your SQL Managed Instance. Enabling or
managing Microsoft Defender for SQL settings requires belonging to the SQL security manager role, or one of
the database or server admin roles.
For more information about Microsoft Defender for SQL pricing, see the Microsoft Defender for Cloud pricing
page.

Enable Microsoft Defender for SQL


There are multiple ways to enable Microsoft Defender plans. You can enable it at the subscription level
(recommended) either:
In Microsoft Defender for Cloud in the Azure portal
Programmatically with the REST API, Azure CLI, PowerShell, or Azure Policy
Alternatively, you can enable it at the resource level as described in Enable Microsoft Defender for Azure SQL
Database at the resource level.
When you enable it at the subscription level, all databases in Azure SQL Database and Azure SQL Managed
Instance are protected. You can then disable them individually if you choose. If you want to manually manage
which databases are protected, disable at the subscription level and enable each database that you want
protected.
Enable Microsoft Defender for Azure SQL Database at the subscription level in Microsoft Defender for
Cloud
To enable Microsoft Defender for Azure SQL Database at the subscription level from within Microsoft Defender
for Cloud:
1. From the Azure portal, open Defender for Cloud.
2. From Defender for Cloud's menu, select Environment Settings.
3. Select the relevant subscription.
4. Change the plan setting to On.

5. Select Save.
Enable Microsoft Defender plans programmatically
The flexibility of Azure allows for a number of programmatic methods for enabling Microsoft Defender plans.
Use any of the following tools to enable Microsoft Defender for your subscription:

Method | Instructions
REST API | Pricings API
Azure CLI | az security pricing
PowerShell | Set-AzSecurityPricing
Azure Policy | Bundle Pricings

Enable Microsoft Defender for Azure SQL Database at the resource level
We recommend enabling Microsoft Defender plans at the subscription level so that new resources are
automatically protected. However, if you have an organizational reason to enable Microsoft Defender for Cloud
at the server level, use the following steps:
1. From the Azure portal, open your server or managed instance.
2. Under the Security heading, select Defender for Cloud.
3. Select Enable Microsoft Defender for SQL.
NOTE
A storage account is automatically created and configured to store your Vulnerability Assessment scan results. If
you've already enabled Microsoft Defender for another server in the same resource group and region, then the existing
storage account is used.
The cost of Microsoft Defender for SQL is aligned with Microsoft Defender for Cloud standard tier pricing per node, where
a node is the entire server or managed instance. You are thus paying only once for protecting all databases on the server
or managed instance with Microsoft Defender for SQL. You can evaluate Microsoft Defender for Cloud with a free trial.

Manage Microsoft Defender for SQL settings


To view and manage Microsoft Defender for SQL settings:
1. From the Security area of your server or managed instance, select Defender for Cloud.
On this page, you'll see the status of Microsoft Defender for SQL.
2. If Microsoft Defender for SQL is enabled, you'll see a Configure link. To edit the settings for Microsoft
Defender for SQL, select Configure.

3. Make the necessary changes and select Save.

Next steps
Learn more about Vulnerability Assessment
Learn more about Advanced Threat Protection
Learn more about Microsoft Defender for Cloud
SQL Advanced Threat Protection
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
SQL Server on Azure VM Azure Arc-enabled SQL Server
Advanced Threat Protection for Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics,
SQL Server on Azure Virtual Machines and Azure Arc-enabled SQL Server detects anomalous activities
indicating unusual and potentially harmful attempts to access or exploit databases.
Advanced Threat Protection is part of the Microsoft Defender for SQL offering, which is a unified package for
advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central
Microsoft Defender for SQL portal.

Overview
Advanced Threat Protection provides a new layer of security, which enables customers to detect and respond to
potential threats as they occur by providing security alerts on anomalous activities. Users receive an alert upon
suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database
access and query patterns. Advanced Threat Protection integrates alerts with Microsoft Defender for Cloud,
which include details of the suspicious activity and recommended actions on how to investigate and mitigate the threat.
Advanced Threat Protection makes it simple to address potential threats to the database without the need to be
a security expert or manage advanced security monitoring systems.
For a full investigation experience, it is recommended to enable auditing, which writes database events to an
audit log in your Azure storage account. To enable auditing, see Auditing for Azure SQL Database and Azure
Synapse or Auditing for Azure SQL Managed Instance.

Alerts
Advanced Threat Protection detects anomalous activities indicating unusual and potentially harmful attempts to
access or exploit databases. For a list of alerts, see the Alerts for SQL Database and Azure Synapse Analytics in
Microsoft Defender for Cloud.

Explore detection of a suspicious event


You receive an email notification upon detection of anomalous database activities. The email provides
information on the suspicious security event including the nature of the anomalous activities, database name,
server name, application name, and the event time. In addition, the email provides information on possible
causes and recommended actions to investigate and mitigate the potential threat to the database.
1. Click the View recent SQL alerts link in the email to launch the Azure portal and show the Microsoft
Defender for Cloud alerts page, which provides an overview of active threats detected on the database.

2. Click a specific alert to get additional details and actions for investigating this threat and remediating
future threats.
For example, SQL injection is one of the most common Web application security issues on the Internet
that is used to attack data-driven applications. Attackers take advantage of application vulnerabilities to
inject malicious SQL statements into application entry fields, breaching or modifying data in the
database. For SQL Injection alerts, the alert’s details include the vulnerable SQL statement that was
exploited.

Explore alerts in the Azure portal


Advanced Threat Protection integrates its alerts with Microsoft Defender for Cloud. Live SQL Advanced Threat
Protection tiles within the database and SQL Microsoft Defender for Cloud blades in the Azure portal track the
status of active threats.
Click Advanced Threat Protection alert to launch the Microsoft Defender for Cloud alerts page and get an
overview of active SQL threats detected on the database.
Next steps
Learn more about Advanced Threat Protection in Azure SQL Database & Azure Synapse.
Learn more about Advanced Threat Protection in Azure SQL Managed Instance.
Learn more about Microsoft Defender for SQL.
Learn more about Azure SQL Database auditing.
Learn more about Microsoft Defender for Cloud.
For more information on pricing, see the Azure SQL Database pricing page.
Data Discovery & Classification
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Data Discovery & Classification is built into Azure SQL Database, Azure SQL Managed Instance, and Azure
Synapse Analytics. It provides basic capabilities for discovering, classifying, labeling, and reporting the sensitive
data in your databases.
Your most sensitive data might include business, financial, healthcare, or personal information. Data Discovery
& Classification can serve as infrastructure for:
Helping to meet standards for data privacy and requirements for regulatory compliance.
Various security scenarios, such as monitoring (auditing) access to sensitive data.
Controlling access to and hardening the security of databases that contain highly sensitive data.

NOTE
For information about SQL Server on-premises, see SQL Data Discovery & Classification.

What is Data Discovery & Classification?


Data Discovery & Classification currently supports the following capabilities:
Discovery and recommendations: The classification engine scans your database and identifies
columns that contain potentially sensitive data. It then provides you with an easy way to review and apply
recommended classification via the Azure portal.
Labeling: You can apply sensitivity-classification labels persistently to columns by using new metadata
attributes that have been added to the SQL Server database engine. This metadata can then be used for
sensitivity-based auditing scenarios.
Query result-set sensitivity: The sensitivity of a query result set is calculated in real time for auditing
purposes.
Visibility: You can view the database-classification state in a detailed dashboard in the Azure portal. Also,
you can download a report in Excel format to use for compliance and auditing purposes and other needs.

Discover, classify, and label sensitive columns


This section describes the steps for:
Discovering, classifying, and labeling columns that contain sensitive data in your database.
Viewing the current classification state of your database and exporting reports.
The classification includes two metadata attributes:
Labels: The main classification attributes, used to define the sensitivity level of the data stored in the column.
Information types: Attributes that provide more granular information about the type of data stored in the
column.
Information Protection policy
Azure SQL offers both SQL Information Protection policy and Microsoft Information Protection policy in data
classification, and you can choose either of these two policies based on your requirement.

SQL Information Protection policy


Data Discovery & Classification comes with a built-in set of sensitivity labels and information types with
discovery logic which is native to the SQL logical server. You can continue using the protection labels available in
the default policy file, or you can customize this taxonomy. You can define a set and ranking of classification
constructs specifically for your environment.
Define and customize your classification taxonomy
You define and customize your classification taxonomy in one central place for your entire Azure organization.
That location is in Microsoft Defender for Cloud, as part of your security policy. Only someone with
administrative rights on the organization's root management group can do this task.
As part of policy management, you can define custom labels, rank them, and associate them with a selected set
of information types. You can also add your own custom information types and configure them with string
patterns. The patterns are added to the discovery logic for identifying this type of data in your databases.
For more information, see Customize the SQL information protection policy in Microsoft Defender for Cloud
(Preview).
After the organization-wide policy has been defined, you can continue classifying individual databases by using
your customized policy.
Classify database in SQL Information Protection policy mode

NOTE
The example below uses Azure SQL Database, but you should select the appropriate product for which you want
to configure Data Discovery & Classification.

1. Go to the Azure portal.


2. Go to Data Discovery & Classification under the Security heading in your Azure SQL Database pane.
The Overview tab includes a summary of the current classification state of the database. The summary
includes a detailed list of all classified columns, which you can also filter to show only specific schema
parts, information types, and labels. If you haven’t classified any columns yet, skip to step 4.
3. To download a report in Excel format, select Export in the top menu of the pane.
4. To begin classifying your data, select the Classification tab on the Data Discovery & Classification
page.
The classification engine scans your database for columns containing potentially sensitive data and
provides a list of recommended column classifications.
5. View and apply classification recommendations:
To view the list of recommended column classifications, select the recommendations panel at the
bottom of the pane.
To accept a recommendation for a specific column, select the check box in the left column of the
relevant row. To mark all recommendations as accepted, select the leftmost check box in the
recommendations table header.
To apply the selected recommendations, select Accept selected recommendations .
6. You can also classify columns manually, as an alternative or in addition to the recommendation-based
classification:
a. Select Add classification in the top menu of the pane.
b. In the context window that opens, select the schema, table, and column that you want to classify,
and the information type and sensitivity label.
c. Select Add classification at the bottom of the context window.

7. To complete your classification and persistently label (tag) the database columns with the new
classification metadata, select Save in the Classification page.
Microsoft Information Protection policy
Microsoft Information Protection (MIP) labels provide a simple and uniform way for users to classify sensitive
data uniformly across different Microsoft applications. MIP sensitivity labels are created and managed in
Microsoft 365 compliance center. To learn how to create and publish MIP sensitive labels in Microsoft 365
compliance center, see the article, Create and publish sensitivity labels.
Prerequisites to switch to MIP policy
The current user has tenant wide security admin permissions to apply policy at the tenant root management
group level. For more information, see Grant tenant-wide permissions to yourself.
Your tenant has an active Microsoft 365 subscription and you have labels published for the current user. For
more information, see Create and configure sensitivity labels and their policies.
Classify database in Microsoft Information Protection policy mode
1. Go to the Azure portal.
2. Navigate to your database in Azure SQL Database
3. Go to Data Discover y & Classification under the Security heading in your database pane.
4. To select Microsoft Information Protection policy, select the Overview tab, and select Configure.
5. Select Microsoft Information Protection policy in the Information Protection policy options, and
select Save.
6. If you go to the Classification tab, or select Add classification , you will now see M365 sensitivity
labels appear in the Sensitivity label dropdown.

Information type is [n/a] while you are in MIP policy mode and automatic data discovery &
recommendations remain disabled.
A warning icon may appear against an already classified column if the column was classified using a
different Information Protection policy than the currently active policy. For example, if the column was
classified with a label using SQL Information Protection policy earlier and you are now in Microsoft
Information Protection policy mode, you will see a warning icon against that specific column. This
warning icon does not indicate any problem; it is used only for information purposes.

Audit access to sensitive data


An important aspect of the classification is the ability to monitor access to sensitive data. Azure SQL Auditing
has been enhanced to include a new field in the audit log called data_sensitivity_information . This field logs
the sensitivity classifications (labels) of the data that was returned by a query.

These are the activities that are actually auditable with sensitivity information:
ALTER TABLE ... DROP COLUMN
BULK INSERT
DELETE
INSERT
MERGE
UPDATE
UPDATETEXT
WRITETEXT
DROP TABLE
BACKUP
DBCC CloneDatabase
SELECT INTO
INSERT INTO EXEC
TRUNCATE TABLE
DBCC SHOW_STATISTICS
sys.dm_db_stats_histogram
Use sys.fn_get_audit_file to return information from an audit file stored in an Azure Storage account.
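For example, a minimal T-SQL sketch along these lines reads audited statements together with the sensitivity labels they touched; the storage URL is a placeholder that you would replace with the location of your own audit logs:

-- Illustrative only: point the path at the audit logs for your server and database.
SELECT event_time,
       server_principal_name,
       database_name,
       statement,
       data_sensitivity_information
FROM sys.fn_get_audit_file(
         'https://<storage_account>.blob.core.windows.net/sqldbauditlogs/<server>/<database>/',
         DEFAULT,
         DEFAULT)
WHERE data_sensitivity_information IS NOT NULL;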

Permissions
These built-in roles can read the data classification of a database:
Owner
Reader
Contributor
SQL Security Manager
User Access Administrator
These are the actions required to read the data classification of a database:
Microsoft.Sql/servers/databases/currentSensitivityLabels/*
Microsoft.Sql/servers/databases/recommendedSensitivityLabels/*
Microsoft.Sql/servers/databases/schemas/tables/columns/sensitivityLabels/*
These built-in roles can modify the data classification of a database:
Owner
Contributor
SQL Security Manager
This is the action required to modify the data classification of a database:
Microsoft.Sql/servers/databases/schemas/tables/columns/sensitivityLabels/*
Learn more about role-based permissions in Azure RBAC.

NOTE
The Azure SQL built-in roles in this section apply to a dedicated SQL pool (formerly SQL DW) but are not available for
dedicated SQL pools and other SQL resources within Azure Synapse workspaces. For SQL resources in Azure Synapse
workspaces, use the available actions for data classification to create custom Azure roles as needed for labelling. For more
information on the Microsoft.Synapse/workspaces/sqlPools provider operations, see Microsoft.Synapse.

Manage classifications
You can use T-SQL, a REST API, or PowerShell to manage classifications.
Use T-SQL
You can use T-SQL to add or remove column classifications, and to retrieve all classifications for the entire
database.

NOTE
When you use T-SQL to manage labels, there's no validation that labels that you add to a column exist in the
organization's information-protection policy (the set of labels that appear in the portal recommendations). So, it's up to
you to validate this.

For information about using T-SQL for classifications, see the following references:
To add or update the classification of one or more columns: ADD SENSITIVITY CLASSIFICATION
To remove the classification from one or more columns: DROP SENSITIVITY CLASSIFICATION
To view all classifications on the database: sys.sensitivity_classifications
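As a minimal sketch (the schema, table, and column names here are illustrative, and the label and information type are assumed to exist in your information protection policy), classifying a column, listing the classifications, and removing one could look like this:

-- Label a column with a sensitivity classification.
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- List the classifications currently defined in the database.
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name AS table_name,
       c.name AS column_name,
       sc.label,
       sc.information_type
FROM sys.sensitivity_classifications AS sc
JOIN sys.objects AS o ON o.object_id = sc.major_id
JOIN sys.columns AS c ON c.object_id = sc.major_id AND c.column_id = sc.minor_id;

-- Remove the classification when it is no longer needed.
DROP SENSITIVITY CLASSIFICATION FROM dbo.Customers.Email;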
Use PowerShell cmdlets
Manage classifications and recommendations for Azure SQL Database and Azure SQL Managed Instance using
PowerShell.
PowerShell cmdlets for Azure SQL Database
Get-AzSqlDatabaseSensitivityClassification
Set-AzSqlDatabaseSensitivityClassification
Remove-AzSqlDatabaseSensitivityClassification
Get-AzSqlDatabaseSensitivityRecommendation
Enable-AzSqlDatabaseSensitivityRecommendation
Disable-AzSqlDatabaseSensitivityRecommendation
PowerShell cmdlets for Azure SQL Managed Instance
Get-AzSqlInstanceDatabaseSensitivityClassification
Set-AzSqlInstanceDatabaseSensitivityClassification
Remove-AzSqlInstanceDatabaseSensitivityClassification
Get-AzSqlInstanceDatabaseSensitivityRecommendation
Enable-AzSqlInstanceDatabaseSensitivityRecommendation
Disable-AzSqlInstanceDatabaseSensitivityRecommendation
Use the REST API
You can use the REST API to programmatically manage classifications and recommendations. The published
REST API supports the following operations:
Create Or Update: Creates or updates the sensitivity label of the specified column.
Delete: Deletes the sensitivity label of the specified column.
Disable Recommendation: Disables sensitivity recommendations on the specified column.
Enable Recommendation: Enables sensitivity recommendations on the specified column. (Recommendations
are enabled by default on all columns.)
Get: Gets the sensitivity label of the specified column.
List Current By Database: Gets the current sensitivity labels of the specified database.
List Recommended By Database: Gets the recommended sensitivity labels of the specified database.

Retrieve classifications metadata using SQL drivers


You can use the following SQL drivers to retrieve classification metadata:
ODBC Driver
OLE DB Driver
JDBC Driver
Microsoft Drivers for PHP for SQL Server

FAQ - Advanced classification capabilities


Question: Will Microsoft Purview replace SQL Data Discovery & Classification, or will SQL Data Discovery &
Classification be retired soon?
Answer: We continue to support SQL Data Discovery & Classification and encourage you to adopt Microsoft
Purview, which has richer capabilities to drive advanced classification capabilities and data governance. If we
decide to retire any service, feature, API or SKU, you will receive advance notice including a migration or
transition path. Learn more about Microsoft Lifecycle policies here.

Next steps
Consider configuring Azure SQL Auditing for monitoring and auditing access to your classified sensitive data.
For a presentation that includes data Discovery & Classification, see Discovering, classifying, labeling &
protecting SQL data | Data Exposed.
To classify your Azure SQL Databases and Azure Synapse Analytics with Microsoft Purview labels using T-
SQL commands, see Classify your Azure SQL data using Microsoft Purview labels.
Dynamic data masking
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics support dynamic data
masking. Dynamic data masking limits sensitive data exposure by masking it to non-privileged users.
Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate
how much of the sensitive data to reveal with minimal impact on the application layer. It’s a policy-based
security feature that hides the sensitive data in the result set of a query over designated database fields, while
the data in the database is not changed.
For example, a service representative at a call center might identify a caller by confirming several characters of
their email address, but the complete email address shouldn't be revealed to the service representative. A
masking rule can be defined that masks the email address in the result set of any query. As another example,
an appropriate data mask can be defined to protect personal data, so that a developer can query production
environments for troubleshooting purposes without violating compliance regulations.

Dynamic data masking basics


You set up a dynamic data masking policy in the Azure portal by selecting the Dynamic Data Masking blade
under Security in your SQL Database configuration pane. This feature cannot be set by using the portal for SQL
Managed Instance. For more information, see Dynamic Data Masking.
Dynamic data masking policy
SQL users excluded from masking - A set of SQL users or Azure AD identities that get unmasked data in
the SQL query results. Users with administrator privileges are always excluded from masking, and see the
original data without any mask.
Masking rules - A set of rules that define the designated fields to be masked and the masking function that
is used. The designated fields can be defined using a database schema name, table name, and column name.
Masking functions - A set of methods that control the exposure of data for different scenarios.

Masking function | Masking logic
Default | Full masking according to the data types of the designated fields: • Use XXXX or fewer Xs if the size of the field is less than 4 characters for string data types (nchar, ntext, nvarchar). • Use a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real). • Use 01-01-1900 for date/time data types (date, datetime2, datetime, datetimeoffset, smalldatetime, time). • For SQL variant, the default value of the current type is used. • For XML, the document <masked/> is used. • Use an empty value for special data types (timestamp, table, hierarchyid, GUID, binary, image, varbinary, spatial types).
Credit card | Masking method, which exposes the last four digits of the designated fields and adds a constant string as a prefix in the form of a credit card: XXXX-XXXX-XXXX-1234
Email | Masking method, which exposes the first letter and replaces the domain with XXX.com using a constant string prefix in the form of an email address: aXX@XXXX.com
Random number | Masking method, which generates a random number according to the selected boundaries and actual data types. If the designated boundaries are equal, then the masking function is a constant number.
Custom text | Masking method, which exposes the first and last characters and adds a custom padding string in the middle. If the original string is shorter than the exposed prefix and suffix, only the padding string is used: prefix[padding]suffix
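To see which masking rules are currently defined in a database, you can query the sys.masked_columns system view; a minimal sketch:

-- Lists every masked column in the database together with its masking function.
SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
       t.name AS table_name,
       mc.name AS column_name,
       mc.masking_function
FROM sys.masked_columns AS mc
JOIN sys.tables AS t ON t.object_id = mc.object_id;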

Recommended fields to mask


The DDM recommendations engine flags certain fields from your database as potentially sensitive fields, which
may be good candidates for masking. In the Dynamic Data Masking blade in the portal, you will see the
recommended columns for your database. All you need to do is click Add Mask for one or more columns and
then Save to apply a mask for these fields.

Manage dynamic data masking using T-SQL


To create a dynamic data mask, see Creating a Dynamic Data Mask.
To add or edit a mask on an existing column, see Adding or Editing a Mask on an Existing Column.
To grant permissions to view unmasked data, see Granting Permissions to View Unmasked Data.
To drop a dynamic data mask, see Dropping a Dynamic Data Mask.
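As a quick, hedged sketch (the table and column names are illustrative), adding a mask to an existing column and removing it again looks like this:

-- Expose the first two characters of LastName and pad the rest with a constant string.
ALTER TABLE dbo.Customers
ALTER COLUMN LastName ADD MASKED WITH (FUNCTION = 'partial(2, "XXXX", 0)');

-- Remove the mask when it is no longer required.
ALTER TABLE dbo.Customers
ALTER COLUMN LastName DROP MASKED;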

Set up dynamic data masking for your database using PowerShell cmdlets
Data masking policies
Get-AzSqlDatabaseDataMaskingPolicy
Set-AzSqlDatabaseDataMaskingPolicy
Data masking rules
Get-AzSqlDatabaseDataMaskingRule
New-AzSqlDatabaseDataMaskingRule
Remove-AzSqlDatabaseDataMaskingRule
Set-AzSqlDatabaseDataMaskingRule

Set up dynamic data masking for your database using the REST API
You can use the REST API to programmatically manage data masking policy and rules. The published REST API
supports the following operations:
Data masking policies
Create Or Update: Creates or updates a database data masking policy.
Get: Gets a database data masking policy.
Data masking rules
Create Or Update: Creates or updates a database data masking rule.
List By Database: Gets a list of database data masking rules.

Permissions
These are the built-in roles that can configure dynamic data masking:
SQL Security Manager
SQL DB Contributor
SQL Server Contributor
These are the required actions to use dynamic data masking:
Read/Write: Microsoft.Sql/servers/databases/dataMaskingPolicies/*
Read: Microsoft.Sql/servers/databases/dataMaskingPolicies/read
Write: Microsoft.Sql/servers/databases/dataMaskingPolicies/write
To learn more about permissions when using dynamic data masking with T-SQL commands, see Permissions.

Granular permission example


Prevent unauthorized access to sensitive data and gain control by masking it to an unauthorized user at
different levels of the database. You can grant or revoke UNMASK permission at the database-level, schema-
level, table-level or at the column-level to a user. Using UNMASK permission provides a more granular way to
control and limit unauthorized access to data stored in the database and improve data security management.
1. Create schema to contain user tables

CREATE SCHEMA Data;


GO

2. Create table with masked columns


CREATE TABLE Data.Membership (
MemberID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
FirstName varchar(100) MASKED WITH (FUNCTION = 'partial(1, "xxxxx", 1)') NULL,
LastName varchar(100) NOT NULL,
Phone varchar(12) MASKED WITH (FUNCTION = 'default()') NULL,
Email varchar(100) MASKED WITH (FUNCTION = 'email()') NOT NULL,
DiscountCode smallint MASKED WITH (FUNCTION = 'random(1, 100)') NULL,
BirthDay datetime MASKED WITH (FUNCTION = 'default()') NULL
);

3. Insert sample data

INSERT INTO Data.Membership (FirstName, LastName, Phone, Email, DiscountCode, BirthDay)


VALUES
('Roberto', 'Tamburello', '555.123.4567', 'RTamburello@contoso.com', 10, '1985-01-25 03:25:05'),
('Janice', 'Galvin', '555.123.4568', 'JGalvin@contoso.com.co', 5,'1990-05-14 11:30:00'),
('Shakti', 'Menon', '555.123.4570', 'SMenon@contoso.net', 50,'2004-02-29 14:20:10'),
('Zheng', 'Mu', '555.123.4569', 'ZMu@contoso.net', 40,'1990-03-01 06:00:00');

4. Create schema to contain service tables

CREATE SCHEMA Service;


GO

5. Create service table with masked columns

CREATE TABLE Service.Feedback (
MemberID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
Feedback varchar(100) MASKED WITH (FUNCTION = 'default()') NULL,
Rating int MASKED WITH (FUNCTION = 'default()'),
Received_On datetime
);

6. Insert sample data

INSERT INTO Service.Feedback(Feedback,Rating,Received_On)


VALUES
('Good',4,'2022-01-25 11:25:05'),
('Excellent', 5, '2021-12-22 08:10:07'),
('Average', 3, '2021-09-15 09:00:00');

7. Create different users in the database

CREATE USER ServiceAttendant WITHOUT LOGIN;


GO

CREATE USER ServiceLead WITHOUT LOGIN;


GO

CREATE USER ServiceManager WITHOUT LOGIN;


GO

CREATE USER ServiceHead WITHOUT LOGIN;


GO

8. Grant read permissions to the users in the database


ALTER ROLE db_datareader ADD MEMBER ServiceAttendant;

ALTER ROLE db_datareader ADD MEMBER ServiceLead;

ALTER ROLE db_datareader ADD MEMBER ServiceManager;

ALTER ROLE db_datareader ADD MEMBER ServiceHead;

9. Grant different UNMASK permissions to users

--Grant column level UNMASK permission to ServiceAttendant


GRANT UNMASK ON Data.Membership(FirstName) TO ServiceAttendant;

-- Grant table level UNMASK permission to ServiceLead


GRANT UNMASK ON Data.Membership TO ServiceLead;

-- Grant schema level UNMASK permission to ServiceManager


GRANT UNMASK ON SCHEMA::Data TO ServiceManager;
GRANT UNMASK ON SCHEMA::Service TO ServiceManager;

--Grant database level UNMASK permission to ServiceHead;


GRANT UNMASK TO ServiceHead;

10. Query the data under the context of user ServiceAttendant

EXECUTE AS USER='ServiceAttendant';
SELECT MemberID,FirstName,LastName,Phone,Email,BirthDay FROM Data.Membership;
SELECT MemberID,Feedback,Rating FROM Service.Feedback;
REVERT;

11. Query the data under the context of user ServiceLead

EXECUTE AS USER='ServiceLead';
SELECT MemberID,FirstName,LastName,Phone,Email,BirthDay FROM Data.Membership;
SELECT MemberID,Feedback,Rating FROM Service.Feedback;
REVERT;

12. Query the data under the context of user ServiceManager

EXECUTE AS USER='ServiceManager';
SELECT MemberID,FirstName,LastName,Phone,Email FROM Data.Membership;
SELECT MemberID,Feedback,Rating FROM Service.Feedback;
REVERT;

13. Query the data under the context of user ServiceHead

EXECUTE AS USER='ServiceHead';
SELECT MemberID,FirstName,LastName,Phone,Email,BirthDay FROM Data.Membership;
SELECT MemberID,Feedback,Rating FROM Service.Feedback;
REVERT;

14. To revoke UNMASK permissions, use the following T-SQL statements:


REVOKE UNMASK ON Data.Membership(FirstName) FROM ServiceAttendant;

REVOKE UNMASK ON Data.Membership FROM ServiceLead;

REVOKE UNMASK ON SCHEMA::Data FROM ServiceManager;

REVOKE UNMASK ON SCHEMA::Service FROM ServiceManager;

REVOKE UNMASK FROM ServiceHead;

See also
Dynamic Data Masking for SQL Server.
Data Exposed episode about Granular Permissions for Azure SQL Dynamic Data Masking on Channel 9.
SQL vulnerability assessment helps you identify
database vulnerabilities
7/12/2022 • 10 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
SQL vulnerability assessment is an easy-to-configure service that can discover, track, and help you remediate
potential database vulnerabilities. Use it to proactively improve your database security.
Vulnerability assessment is part of the Microsoft Defender for SQL offering, which is a unified package for
advanced SQL security capabilities. Vulnerability assessment can be accessed and managed via the central
Microsoft Defender for SQL portal.

NOTE
Vulnerability assessment is supported for Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse
Analytics. Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics are referred to
collectively in the remainder of this article as databases, and the server refers to the server that hosts databases for
Azure SQL Database and Azure Synapse.

What is SQL vulnerability assessment?


SQL vulnerability assessment is a service that provides visibility into your security state. Vulnerability
assessment includes actionable steps to resolve security issues and enhance your database security. It can help
you to monitor a dynamic database environment where changes are difficult to track and improve your SQL
security posture.
Vulnerability assessment is a scanning service built into Azure SQL Database. The service employs a knowledge
base of rules that flag security vulnerabilities. It highlights deviations from best practices, such as
misconfigurations, excessive permissions, and unprotected sensitive data.
The rules are based on Microsoft's best practices and focus on the security issues that present the biggest risks
to your database and its valuable data. They cover database-level issues and server-level security issues, like
server firewall settings and server-level permissions.
Results of the scan include actionable steps to resolve each issue and provide customized remediation scripts
where applicable. You can customize an assessment report for your environment by setting an acceptable
baseline for:
Permission configurations
Feature configurations
Database settings

Configure vulnerability assessment


Take the following steps to configure the vulnerability assessment:
1. In the Azure portal, open the specific resource in Azure SQL Database, SQL Managed Instance Database,
or Azure Synapse.
2. Under the Security heading, select Defender for Cloud .
3. Select the Configure link to open the Microsoft Defender for SQL settings pane for either the entire
server or managed instance.

NOTE
SQL vulnerability assessment requires the Microsoft Defender for SQL plan to be able to run scans. For more
information about how to enable Microsoft Defender for SQL, see Microsoft Defender for SQL.

4. In the Server settings page, define the Microsoft Defender for SQL settings:

a. Configure a storage account where your scan results for all databases on the server or managed
instance will be stored. For information about storage accounts, see About Azure storage accounts.

TIP
For more information about storing vulnerability assessment scans behind firewalls and VNets, see Store
vulnerability assessment scan results in a storage account accessible behind firewalls and VNets.

b. To configure vulnerability assessments to automatically run weekly scans to detect security
misconfigurations, set Periodic recurring scans to On. The results are sent to the email
addresses you provide in Send scan reports to. You can also send email notification to admins
and subscription owners by enabling Also send email notification to admins and
subscription owners.

NOTE
Each database is randomly assigned a scan time on a set day of the week. Email notifications are
scheduled randomly per server on a set day of the week. The email notification report includes data from
all database scans that were executed during the preceding week.

5. SQL vulnerability assessment scans can also be run on-demand:


a. From the resource's Defender for Cloud page, select View additional findings in
Vulnerability Assessment to access the scan results from previous scans.

b. To run an on-demand scan of your database for vulnerabilities, select Scan from the toolbar:
NOTE
The scan is lightweight and safe. It takes a few seconds to run and is entirely read-only. It doesn't make any changes to
your database.

Remediate vulnerabilities
When a vulnerability scan completes, the report is displayed in the Azure portal. The report presents:
An overview of your security state
The number of issues that were found, and a summary by severity of the risks
A list of the findings for further investigations

To remediate the vulnerabilities discovered:


1. Review your results and determine which of the report's findings are true security issues for your
environment.
2. Select each failed result to understand its impact and why the security check failed.
TIP
The findings details page includes actionable remediation information explaining how to resolve the issue.

3. As you review your assessment results, you can mark specific results as being an acceptable baseline in
your environment. A baseline is essentially a customization of how the results are reported. In
subsequent scans, results that match the baseline are considered as passes. After you've established your
baseline security state, vulnerability assessment only reports on deviations from the baseline. In this way,
you can focus your attention on the relevant issues.

4. If you change the baselines, use the Scan button to run an on-demand scan and view the customized
report. Any findings you've added to the baseline will now appear in Passed with an indication that
they've passed because of the baseline changes.

Your vulnerability assessment scans can now be used to ensure that your database maintains a high level of
security, and that your organizational policies are met.

Advanced capabilities
View scan history
Select Scan History in the vulnerability assessment pane to view a history of all scans previously run on this
database. Select a particular scan in the list to view the detailed results of that scan.
Disable specific findings from Microsoft Defender for Cloud (preview)
If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it.
Disabled findings don't impact your secure score or generate unwanted noise.
When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings.
Typical scenarios include:
Disable findings with severity below medium
Disable findings that are non-patchable
Disable findings from benchmarks that aren't of interest for a defined scope

IMPORTANT
To disable specific findings, you need permissions to edit a policy in Azure Policy. Learn more in Azure RBAC permissions in
Azure Policy.

To create a rule:
1. From the recommendations detail page for Vulnerability assessment findings on your SQL
servers on machines should be remediated, select Disable rule.
2. Select the relevant scope.
3. Define your criteria. You can use any of the following criteria:
Finding ID
Severity
Benchmarks
4. Select Apply rule. Changes might take up to 24 hours to take effect.
5. To view, override, or delete a rule:
a. Select Disable rule .
b. From the scope list, subscriptions with active rules show as Rule applied .

c. To view or delete the rule, select the ellipsis menu ("...").

Manage vulnerability assessments programmatically


Azure PowerShell
Azure CLI
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.

You can use Azure PowerShell cmdlets to programmatically manage your vulnerability assessments. The
supported cmdlets are:

Cmdlet name as a link | Description
Clear-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline | Clears the vulnerability assessment rule baseline. First, set the baseline before you use this cmdlet to clear it.
Clear-AzSqlDatabaseVulnerabilityAssessmentSetting | Clears the vulnerability assessment settings of a database.
Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline | Clears the vulnerability assessment rule baseline of a managed database. First, set the baseline before you use this cmdlet to clear it.
Clear-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting | Clears the vulnerability assessment settings of a managed database.
Clear-AzSqlInstanceVulnerabilityAssessmentSetting | Clears the vulnerability assessment settings of a managed instance.
Convert-AzSqlDatabaseVulnerabilityAssessmentScan | Converts vulnerability assessment scan results of a database to an Excel file (export).
Convert-AzSqlInstanceDatabaseVulnerabilityAssessmentScan | Converts vulnerability assessment scan results of a managed database to an Excel file (export).
Get-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline | Gets the vulnerability assessment rule baseline of a database for a given rule.
Get-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline | Gets the vulnerability assessment rule baseline of a managed database for a given rule.
Get-AzSqlDatabaseVulnerabilityAssessmentScanRecord | Gets all vulnerability assessment scan records associated with a given database.
Get-AzSqlInstanceDatabaseVulnerabilityAssessmentScanRecord | Gets all vulnerability assessment scan records associated with a given managed database.
Get-AzSqlDatabaseVulnerabilityAssessmentSetting | Returns the vulnerability assessment settings of a database.
Get-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting | Returns the vulnerability assessment settings of a managed database.
Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline | Sets the vulnerability assessment rule baseline.
Set-AzSqlInstanceDatabaseVulnerabilityAssessmentRuleBaseline | Sets the vulnerability assessment rule baseline for a managed database.
Start-AzSqlDatabaseVulnerabilityAssessmentScan | Triggers the start of a vulnerability assessment scan on a database.
Start-AzSqlInstanceDatabaseVulnerabilityAssessmentScan | Triggers the start of a vulnerability assessment scan on a managed database.
Update-AzSqlDatabaseVulnerabilityAssessmentSetting | Updates the vulnerability assessment settings of a database.
Update-AzSqlInstanceDatabaseVulnerabilityAssessmentSetting | Updates the vulnerability assessment settings of a managed database.
Update-AzSqlInstanceVulnerabilityAssessmentSetting | Updates the vulnerability assessment settings of a managed instance.

For a script example, see Azure SQL vulnerability assessment PowerShell support.
Using Resource Manager templates
To configure vulnerability assessment baselines by using Azure Resource Manager templates, use the
Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines type.

Ensure that you have enabled vulnerabilityAssessments before you add baselines.
Here's an example that defines baselines for rule VA2065 on the master database and rule VA2130 on a user
database as resources in a Resource Manager template:
"resources": [
{
"type": "Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines",
"apiVersion": "2018-06-01-preview",
"name": "[concat(parameters('server_name'),'/', parameters('database_name') ,
'/default/VA2065/master')]",
"properties": {
"baselineResults": [
{
"result": [
"FirewallRuleName3",
"StartIpAddress",
"EndIpAddress"
]
},
{
"result": [
"FirewallRuleName4",
"62.92.15.68",
"62.92.15.68"
]
}
]
},
"type": "Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines",
"apiVersion": "2018-06-01-preview",
"name": "[concat(parameters('server_name'),'/', parameters('database_name'),
'/default/VA2130/Default')]",
"dependsOn": [
"[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments', parameters('server_name'),
'Default')]"
],
"properties": {
"baselineResults": [
{
"result": [
"dbo"
]
}
]
}
}
]

For master database and user database, the resource names are defined differently:
Master database - "name": "[concat(parameters('server_name'),'/', parameters('database_name') ,
'/default/VA2065/master ')]",
User database - "name": "[concat(parameters('server_name'),'/', parameters('database_name') ,
'/default/VA2065/default ')]",
To handle Boolean types as true/false, set the baseline result with binary input like "1"/"0".
{
    "type": "Microsoft.Sql/servers/databases/vulnerabilityAssessments/rules/baselines",
    "apiVersion": "2018-06-01-preview",
    "name": "[concat(parameters('server_name'),'/', parameters('database_name'), '/default/VA1143/Default')]",
    "dependsOn": [
        "[resourceId('Microsoft.Sql/servers/vulnerabilityAssessments', parameters('server_name'), 'Default')]"
    ],
    "properties": {
        "baselineResults": [
            {
                "result": [
                    "1"
                ]
            }
        ]
    }
}

Permissions
One of the following permissions is required to see vulnerability assessment results in the Microsoft Defender
for Cloud recommendation SQL databases should have vulnerability findings resolved :
Security Admin
Security Reader
The following permissions are required to change vulnerability assessment settings:
SQL Security Manager
Storage Blob Data Reader
Owner role on the storage account
The following permissions are required to open links in email notifications about scan results or to view scan
results at the resource-level:
SQL Security Manager
Storage Blob Data Reader

Data residency
SQL Vulnerability Assessment queries the SQL server using publicly available queries under Defender for Cloud
recommendations for SQL Vulnerability Assessment, and stores the query results. The data is stored in the
configured user-owned storage account.
SQL Vulnerability Assessment allows you to specify the region where your data will be stored by choosing the
location of the storage account. The user is responsible for the security and data resiliency of the storage
account.

Next steps
Learn more about Microsoft Defender for SQL.
Learn more about data discovery and classification.
Learn more about Storing vulnerability assessment scan results in a storage account accessible behind
firewalls and VNets.
SQL Vulnerability Assessment rules reference guide
7/12/2022 • 32 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
SQL Server (all supported versions)
This article lists the set of built-in rules that are used to flag security vulnerabilities and highlight deviations from
best practices, such as misconfigurations and excessive permissions. The rules are based on Microsoft's best
practices and focus on the security issues that present the biggest risks to your database and its valuable data.
They cover both database-level issues as well as server-level security issues, like server firewall settings and
server-level permissions. These rules also represent many of the requirements from various regulatory bodies
to meet their compliance standards.
The rules shown in your database scans depend on the SQL version and platform that was scanned.
To learn about how to implement Vulnerability Assessment in Azure, see Implement Vulnerability Assessment.
For a list of changes to these rules, see SQL Vulnerability Assessment rules changelog.

Rule categories
SQL Vulnerability Assessment rules have five categories, which are in the following sections:
Authentication and Authorization
Auditing and Logging
Data Protection
Installation Updates and Patches
Surface Area Reduction
¹ SQL Server 2012+ refers to all versions of SQL Server 2012 and above.
² SQL Server 2017+ refers to all versions of SQL Server 2017 and above.
³ SQL Server 2016+ refers to all versions of SQL Server 2016 and above.
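As an illustration of the kind of configuration these rules inspect, the following hedged T-SQL sketch manually checks the server setting behind a rule such as VA1059 (xp_cmdshell should be disabled) in the table below; a vulnerability assessment scan performs checks like this for you:

-- A passing state for this check is value_in_use = 0 (xp_cmdshell disabled).
SELECT name,
       CAST(value_in_use AS int) AS value_in_use
FROM sys.configurations
WHERE name = 'xp_cmdshell';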

Authentication and Authorization


Rule ID | Rule title | Rule severity | Rule description | Platform
VA1017 | Execute permissions on xp_cmdshell from all users (except dbo) should be revoked | High | The xp_cmdshell extended stored procedure spawns a Windows command shell, passing in a string for execution. This rule checks that no users (other than users with the CONTROL SERVER permission, like members of the sysadmin server role) have permission to execute the xp_cmdshell extended stored procedure. | SQL Server 2012+¹
VA1020 | Database user GUEST should not be a member of any role | High | The guest user permits access to a database for any logins that are not mapped to a specific database user. This rule checks that no database roles are assigned to the Guest user. | SQL Server 2012+, SQL Database
VA1042 | Database ownership chaining should be disabled for all databases except for master, msdb, and tempdb | High | Cross database ownership chaining is an extension of ownership chaining, except it does cross the database boundary. This rule checks that this option is disabled for all databases except for master, msdb, and tempdb. For master, msdb, and tempdb, cross database ownership chaining is enabled by default. | SQL Server 2012+, SQL Managed Instance
VA1043 | Principal GUEST should not have access to any user database | Medium | The guest user permits access to a database for any logins that are not mapped to a specific database user. This rule checks that the guest user cannot connect to any database. | SQL Server 2012+, SQL Managed Instance
VA1046 | CHECK_POLICY should be enabled for all SQL logins | Low | The CHECK_POLICY option enables verifying SQL logins against the domain policy. This rule checks that the CHECK_POLICY option is enabled for all SQL logins. | SQL Server 2012+, SQL Managed Instance
VA1047 | Password expiration check should be enabled for all SQL logins | Low | Password expiration policies are used to manage the lifespan of a password. When SQL Server enforces password expiration policy, users are reminded to change old passwords, and accounts that have expired passwords are disabled. This rule checks that password expiration policy is enabled for all SQL logins. | SQL Server 2012+, SQL Managed Instance
VA1048 | Database principals should not be mapped to the sa account | High | A database principal that is mapped to the sa account can be exploited by an attacker to elevate permissions to sysadmin. | SQL Server 2012+, SQL Managed Instance
VA1052 | Remove BUILTIN\Administrators as a server login | Low | The BUILTIN\Administrators group contains the Windows Local Administrators group. In older versions of Microsoft SQL Server this group has administrator rights by default. This rule checks that this group is removed from SQL Server. | SQL Server 2012+
VA1053 | Account with default name sa should be renamed or disabled | Low | sa is a well-known account with principal ID 1. This rule verifies that the sa account is either renamed or disabled. | SQL Server 2012+, SQL Managed Instance
VA1054 | Excessive permissions should not be granted to PUBLIC role on objects or columns | Low | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. This rule displays a list of all securable objects or columns that are accessible to all users through the PUBLIC role. | SQL Server 2012+, SQL Database
VA1058 | sa login should be disabled | High | sa is a well-known account with principal ID 1. This rule verifies that the sa account is disabled. | SQL Server 2012+, SQL Managed Instance
VA1059 | xp_cmdshell should be disabled | High | xp_cmdshell spawns a Windows command shell and passes it a string for execution. This rule checks that xp_cmdshell is disabled. | SQL Server 2012+, SQL Managed Instance
VA1067 | Database Mail XPs should be disabled when it is not in use | Medium | This rule checks that Database Mail is disabled when no database mail profile is configured. Database Mail can be used for sending e-mail messages from the SQL Server Database Engine and is disabled by default. If you are not using this feature, it is recommended to disable it to reduce the surface area. | SQL Server 2012+
VA1068 | Server permissions shouldn't be granted directly to principals | Low | Server level permissions are associated with a server level object to regulate which users can gain access to the object. This rule checks that there are no server level permissions granted directly to logins. | SQL Server 2012+, SQL Managed Instance
VA1070 | Database users shouldn't share the same name as a server login | Low | Database users may share the same name as a server login. This rule validates that there are no such users. | SQL Server 2012+, SQL Managed Instance
VA1072 | Authentication mode should be Windows Authentication | Medium | There are two possible authentication modes: Windows Authentication mode and mixed mode. Mixed mode means that SQL Server enables both Windows authentication and SQL Server authentication. This rule checks that the authentication mode is set to Windows Authentication. | SQL Server 2012+
VA1094 | Database permissions shouldn't be granted directly to principals | Low | Permissions are rules associated with a securable object to regulate which users can gain access to the object. This rule checks that there are no DB permissions granted directly to users. | SQL Server 2012+, SQL Managed Instance
VA1095 | Excessive permissions should not be granted to PUBLIC role | Medium | Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. This displays a list of all permissions that are granted to the PUBLIC role. | SQL Server 2012+, SQL Managed Instance, SQL Database
VA1096 | Principal GUEST should not be granted permissions in the database | Low | Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. | SQL Server 2012+, SQL Managed Instance, SQL Database
VA1097 | Principal GUEST should not be granted permissions on objects or columns | Low | Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. | SQL Server 2012+, SQL Managed Instance, SQL Database
VA1099 | GUEST user should not be granted permissions on database securables | Low | Each database includes a user called GUEST. Permissions granted to GUEST are inherited by users who have access to the database but who do not have a user account in the database. This rule checks that all permissions have been revoked from the GUEST user. | SQL Server 2012+, SQL Managed Instance, SQL Database
VA1246 | Application roles should not be used | Low | An application role is a database principal that enables an application to run with its own user-like permissions. Application roles enable that only users connecting through a particular application can access specific data. Application roles are password-based (which applications typically hardcode) and not permission based, which exposes the database to app role impersonation by password-guessing. This rule checks that no application roles are defined in the database. | SQL Server 2012+, SQL Managed Instance, SQL Database
VA1248 | User-defined database roles should not be members of fixed roles | Medium | To easily manage the permissions in your databases, SQL Server provides several roles, which are security principals that group other principals. They are like groups in the Microsoft Windows operating system. Database accounts and other SQL Server roles can be added into database-level roles. Each member of a fixed-database role can add other users to that same role. This rule checks that no user-defined roles are members of fixed roles. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse

VA1267 Contained users Medium Contained users are SQL Server 2012+
should use Windows users that exist
Authentication within the database SQL Managed
and do not require a Instance
login mapping. This
rule checks that
contained users use
Windows
Authentication.

VA1280 Server Permissions Medium Every SQL Server SQL Server 2012+
granted to public login belongs to the
should be minimized public server role. SQL Managed
When a server Instance
principal has not
been granted or
denied specific
permissions on a
securable object the
user inherits the
permissions granted
to public on that
object. This rule
checks that server
permissions granted
to public are
minimized.
RUL E ID RUL E T IT L E RUL E SEVERIT Y RUL E DESC RIP T IO N P L AT F O RM

VA1282 Orphan roles should Low Orphan roles are SQL Server 2012+
be removed user-defined roles
that have no SQL Managed
members. Eliminate Instance
orphaned roles as
they are not needed SQL Database
on the system. This
rule checks whether Azure Synapse
there are any orphan
roles.

VA2020 Minimal set of High Every SQL Server SQL Server 2012+
principals should be securable has
granted ALTER or permissions SQL Managed
ALTER ANY USER associated with it Instance
database-scoped that can be granted
permissions to principals. SQL Database
Permissions can be
scoped at the server Azure Synapse
level (assigned to
logins and server
roles) or at the
database level
(assigned to
database users and
database roles).
These rules check
that only a minimal
set of principals are
granted ALTER or
ALTER ANY USER
database-scoped
permissions.
RUL E ID RUL E T IT L E RUL E SEVERIT Y RUL E DESC RIP T IO N P L AT F O RM

VA2033 Minimal set of Low This rule checks SQL Server 2012+
principals should be which principals are
granted database- granted EXECUTE SQL Managed
scoped EXECUTE permission on Instance
permission on objects or columns to
objects or columns ensure this SQL Database
permission is granted
to a minimal set of Azure Synapse
principals. Every SQL
Server securable has
permissions
associated with it
that can be granted
to principals.
Permissions can be
scoped at the server
level (assigned to
logins and server
roles) or at the
database level
(assigned to
database users,
database roles, or
application roles). The
EXECUTE permission
applies to both
stored procedures
and scalar functions,
which can be used in
computed columns.

VA2103 Unnecessary execute Medium Extended stored SQL Server 2012+


permissions on procedures are DLLs
extended stored that an instance of SQL Managed
procedures should be SQL Server can Instance
revoked dynamically load and
run. SQL Server is
packaged with many
extended stored
procedures that allow
for interaction with
the system DLLs. This
rule checks that
unnecessary execute
permissions on
extended stored
procedures have
been revoked.
RUL E ID RUL E T IT L E RUL E SEVERIT Y RUL E DESC RIP T IO N P L AT F O RM

VA2107 Minimal set of High SQL Database SQL Database


principals should be provides two
members of fixed restricted Azure Synapse
Azure SQL DB master administrative roles
database roles in the master
database to which
user accounts can be
added that grant
permissions to either
create databases or
manage logins. This
rule check that a
minimal set of
principals are
members of these
administrative roles.

VA2108 Minimal set of High SQL Server provides SQL Server 2012+
principals should be roles to help manage
members of fixed the permissions. SQL Managed
high impact database Roles are security Instance
roles principals that group
other principals. SQL Database
Database-level roles
are database-wide in Azure Synapse
their permission
scope. This rule
checks that a minimal
set of principals are
members of the fixed
database roles.

VA2109 Minimal set of Low SQL Server provides SQL Server 2012+
principals should be roles to help manage
members of fixed low the permissions. SQL Managed
impact database Roles are security Instance
roles principals that group
other principals. SQL Database
Database-level roles
are database-wide in Azure Synapse
their permission
scope. This rule
checks that a minimal
set of principals are
members of the fixed
database roles.
RUL E ID RUL E T IT L E RUL E SEVERIT Y RUL E DESC RIP T IO N P L AT F O RM

VA2110 Execute permissions High Registry extended SQL Server 2012+


to access the registry stored procedures
should be revoked allow Microsoft SQL SQL Managed
Server to read write Instance
and enumerate
values and keys in
the registry. They are
used by Enterprise
Manager to
configure the server.
This rule checks that
the permissions to
execute registry
extended stored
procedures have
been revoked from
all users (other than
dbo).

VA2113 Data Transformation Medium Data Transformation SQL Server 2012+


Services (DTS) Services (DTS), is a
permissions should set of objects and SQL Managed
only be granted to utilities that allow the Instance
SSIS roles automation of
extract, transform,
and load operations
to or from a
database. The objects
are DTS packages
and their
components, and the
utilities are called DTS
tools. This rule checks
that only the SSIS
roles are granted
permissions to use
the DTS system
stored procedures
and the permissions
for the PUBLIC role
to use the DTS
system stored
procedures have
been revoked.

VA2114 Minimal set of High SQL Server provides SQL Server 2012+
principals should be roles to help manage
members of high permissions. Roles SQL Managed
impact fixed server are security principals Instance
roles that group other
principals. Server-
level roles are server-
wide in their
permission scope.
This rule checks that
a minimal set of
principals are
members of the fixed
server roles.
RUL E ID RUL E T IT L E RUL E SEVERIT Y RUL E DESC RIP T IO N P L AT F O RM

VA2129 Changes to signed High You can sign a stored SQL Server 2012+
modules should be procedure, function,
authorized or trigger with a SQL Database
certificate or an
asymmetric key. This SQL Managed
is designed for Instance
scenarios when
permissions cannot
be inherited through
ownership chaining
or when the
ownership chain is
broken, such as
dynamic SQL. This
rule checks for
changes made to
signed modules,
which could be an
indication of
malicious use.

VA2130 Track all users with Low This check tracks all SQL Database
access to the users with access to a
database database. Make sure Azure Synapse
that these users are
authorized according
to their current role
in the organization.

Auditing and Logging


| Rule ID | Rule Title | Rule Severity | Rule Description | Platform |
| --- | --- | --- | --- | --- |
| VA1045 | Default trace should be enabled | Medium | Default trace provides troubleshooting assistance to database administrators by ensuring that they have the log data necessary to diagnose problems the first time they occur. This rule checks that the default trace is enabled. | SQL Server 2012+, SQL Managed Instance |
| VA1091 | Auditing of both successful and failed login attempts (default trace) should be enabled when 'Login auditing' is set up to track logins | Low | SQL Server Login auditing configuration enables administrators to track the users logging into SQL Server instances. If the user chooses to count on 'Login auditing' to track users logging into SQL Server instances, then it is important to enable it for both successful and failed login attempts. | SQL Server 2012+ |
| VA1093 | Maximum number of error logs should be 12 or more | Low | Each SQL Server Error log will have all the information related to failures / errors that have occurred since SQL Server was last restarted or since the last time you have recycled the error logs. This rule checks that the maximum number of error logs is 12 or more. | SQL Server 2012+ |
| VA1258 | Database owners are as expected | High | Database owners can perform all configuration and maintenance activities on the database and can also drop databases in SQL Server. Tracking database owners is important to avoid having excessive permission for some principals. Create a baseline that defines the expected database owners for the database. This rule checks whether the database owners are as defined in the baseline. | SQL Server 2016+, SQL Database, Azure Synapse |
| VA1264 | Auditing of both successful and failed login attempts should be enabled | Low | SQL Server auditing configuration enables administrators to track the users logging into SQL Server instances that they're responsible for. This rule checks that auditing is enabled for both successful and failed login attempts. | SQL Server 2012+, SQL Managed Instance |
| VA1265 | Auditing of both successful and failed login attempts for contained DB authentication should be enabled | Medium | SQL Server auditing configuration enables administrators to track users logging to SQL Server instances that they're responsible for. This rule checks that auditing is enabled for both successful and failed login attempts for contained DB authentication. | SQL Server 2012+, SQL Managed Instance |
| VA1281 | All memberships for user-defined roles should be intended | Medium | User-defined roles are security principals defined by the user to group principals to easily manage permissions. Monitoring these roles is important to avoid having excessive permissions. Create a baseline that defines expected membership for each user-defined role. This rule checks whether all memberships for user-defined roles are as defined in the baseline. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse |
| VA1283 | There should be at least 1 active audit in the system | Low | Auditing an instance of the SQL Server Database Engine or an individual database involves tracking and logging events that occur on the Database Engine. The SQL Server Audit object collects a single instance of server or database-level actions and groups of actions to monitor. This rule checks that there is at least one active audit in the system. | SQL Server 2012+, SQL Managed Instance |
| VA2061 | Auditing should be enabled at the server level | High | Azure SQL Database Auditing tracks database events and writes them to an audit log in your Azure storage account. Auditing helps you understand database activity and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations, as well as helps you meet regulatory compliance. For more information, see Azure SQL Auditing. This rule checks that auditing is enabled. | SQL Database, Azure Synapse |

Data Protection
| Rule ID | Rule Title | Rule Severity | Rule Description | Platform |
| --- | --- | --- | --- | --- |
| VA1098 | Any Existing SSB or Mirroring endpoint should require AES connection | High | Service Broker and Mirroring endpoints support different encryption algorithms, including no-encryption. This rule checks that any existing endpoint requires AES encryption. | SQL Server 2012+ |
| VA1219 | Transparent data encryption should be enabled | Medium | Transparent data encryption (TDE) helps to protect the database files against information disclosure by performing real-time encryption and decryption of the database, associated backups, and transaction log files 'at rest', without requiring changes to the application. This rule checks that TDE is enabled on the database. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse |
| VA1220 | Database communication using TDS should be protected through TLS | High | Microsoft SQL Server can use Secure Sockets Layer (SSL) or Transport Layer Security (TLS) to encrypt data that is transmitted across a network between an instance of SQL Server and a client application. This rule checks that all connections to the SQL Server are encrypted through TLS. | SQL Server 2012+, SQL Managed Instance |
| VA1221 | Database Encryption Symmetric Keys should use AES algorithm | High | SQL Server uses encryption keys to help secure data, credentials, and connection information that is stored in a server database. SQL Server has two kinds of keys: symmetric and asymmetric. This rule checks that Database Encryption Symmetric Keys use AES algorithm. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse |
| VA1222 | Cell-Level Encryption keys should use AES algorithm | High | Cell-Level Encryption (CLE) allows you to encrypt your data using symmetric and asymmetric keys. This rule checks that Cell-Level Encryption symmetric keys use AES algorithm. | SQL Server 2012+, SQL Managed Instance |
| VA1223 | Certificate keys should use at least 2048 bits | High | Certificate keys are used in RSA and other encryption algorithms to protect data. These keys need to be of enough length to secure the user's data. This rule checks that the key's length is at least 2048 bits for all certificates. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse |
| VA1224 | Asymmetric keys' length should be at least 2048 bits | High | Database asymmetric keys are used in many encryption algorithms. These keys need to be of enough length to secure the encrypted data. This rule checks that all asymmetric keys stored in the database are of length of at least 2048 bits. | SQL Server 2012, SQL Server 2014, SQL Database |
| VA1279 | Force encryption should be enabled for TDS | High | When the Force Encryption option for the Database Engine is enabled, all communications between client and server are encrypted regardless of whether the 'Encrypt connection' option (such as from SSMS) is checked or not. This rule checks that the Force Encryption option is enabled. | SQL Server 2012+ |
| VA2060 | SQL Threat Detection should be enabled at the server level | Medium | SQL Threat Detection provides a layer of security that detects potential vulnerabilities and anomalous activity in databases, such as SQL injection attacks and unusual behavior patterns. When a potential threat is detected, Threat Detection sends an actionable real-time alert by email and in Microsoft Defender for Cloud, which includes clear investigation and remediation steps for the specific threat. For more information, please see Configure threat detection. This check verifies that SQL Threat Detection is enabled. | SQL Managed Instance, SQL Database, Azure Synapse |

Installation Updates and Patches


| Rule ID | Rule Title | Rule Severity | Rule Description | Platform |
| --- | --- | --- | --- | --- |
| VA1018 | Latest updates should be installed | High | Microsoft periodically releases Cumulative Updates (CUs) for each version of SQL Server. This rule checks whether the latest CU has been installed for the particular version of SQL Server being used. | SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, SQL Server 2016, SQL Server 2017 |
| VA2128 | Vulnerability Assessment is not supported for SQL Server versions lower than SQL Server 2012 | High | To run a Vulnerability Assessment scan on your SQL Server, the server needs to be upgraded to SQL Server 2012 or higher; SQL Server 2008 R2 and below are no longer supported by Microsoft. For more information, see | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse |

Surface Area Reduction


| Rule ID | Rule Title | Rule Severity | Rule Description | Platform |
| --- | --- | --- | --- | --- |
| VA1022 | Ad hoc distributed queries should be disabled | Medium | Ad hoc distributed queries use the OPENROWSET and OPENDATASOURCE functions to connect to remote data sources that use OLE DB. This rule checks that ad hoc distributed queries are disabled. | SQL Server 2012+ |
| VA1023 | CLR should be disabled | High | The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. This rule checks that CLR is disabled. | SQL Server 2012+ |
| VA1026 | CLR should be disabled | Medium | The CLR allows managed code to be hosted by and run in the Microsoft SQL Server environment. CLR strict security treats SAFE and EXTERNAL_ACCESS assemblies as if they were marked UNSAFE and requires all assemblies be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. This rule checks that CLR is disabled. | SQL Server 2017+, SQL Managed Instance |
| VA1027 | Untracked trusted assemblies should be removed | High | Assemblies marked as UNSAFE are required to be signed by a certificate or asymmetric key with a corresponding login that has been granted UNSAFE ASSEMBLY permission in the master database. Trusted assemblies may bypass this requirement. | SQL Server 2017+, SQL Managed Instance |
| VA1044 | Remote Admin Connections should be disabled unless specifically required | Medium | This rule checks that remote dedicated admin connections are disabled if they are not being used for clustering, to reduce attack surface area. SQL Server provides a dedicated administrator connection (DAC). The DAC lets an administrator access a running server to execute diagnostic functions or Transact-SQL statements, or to troubleshoot problems on the server, and it becomes an attractive target to attack when it is enabled remotely. | SQL Server 2012+, SQL Managed Instance |
| VA1051 | AUTO_CLOSE should be disabled on all databases | Medium | The AUTO_CLOSE option specifies whether the database shuts down gracefully and frees resources after the last user disconnects. Regardless of its benefits, it can cause denial of service by aggressively opening and closing the database; thus it is important to keep this feature disabled. This rule checks that this option is disabled on the current database. | SQL Server 2012+ |
| VA1066 | Unused service broker endpoints should be removed | Low | Service Broker provides queuing and reliable messaging for SQL Server. Service Broker is used both for applications that use a single SQL Server instance and applications that distribute work across multiple instances. Service Broker endpoints provide options for transport security and message forwarding. This rule enumerates all the service broker endpoints. Remove those that are not used. | SQL Server 2012+ |
| VA1071 | 'Scan for startup stored procedures' option should be disabled | Medium | When 'Scan for startup procs' is enabled, SQL Server scans for and runs all automatically run stored procedures defined on the server. This rule checks that this option is disabled. | SQL Server 2012+ |
| VA1092 | SQL Server instance shouldn't be advertised by the SQL Server Browser service | Low | SQL Server uses the SQL Server Browser service to enumerate instances of the Database Engine installed on the computer. This enables client applications to browse for a server and helps clients distinguish between multiple instances of the Database Engine on the same computer. This rule checks that the SQL instance is hidden. | SQL Server 2012+ |
| VA1102 | The Trustworthy bit should be disabled on all databases except MSDB | High | The TRUSTWORTHY database property is used to indicate whether the instance of SQL Server trusts the database and the contents within it. If this option is enabled, database modules (for example user-defined functions or stored procedures) that use an impersonation context can access resources outside the database. This rule verifies that the TRUSTWORTHY bit is disabled on all databases except MSDB. | SQL Server 2012+, SQL Managed Instance |
| VA1143 | 'dbo' user should not be used for normal service operation | Medium | The 'dbo', or database owner, is a user account that has implied permissions to perform all activities in the database. Members of the sysadmin fixed server role are automatically mapped to dbo. This rule checks that dbo is not the only account allowed to access this database. Note that on a newly created clean database this rule will fail until additional roles are created. | SQL Server 2012+, SQL Managed Instance, SQL Database, Azure Synapse |
| VA1144 | Model database should only be accessible by 'dbo' | Medium | The Model database is used as the template for all databases created on the instance of SQL Server. Modifications made to the model database, such as database size, recovery model, and other database options, are applied to any databases created afterward. This rule checks that dbo is the only account allowed to access the model database. | SQL Server 2012+, SQL Managed Instance |
| VA1230 | Filestream should be disabled | High | FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system. Transact-SQL statements can insert, update, query, search, and back up FILESTREAM data. Enabling Filestream on SQL Server exposes additional NTFS streaming API, which increases its attack surface and makes it prone to malicious attacks. This rule checks that Filestream is disabled. | SQL Server 2012+ |
| VA1235 | Server configuration 'Replication XPs' should be disabled | Medium | Disable the deprecated server configuration 'Replication XPs' to limit the attack surface area. This is an internal only configuration setting. | SQL Server 2012+, SQL Managed Instance |
| VA1244 | Orphaned users should be removed from SQL server databases | Medium | A database user that exists on a database but has no corresponding login in the master database or as an external resource (for example, a Windows user) is referred to as an orphaned user, and it should either be removed or remapped to a valid login. This rule checks that there are no orphaned users. | SQL Server 2012+, SQL Managed Instance |
| VA1245 | The dbo information should be consistent between the target DB and master | High | There is redundant information about the dbo identity for any database: metadata stored in the database itself and metadata stored in master DB. This rule checks that this information is consistent between the target DB and master. | SQL Server 2012+, SQL Managed Instance |
| VA1247 | There should be no SPs marked as auto-start | High | When SQL Server has been configured to 'scan for startup procs', the server will scan master DB for stored procedures marked as auto-start. This rule checks that there are no SPs marked as auto-start. | SQL Server 2012+ |
| VA1256 | User CLR assemblies should not be defined in the database | High | CLR assemblies can be used to execute arbitrary code on SQL Server process. This rule checks that there are no user-defined CLR assemblies in the database. | SQL Server 2012+, SQL Managed Instance |
| VA1277 | Polybase network encryption should be enabled | High | PolyBase is a technology that accesses and combines both non-relational and relational data, all from within SQL Server. The Polybase network encryption option configures SQL Server to encrypt control and data channels when using Polybase. This rule verifies that this option is enabled. | SQL Server 2016+ |
| VA1278 | Create a baseline of External Key Management Providers | Medium | The SQL Server Extensible Key Management (EKM) enables third-party EKM / Hardware Security Modules (HSM) vendors to register their modules in SQL Server. When registered, SQL Server users can use the encryption keys stored on EKM modules. This rule displays a list of EKM providers being used in the system. | SQL Server 2012+, SQL Managed Instance |
| VA2062 | Database-level firewall rules should not grant excessive access | High | The Azure SQL Database-level firewall helps protect your data by preventing all access to your database until you specify which IP addresses have permission. Database-level firewall rules grant access to the specific database based on the originating IP address of each request. Database-level firewall rules for master and user databases can only be created and managed through Transact-SQL (unlike server-level firewall rules, which can also be created and managed using the Azure portal or PowerShell). For more information, see Azure SQL Database and Azure Synapse Analytics IP firewall rules. This check verifies that database-level firewall rules do not grant access to more than 255 IP addresses. | SQL Database, Azure Synapse |
| VA2063 | Server-level firewall rules should not grant excessive access | High | The Azure SQL server-level firewall helps protect your server by preventing all access to your databases until you specify which IP addresses have permission. Server-level firewall rules grant access to all databases that belong to the server based on the originating IP address of each request. Server-level firewall rules can be created and managed through Transact-SQL as well as through the Azure portal or PowerShell. For more information, see Azure SQL Database and Azure Synapse Analytics IP firewall rules. This check verifies that server-level firewall rules do not grant access to more than 255 IP addresses. | SQL Database, Azure Synapse |
| VA2064 | Database-level firewall rules should be tracked and maintained at a strict minimum | High | The Azure SQL Database-level firewall helps protect your data by preventing all access to your database until you specify which IP addresses have permission. Database-level firewall rules grant access to the specific database based on the originating IP address of each request. Database-level firewall rules for master and user databases can only be created and managed through Transact-SQL (unlike server-level firewall rules, which can also be created and managed using the Azure portal or PowerShell). For more information, see Azure SQL Database and Azure Synapse Analytics IP firewall rules. This check enumerates all the database-level firewall rules so that any changes made to them can be identified and addressed. | SQL Database, Azure Synapse |
| VA2065 | Server-level firewall rules should be tracked and maintained at a strict minimum | High | The Azure SQL server-level firewall helps protect your data by preventing all access to your databases until you specify which IP addresses have permission. Server-level firewall rules grant access to all databases that belong to the server based on the originating IP address of each request. Server-level firewall rules can be created and managed through Transact-SQL as well as through the Azure portal or PowerShell. For more information, see Azure SQL Database and Azure Synapse Analytics IP firewall rules. This check enumerates all the server-level firewall rules so that any changes made to them can be identified and addressed. | SQL Database, Azure Synapse |
| VA2111 | Sample databases should be removed | Low | Microsoft SQL Server comes shipped with several sample databases. This rule checks whether the sample databases have been removed. | SQL Server 2012+, SQL Managed Instance |
| VA2120 | Features that may affect security should be disabled | High | SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary, and enabling them could adversely affect the security of the system. This rule checks that these features are disabled. | SQL Server 2012+, SQL Managed Instance |
| VA2121 | 'OLE Automation Procedures' feature should be disabled | High | SQL Server is capable of providing a wide range of features and services. Some of the features and services, provided by default, may not be necessary, and enabling them could adversely affect the security of the system. The OLE Automation Procedures option controls whether OLE Automation objects can be instantiated within Transact-SQL batches. These are extended stored procedures that allow SQL Server users to execute functions external to SQL Server. Regardless of its benefits, it can also be used for exploits, and is known as a popular mechanism to plant files on the target machines. It is advised to use PowerShell as a replacement for this tool. This rule checks that the 'OLE Automation Procedures' feature is disabled. | SQL Server 2012+, SQL Managed Instance |
| VA2122 | 'User Options' feature should be disabled | Medium | SQL Server is capable of providing a wide range of features and services. Some of the features and services provided by default may not be necessary, and enabling them could adversely affect the security of the system. The user options option specifies global defaults for all users. A list of default query processing options is established for the duration of a user's work session. The user options option allows you to change the default values of the SET options (if the server's default settings are not appropriate). This rule checks that the 'user options' feature is disabled. | SQL Server 2012+, SQL Managed Instance |
| VA2126 | Extensibility-features that may affect security should be disabled if not needed | Medium | SQL Server provides a wide range of features and services. Some of the features and services, provided by default, may not be necessary, and enabling them could adversely affect the security of the system. This rule checks that configurations that allow extraction of data to an external data source and the execution of scripts with certain remote language extensions are disabled. | SQL Server 2016+ |

Removed rules
| Rule ID | Rule Title |
| --- | --- |
| VA1021 | Global temporary stored procedures should be removed |
| VA1024 | C2 Audit Mode should be enabled |
| VA1069 | Permissions to select from system tables and views should be revoked from non-sysadmins |
| VA1090 | Ensure all Government Off The Shelf (GOTS) and Custom Stored Procedures are encrypted |
| VA1103 | Use only CLR with SAFE_ACCESS permission |
| VA1229 | Filestream setting in registry and in SQL Server configuration should match |
| VA1231 | Filestream should be disabled (SQL) |
| VA1234 | Common Criteria setting should be enabled |
| VA1252 | List of events being audited and centrally managed via server audit specifications |
| VA1253 | List of DB-scoped events being audited and centrally managed via server audit specifications |
| VA1263 | List all the active audits in the system |
| VA1266 | The 'MUST_CHANGE' option should be set on all SQL logins |
| VA1276 | Agent XPs feature should be disabled |
| VA1286 | Database permissions shouldn't be granted directly to principals (OBJECT or COLUMN) |
| VA2000 | Minimal set of principals should be granted high impact database-scoped permissions |
| VA2001 | Minimal set of principals should be granted high impact database-scoped permissions on objects or columns |
| VA2002 | Minimal set of principals should be granted high impact database-scoped permissions on various securables |
| VA2010 | Minimal set of principals should be granted medium impact database-scoped permissions |
| VA2021 | Minimal set of principals should be granted database-scoped ALTER permissions on objects or columns |
| VA2022 | Minimal set of principals should be granted database-scoped ALTER permission on various securables |
| VA2030 | Minimal set of principals should be granted database-scoped SELECT or EXECUTE permissions |
| VA2031 | Minimal set of principals should be granted database-scoped SELECT |
| VA2032 | Minimal set of principals should be granted database-scoped SELECT or EXECUTE permissions on schema |
| VA2034 | Minimal set of principals should be granted database-scoped EXECUTE permission on XML Schema Collection |
| VA2040 | Minimal set of principals should be granted low impact database-scoped permissions |
| VA2041 | Minimal set of principals should be granted low impact database-scoped permissions on objects or columns |
| VA2042 | Minimal set of principals should be granted low impact database-scoped permissions on schema |
| VA2050 | Minimal set of principals should be granted database-scoped VIEW DEFINITION permissions |
| VA2051 | Minimal set of principals should be granted database-scoped VIEW DEFINITION permissions on objects or columns |
| VA2052 | Minimal set of principals should be granted database-scoped VIEW DEFINITION permission on various securables |
| VA2100 | Minimal set of principals should be granted high impact server-scoped permissions |
| VA2101 | Minimal set of principals should be granted medium impact server-scoped permissions |
| VA2102 | Minimal set of principals should be granted low impact server-scoped permissions |
| VA2104 | Execute permissions on extended stored procedures should be revoked from PUBLIC |
| VA2105 | Login password should not be easily guessed |
| VA2112 | Permissions from PUBLIC for Data Transformation Services (DTS) should be revoked |
| VA2115 | Minimal set of principals should be members of medium impact fixed server roles |
| VA2123 | 'Remote Access' feature should be disabled |
| VA2127 | 'External Scripts' feature should be disabled |
Next steps
Vulnerability Assessment
SQL Vulnerability Assessment rules changelog
SQL Vulnerability assessment rules changelog

This article details the changes made to the SQL Vulnerability Assessment service rules. Rules that are updated,
removed, or added will be outlined below. For an updated list of SQL Vulnerability assessment rules, see SQL
Vulnerability Assessment rules.

June 2022
| Rule ID | Rule Title | Change details |
| --- | --- | --- |
| VA2129 | Changes to signed modules should be authorized | Logic change |
| VA1219 | Transparent data encryption should be enabled | Logic change |
| VA1047 | Password expiration check should be enabled for all SQL logins | Logic change |

January 2022
| Rule ID | Rule Title | Change details |
| --- | --- | --- |
| VA1288 | Sensitive data columns should be classified | Removed rule |
| VA1054 | Minimal set of principals should be members of fixed high impact database roles | Logic change |
| VA1220 | Database communication using TDS should be protected through TLS | Logic change |
| VA2120 | Features that may affect security should be disabled | Logic change |
| VA2129 | Changes to signed modules should be authorized | Logic change |

June 2021
| Rule ID | Rule Title | Change details |
| --- | --- | --- |
| VA1220 | Database communication using TDS should be protected through TLS | Logic change |
| VA2108 | Minimal set of principals should be members of fixed high impact database roles | Logic change |

December 2020
| Rule ID | Rule Title | Change details |
| --- | --- | --- |
| VA1017 | Execute permissions on xp_cmdshell from all users (except dbo) should be revoked | Title and description change |
| VA1021 | Global temporary stored procedures should be removed | Removed rule |
| VA1024 | C2 Audit Mode should be enabled | Removed rule |
| VA1042 | Database ownership chaining should be disabled for all databases except for master, msdb, and tempdb | Description change |
| VA1044 | Remote Admin Connections should be disabled unless specifically required | Title and description change |
| VA1047 | Password expiration check should be enabled for all SQL logins | Title and description change |
| VA1051 | AUTO_CLOSE should be disabled on all databases | Description change |
| VA1053 | Account with default name 'sa' should be renamed or disabled | Description change |
| VA1067 | Database Mail XPs should be disabled when it is not in use | Title and description change |
| VA1068 | Server permissions shouldn't be granted directly to principals | Logic change |
| VA1069 | Permissions to select from system tables and views should be revoked from non-sysadmins | Removed rule |
| VA1090 | Ensure all Government Off The Shelf (GOTS) and Custom Stored Procedures are encrypted | Removed rule |
| VA1091 | Auditing of both successful and failed login attempts (default trace) should be enabled when 'Login auditing' is set up to track logins | Description change |
| VA1098 | Any Existing SSB or Mirroring endpoint should require AES connection | Logic change |
| VA1103 | Use only CLR with SAFE_ACCESS permission | Removed rule |
| VA1219 | Transparent data encryption should be enabled | Description change |
| VA1229 | Filestream setting in registry and in SQL Server configuration should match | Removed rule |
| VA1230 | Filestream should be disabled | Description change |
| VA1231 | Filestream should be disabled (SQL) | Removed rule |
| VA1234 | Common Criteria setting should be enabled | Removed rule |
| VA1235 | Replication XPs should be disabled | Title, description, and logic change |
| VA1252 | List of events being audited and centrally managed via server audit specifications | Removed rule |
| VA1253 | List of DB-scoped events being audited and centrally managed via server audit specifications | Removed rule |
| VA1263 | List all the active audits in the system | Removed rule |
| VA1264 | Auditing of both successful and failed login attempts should be enabled | Description change |
| VA1266 | The 'MUST_CHANGE' option should be set on all SQL logins | Removed rule |
| VA1276 | Agent XPs feature should be disabled | Removed rule |
| VA1281 | All memberships for user-defined roles should be intended | Logic change |
| VA1282 | Orphan roles should be removed | Logic change |
| VA1286 | Database permissions shouldn't be granted directly to principals (OBJECT or COLUMN) | Removed rule |
| VA1288 | Sensitive data columns should be classified | Description change |
| VA2030 | Minimal set of principals should be granted database-scoped SELECT or EXECUTE permissions | Removed rule |
| VA2033 | Minimal set of principals should be granted database-scoped EXECUTE permission on objects or columns | Description change |
| VA2062 | Database-level firewall rules should not grant excessive access | Description change |
| VA2063 | Server-level firewall rules should not grant excessive access | Description change |
| VA2100 | Minimal set of principals should be granted high impact server-scoped permissions | Removed rule |
| VA2101 | Minimal set of principals should be granted medium impact server-scoped permissions | Removed rule |
| VA2102 | Minimal set of principals should be granted low impact server-scoped permissions | Removed rule |
| VA2103 | Unnecessary execute permissions on extended stored procedures should be revoked | Logic change |
| VA2104 | Execute permissions on extended stored procedures should be revoked from PUBLIC | Removed rule |
| VA2105 | Login password should not be easily guessed | Removed rule |
| VA2108 | Minimal set of principals should be members of fixed high impact database roles | Logic change |
| VA2111 | Sample databases should be removed | Logic change |
| VA2112 | Permissions from PUBLIC for Data Transformation Services (DTS) should be revoked | Removed rule |
| VA2113 | Data Transformation Services (DTS) permissions should only be granted to SSIS roles | Description and logic change |
| VA2114 | Minimal set of principals should be members of high impact fixed server roles | Logic change |
| VA2115 | Minimal set of principals should be members of medium impact fixed server roles | Removed rule |
| VA2120 | Features that may affect security should be disabled | Logic change |
| VA2121 | 'OLE Automation Procedures' feature should be disabled | Title and description change |
| VA2123 | 'Remote Access' feature should be disabled | Removed rule |
| VA2126 | Features that may affect security should be disabled | Title, description, and logic change |
| VA2127 | 'External Scripts' feature should be disabled | Removed rule |
| VA2129 | Changes to signed modules should be authorized | Platform update |
| VA2130 | Track all users with access to the database | Description and logic change |

Next steps
SQL Vulnerability Assessment rules
SQL Vulnerability Assessment overview
Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets
Store Vulnerability Assessment scan results in a storage account accessible behind firewalls and VNets

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
If you are limiting access to your storage account in Azure for certain VNets or services, you'll need to enable
the appropriate configuration so that Vulnerability Assessment (VA) scanning for SQL Databases or Managed
Instances has access to that storage account.

Prerequisites
The SQL Vulnerability Assessment service needs permission to the storage account to save baseline and scan
results. There are three methods:
Use Storage Account key: Azure creates the SAS key and saves it (though we don't save the account key).
Use Storage SAS key: The SAS key must have Write | List | Read | Delete permissions.
Use SQL Server managed identity: The SQL Server must have a managed identity. The storage account
must have a role assignment for the SQL Managed Identity as Storage Blob Data Contributor. When you
apply the settings, the VA fields storageContainerSasKey and storageAccountAccessKey must be empty.
When storage is behind a firewall or virtual network, the SQL managed identity is required.
When you use the Azure portal to save SQL VA settings, Azure checks if you have permission to assign a new
role assignment for the managed identity as Storage Blob Data Contributor on the storage. If permissions are
assigned, Azure uses the SQL Server managed identity; otherwise Azure uses the key method.

Enable Azure SQL Database VA scanning access to the storage account

If you have configured your VA storage account to only be accessible by certain networks or services, you'll
need to ensure that VA scans for your Azure SQL Database are able to store the scans on the storage account.
You can use the existing storage account, or create a new storage account to store VA scan results for all
databases on your logical SQL server.

NOTE
The vulnerability assessment service can't access storage accounts protected with firewalls or VNets if they require storage
access keys.

Go to your Resource group that contains the storage account and access the Storage account pane. Under
Settings, select Firewall and virtual networks.
Ensure that Allow trusted Microsoft services access to this storage account is checked.
To find out which storage account is being used, go to your SQL server pane in the Azure portal, under
Security, and then select Defender for Cloud.
NOTE
You can set up email alerts to notify users in your organization to view or access the scan reports. To do this, ensure that
you have SQL Security Manager and Storage Blob Data Reader permissions.

Store VA scan results for Azure SQL Managed Instance in a storage account that can be accessed behind a firewall or VNet

Since Managed Instance is not a trusted Microsoft Service and has a different VNet from the storage account,
executing a VA scan will result in an error.
To support VA scans on Managed Instances, follow the below steps:
1. In the SQL managed instance pane, under the Overview heading, click the Virtual network/subnet
link. This takes you to the Virtual network pane.
2. Under Settings, select Subnets. Click Subnet in the new pane to add a subnet, and delegate it to
Microsoft.Sql/managedInstances. For more information, see Manage subnets.
3. In your Virtual network pane, under Settings, select Service endpoints. Click Add in the new pane,
and add the Microsoft.Storage service as a new service endpoint. Make sure the ManagedInstance
subnet is selected. Click Add.
4. Go to your Storage account that you've selected to store your VA scans. Under Settings, select
Firewall and virtual networks. Click on Add existing virtual network. Select your managed
instance virtual network and subnet, and click Add.
You should now be able to store your VA scans for Managed Instances in your storage account.

Troubleshoot vulnerability assessment scan-related issues


Troubleshoot common issues related to vulnerability assessment scans.
Failure to save vulnerability assessment settings
You might not be able to save changes to vulnerability assessment settings if your storage account doesn't meet
some prerequisites or if you have insufficient permissions.
Storage account requirements
The storage account in which vulnerability assessment scan results are saved must meet the following
requirements:
Type: StorageV2 (General Purpose V2) or Storage (General Purpose V1)
Performance: Standard (only)
Region: The storage must be in the same region as the instance of Azure SQL Server.
If any of these requirements aren't met, saving changes to vulnerability assessment settings fails.
Permissions
The following permissions are required to save changes to vulnerability assessment settings:
SQL Security Manager
Storage Blob Data Reader
Owner role on the storage account
Setting a new role assignment requires owner or user administrator access to the storage account and the
following permissions:
Storage Blob Data Owner
Storage account isn't visible for selection in vulnerability assessment settings
The storage account might not appear in the storage account picker for several reasons:
The storage account you're looking for isn't in the selected subscription.
The storage account you're looking for isn't in the same region as the instance of Azure SQL Server.
You don't have Microsoft.Storage/storageAccounts/read permissions on the storage account.
Failure to open an email link for scan results or can't view scan results
You might not be able to open a link in a notification email about scan results or to view scan results if you don't
have the required permissions or if you use a browser that doesn't support opening or displaying scan results.
Permissions
The following permissions are required to open links in email notifications about scan results or to view scan
results:
SQL Security Manager
Storage Blob Data Reader
Browser requirements
The Firefox browser doesn't support opening or displaying scan results view. We recommend that you use
Chrome or Microsoft Edge to view vulnerability assessment scan results.

Next steps
Vulnerability Assessment
Create an Azure Storage account
Microsoft Defender for SQL
Authorize database access to SQL Database, SQL Managed Instance, and Azure Synapse Analytics

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
In this article, you learn about:
Options for configuring Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics to
enable users to perform administrative tasks and to access the data stored in these databases.
The access and authorization configuration after initially creating a new server.
How to add logins and user accounts in the master database and then grant these accounts administrative
permissions.
How to add user accounts in user databases, either associated with logins or as contained user accounts.
How to configure user accounts with permissions in user databases by using database roles and explicit
permissions.

IMPORTANT
Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse are referred to collectively in the
remainder of this article as databases, and the term server refers to the server that manages databases for Azure SQL
Database and Azure Synapse.

Authentication and authorization


Authentication is the process of proving the user is who they claim to be. A user connects to a database using
a user account. When a user attempts to connect to a database, they provide a user account and authentication
information. The user is authenticated using one of the following two authentication methods:
SQL authentication.
With this authentication method, the user submits a user account name and associated password to
establish a connection. This password is stored in the master database for user accounts linked to a login
or stored in the database containing the user accounts not linked to a login.
Azure Active Directory Authentication
With this authentication method, the user submits a user account name and requests that the service use
the credential information stored in Azure Active Directory (Azure AD).
Logins and users: A user account in a database can be associated with a login that is stored in the master
database or can be a user name that is stored in an individual database.
A login is an individual account in the master database, to which a user account in one or more databases
can be linked. With a login, the credential information for the user account is stored with the login.
A user account is an individual account in any database that may be, but does not have to be, linked to a
login. With a user account that is not linked to a login, the credential information is stored with the user
account.
Authorization to access data and perform various actions is managed using database roles and explicit
permissions. Authorization refers to the permissions assigned to a user, and determines what that user is
allowed to do. Authorization is controlled by your user account's database role memberships and object-level
permissions. As a best practice, you should grant users the least privileges necessary.
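
The following T-SQL is a minimal sketch of this least-privilege approach; it is illustrative only, and the database user, schema, and object names (`report_reader`, `Sales`) are placeholders introduced here rather than names used elsewhere in this article.

```sql
-- Minimal sketch: grant only the permissions a reporting user actually needs,
-- instead of adding the user to a broad fixed role such as db_datareader.
-- Assumes a database user named [report_reader] already exists and that the
-- [Sales] schema holds the tables it should read.
GRANT SELECT ON SCHEMA::Sales TO [report_reader];   -- read-only, one schema

-- Verify the explicit permissions held by that user (run in the user database).
SELECT pr.name AS principal_name,
       pe.class_desc,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
      ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'report_reader';
```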

Existing logins and user accounts after creating a new database


When you first deploy Azure SQL, you specify an admin login and an associated password for that login. This
administrative account is called Server admin. The following configuration of logins and users in the master
and user databases occurs during deployment:
A SQL login with administrative privileges is created using the login name you specified. A login is an
individual user account for logging in to SQL Database, SQL Managed Instance, and Azure Synapse.
This login is granted full administrative permissions on all databases as a server-level principal. The login has
all available permissions and can't be limited. In a SQL Managed Instance, this login is added to the sysadmin
fixed server role (this role does not exist in Azure SQL Database).
A user account called dbo is created for this login in each user database. The dbo user has all database
permissions in the database and is mapped to the db_owner fixed database role. Additional fixed database
roles are discussed later in this article.
To identify the administrator accounts for a database, open the Azure portal, and navigate to the Properties tab
of your server or managed instance.
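
You can also inspect this configuration with T-SQL. The query below is a minimal sketch, not part of the original article, that lists the users in the current database together with their database role memberships; it is one way to confirm the dbo / db_owner mapping described above.

```sql
-- Minimal sketch: list database users and the database roles they belong to.
-- Run in the user database you want to inspect.
SELECT u.name                     AS user_name,
       u.type_desc                AS user_type,
       ISNULL(r.name, 'public')   AS role_name
FROM sys.database_principals AS u
LEFT JOIN sys.database_role_members AS rm
       ON rm.member_principal_id = u.principal_id
LEFT JOIN sys.database_principals AS r
       ON r.principal_id = rm.role_principal_id
WHERE u.type IN ('S', 'U', 'E', 'X')   -- SQL users, Windows users, Azure AD users/groups
ORDER BY u.name;
```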
IMPORTANT
The admin login name can't be changed after it has been created. To reset the password for the server admin, go to the
Azure portal, click SQL Servers, select the server from the list, and then click Reset Password. To reset the password for
the SQL Managed Instance, go to the Azure portal, click the instance, and click Reset password. You can also use
PowerShell or the Azure CLI.

Create additional logins and users having administrative permissions


At this point, your server or managed instance is only configured for access using a single SQL login and user
account. To create additional logins with full or partial administrative permissions, you have the following
options (depending on your deployment mode):
Create an Azure Active Directory administrator account with full administrative permissions
Enable Azure Active Directory authentication and create an Azure AD administrator login. One Azure
Active Directory account can be configured as an administrator of the Azure SQL deployment with full
administrative permissions. This account can be either an individual or security group account. An Azure
AD administrator must be configured if you want to use Azure AD accounts to connect to SQL Database,
SQL Managed Instance, or Azure Synapse. For detailed information on enabling Azure AD authentication
for all Azure SQL deployment types, see the following articles:
Use Azure Active Directory authentication for authentication with SQL
Configure and manage Azure Active Directory authentication with SQL
In SQL Managed Instance, create SQL logins with full administrative permissions
Create an additional SQL login in the master database.
Add the login to the sysadmin fixed server role using the ALTER SERVER ROLE statement. This login
will have full administrative permissions.
Alternatively, create an Azure AD login using the CREATE LOGIN syntax.
In SQL Database, create SQL logins with limited administrative permissions
Create an additional SQL login in the master database.
Create a user account in the master database associated with this new login.
Add the user account to the dbmanager role, the loginmanager role, or both in the master database using
the ALTER ROLE statement (for Azure Synapse, use the sp_addrolemember statement). A T-SQL sketch of
this option and the SQL Managed Instance option appears after this list.
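
The following T-SQL is a minimal sketch of the two SQL-login options above; it is illustrative rather than prescriptive, and the login name (`appadmin`) and password placeholder are assumptions introduced here, not values from this article.

```sql
-- Option: SQL Managed Instance - a SQL login with full administrative permissions.
-- Run in the master database of the managed instance.
CREATE LOGIN appadmin WITH PASSWORD = '<strong-password-here>';
ALTER SERVER ROLE sysadmin ADD MEMBER appadmin;

-- Option: SQL Database - a SQL login with limited administrative permissions.
-- Run in the master database of the logical server.
CREATE LOGIN appadmin WITH PASSWORD = '<strong-password-here>';
CREATE USER appadmin FOR LOGIN appadmin;
ALTER ROLE dbmanager ADD MEMBER appadmin;      -- can create and manage databases
ALTER ROLE loginmanager ADD MEMBER appadmin;   -- can create and manage logins
```

In SQL Database there is no sysadmin role to add the login to; the dbmanager and loginmanager roles in the master database are the closest equivalents, as the note below explains.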

NOTE
dbmanager and loginmanager roles do not pertain to SQL Managed Instance deployments.

Members of these special master database roles for Azure SQL Database have authority to create and
manage databases or to create and manage logins. In databases created by a user that is a member of the
dbmanager role, the member is mapped to the db_owner fixed database role and can log into and
manage that database using the dbo user account. These roles have no explicit permissions outside of
the master database.
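The following T-SQL is a minimal sketch of both options; the login name additional_admin and the password placeholder are hypothetical. Run the SQL Database statements in the master database of the logical server, and the SQL Managed Instance statements in the master database of the instance.

-- SQL Database: a login with limited administrative permissions (run in master)
CREATE LOGIN additional_admin WITH PASSWORD = '<strong_password>';
CREATE USER additional_admin FOR LOGIN additional_admin;
ALTER ROLE dbmanager ADD MEMBER additional_admin;
ALTER ROLE loginmanager ADD MEMBER additional_admin;

-- SQL Managed Instance: a login with full administrative permissions (run in master)
CREATE LOGIN additional_admin WITH PASSWORD = '<strong_password>';
ALTER SERVER ROLE sysadmin ADD MEMBER additional_admin;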

IMPORTANT
You can't create an additional SQL login with full administrative permissions in SQL Database.

Create accounts for non-administrator users


You can create accounts for non-administrative users using one of two methods:
Create a login
Create a SQL login in the master database. Then create a user account in each database to which that user
needs access and associate the user account with that login. This approach is preferred when the user
must access multiple databases and you wish to keep the passwords synchronized. However, this
approach has complexities when used with geo-replication as the login must be created on both the
primary server and the secondary server(s). For more information, see Configure and manage Azure SQL
Database security for geo-restore or failover.
Create a user account
Create a user account in the database to which a user needs access (also called a contained user).
With SQL Database, you can always create this type of user account.
With SQL Managed Instance supporting Azure AD server principals, you can create user accounts to
authenticate to the SQL Managed Instance without requiring database users to be created as a
contained database user.
With this approach, the user authentication information is stored in each database, and replicated to geo-
replicated databases automatically. However, if the same account exists in multiple databases and you are
using Azure SQL Authentication, you must keep the passwords synchronized manually. Additionally, if a
user has an account in different databases with different passwords, remembering those passwords can
become a problem.
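As a minimal sketch of the two methods (the names and password placeholder are hypothetical):

-- Method 1: login-based user
CREATE LOGIN app_reader WITH PASSWORD = '<strong_password>';   -- run in the master database
CREATE USER app_reader FOR LOGIN app_reader;                   -- run in each user database the user needs

-- Method 2: contained database user (SQL authentication); no login in master is required
CREATE USER report_user WITH PASSWORD = '<strong_password>';   -- run in the user database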
IMPORTANT
To create contained users mapped to Azure AD identities, you must be logged in using an Azure AD account in the
database in Azure SQL Database. In SQL Managed Instance, a SQL login with sysadmin permissions can also create an
Azure AD login or user.

For examples showing how to create logins and users, see:


Create login for Azure SQL Database
Create login for Azure SQL Managed Instance
Create login for Azure Synapse
Create user
Creating Azure AD contained users

TIP
For a security tutorial that includes creating users in Azure SQL Database, see Tutorial: Secure Azure SQL Database.

Using fixed and custom database roles


After creating a user account in a database, either based on a login or as a contained user, you can authorize that
user to perform various actions and to access data in a particular database. You can use the following methods
to authorize access:
Fixed database roles
Add the user account to a fixed database role. There are 9 fixed database roles, each with a defined set of
permissions. The most common fixed database roles are: db_owner , db_ddladmin , db_datawriter ,
db_datareader , db_denydatawriter , and db_denydatareader . db_owner is commonly used to grant
full permission to only a few users. The other fixed database roles are useful for getting a simple database
in development quickly, but are not recommended for most production databases. For example, the
db_datareader fixed database role grants read access to every table in the database, which is more than
is strictly necessary.
To add a user to a fixed database role:
In Azure SQL Database, use the ALTER ROLE statement. For examples, see ALTER ROLE examples.
In Azure Synapse, use the sp_addrolemember stored procedure. For examples, see sp_addrolemember examples.
Custom database role
Create a custom database role using the CREATE ROLE statement. A custom role enables you to create
your own user-defined database roles and carefully grant each role the least permissions necessary for
the business need. You can then add users to the custom role. When a user is a member of multiple roles,
they aggregate the permissions of them all.
Grant permissions directly
Grant the user account permissions directly. There are over 100 permissions that can be individually
granted or denied in SQL Database. Many of these permissions are nested. For example, the UPDATE
permission on a schema includes the UPDATE permission on each table within that schema. As in most
permission systems, the denial of a permission overrides a grant. Because of the nested nature and the
number of permissions, it can take careful study to design an appropriate permission system to properly
protect your database. Start with the list of permissions at Permissions (Database Engine) and review the
poster-size graphic of the permissions.
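The following T-SQL sketches the three options. The user names, the Sales schema, the order_entry role, and the dbo.usp_GetOrders procedure are hypothetical examples, not objects that exist by default:

-- Option 1: add a user to a fixed database role
ALTER ROLE db_datareader ADD MEMBER report_user;

-- Option 2: create a custom role with least privilege and add a user to it
CREATE ROLE order_entry;
GRANT SELECT, INSERT ON SCHEMA::Sales TO order_entry;
ALTER ROLE order_entry ADD MEMBER app_reader;

-- Option 3: grant a single permission directly to a user
GRANT EXECUTE ON OBJECT::dbo.usp_GetOrders TO report_user;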

Using groups
Efficient access management uses permissions assigned to Active Directory security groups and fixed or custom
roles instead of to individual users.
When using Azure Active Directory authentication, put Azure Active Directory users into an Azure Active
Directory security group. Create a contained database user for the group. Add one or more database
users as members of custom or built-in database roles with the specific permissions appropriate to that
group of users.
When using SQL authentication, create contained database users in the database. Place one or more
database users into a custom database role with specific permissions appropriate to that group of users.

NOTE
You can also use groups for non-contained database users.
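For example, assuming a hypothetical Azure AD security group named Data Analysts, a single contained user can represent the whole group, and membership is then managed in Azure AD rather than in the database:

CREATE USER [Data Analysts] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [Data Analysts];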

You should familiarize yourself with the following features that can be used to limit or elevate permissions:
Impersonation and module-signing can be used to securely elevate permissions temporarily.
Row-Level Security can be used to limit which rows a user can access.
Data Masking can be used to limit exposure of sensitive data.
Stored procedures can be used to limit the actions that can be taken on the database.

Next steps
For an overview of all Azure SQL Database and SQL Managed Instance security features, see Security overview.
Use Azure Active Directory authentication
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure Active Directory (Azure AD) authentication is a mechanism for connecting to Azure SQL Database, Azure
SQL Managed Instance, and Synapse SQL in Azure Synapse Analytics by using identities in Azure AD.

NOTE
This article applies to Azure SQL Database, SQL Managed Instance, and Azure Synapse Analytics.

With Azure AD authentication, you can centrally manage the identities of database users and other Microsoft
services in one central location. Central ID management provides a single place to manage database users and
simplifies permission management. Benefits include the following:
It provides an alternative to SQL Server authentication.
It helps stop the proliferation of user identities across servers.
It allows password rotation in a single place.
Customers can manage database permissions using external (Azure AD) groups.
It can eliminate storing passwords by enabling integrated Windows authentication and other forms of
authentication supported by Azure Active Directory.
Azure AD authentication uses contained database users to authenticate identities at the database level.
Azure AD supports token-based authentication for applications connecting to SQL Database and SQL
Managed Instance.
Azure AD authentication supports:
Azure AD cloud-only identities.
Azure AD hybrid identities that support:
Cloud authentication with two options coupled with seamless single sign-on (SSO): pass-through authentication and password hash authentication.
Federated authentication.
For more information on Azure AD authentication methods and which one to choose, see the
following article:
Choose the right authentication method for your Azure Active Directory hybrid identity
solution
Azure AD supports connections from SQL Server Management Studio that use Active Directory Universal
Authentication, which includes Multi-Factor Authentication. Multi-Factor Authentication includes strong
authentication with a range of easy verification options — phone call, text message, smart cards with pin,
or mobile app notification. For more information, see SSMS support for Azure AD Multi-Factor
Authentication with Azure SQL Database, SQL Managed Instance, and Azure Synapse
Azure AD supports similar connections from SQL Server Data Tools (SSDT) that use Active Directory
Interactive Authentication. For more information, see Azure Active Directory support in SQL Server Data
Tools (SSDT)
NOTE
Connecting to a SQL Server instance that's running on an Azure virtual machine (VM) is not supported using Azure
Active Directory or Azure Active Directory Domain Services. Use an Active Directory domain account instead.

The configuration steps include the following procedures to configure and use Azure Active Directory
authentication.
1. Create and populate Azure AD.
2. Optional: Associate or change the active directory that is currently associated with your Azure Subscription.
3. Create an Azure Active Directory administrator.
4. Configure your client computers.
5. Create contained database users in your database mapped to Azure AD identities.
6. Connect to your database by using Azure AD identities.

NOTE
To learn how to create and populate Azure AD, and then configure Azure AD with Azure SQL Database, SQL Managed
Instance, and Synapse SQL in Azure Synapse Analytics, see Configure Azure AD with Azure SQL Database.

Trust architecture
Only the cloud portion of Azure AD, SQL Database, SQL Managed Instance, and Azure Synapse is considered
to support Azure AD native user passwords.
To support Windows single sign-on credentials (or user/password for Windows credential), use Azure Active
Directory credentials from a federated or managed domain that is configured for seamless single sign-on for
pass-through and password hash authentication. For more information, see Azure Active Directory Seamless
Single Sign-On.
To support federated authentication (or user/password for Windows credentials), communication with the AD FS block is required.
For more information on Azure AD hybrid identities, the setup, and synchronization, see the following articles:
Password hash authentication - Implement password hash synchronization with Azure AD Connect sync
Pass-through authentication - Azure Active Directory Pass-through Authentication
Federated authentication - Deploying Active Directory Federation Services in Azure and Azure AD Connect
and federation
For a sample federated authentication with ADFS infrastructure (or user/password for Windows credentials), see
the diagram below. The arrows indicate communication pathways.
The following diagram indicates the federation, trust, and hosting relationships that allow a client to connect to a
database by submitting a token. The token is authenticated by an Azure AD, and is trusted by the database.
Customer 1 can represent an Azure Active Directory with native users or an Azure AD with federated users.
Customer 2 represents a possible solution including imported users, in this example coming from a federated
Azure Active Directory with ADFS being synchronized with Azure Active Directory. It's important to understand
that access to a database using Azure AD authentication requires that the hosting subscription is associated to
the Azure AD. The same subscription must be used to create the Azure SQL Database, SQL Managed Instance, or
Azure Synapse resources.

Administrator structure
When using Azure AD authentication, there are two Administrator accounts: the original Azure SQL Database
administrator and the Azure AD administrator. The same concepts apply to Azure Synapse. Only the
administrator based on an Azure AD account can create the first Azure AD contained database user in a user
database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the
administrator is a group account, it can be used by any group member, enabling multiple Azure AD
administrators for the server. Using a group account as an administrator enhances manageability by allowing you
to centrally add and remove group members in Azure AD without changing the users or permissions in SQL
Database or Azure Synapse. Only one Azure AD administrator (a user or group) can be configured at any time.

Permissions
To create new users, you must have the ALTER ANY USER permission in the database. The ALTER ANY USER
permission can be granted to any database user. The ALTER ANY USER permission is also held by the server
administrator accounts, by database users with the CONTROL ON DATABASE or ALTER ON DATABASE permission for
that database, and by members of the db_owner database role.
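For example, to let a non-administrator user manage database users (the user name devops_operator is hypothetical and must already exist in the database):

GRANT ALTER ANY USER TO [devops_operator];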
To create a contained database user in Azure SQL Database, SQL Managed Instance, or Azure Synapse, you must
connect to the database or instance using an Azure AD identity. To create the first contained database user, you
must connect to the database by using an Azure AD administrator (who is the owner of the database). This is
demonstrated in Configure and manage Azure Active Directory authentication with SQL Database or Azure
Synapse. Azure AD authentication is only possible if the Azure AD admin was created for Azure SQL Database,
SQL Managed Instance, or Azure Synapse. If the Azure Active Directory admin was removed from the server,
existing Azure Active Directory users created previously inside SQL Server can no longer connect to the
database using their Azure Active Directory credentials.

Azure AD features and limitations


The following members of Azure AD can be provisioned for Azure SQL Database:
Native members: A member created in Azure AD in the managed domain or in a customer domain.
For more information, see Add your own domain name to Azure AD.
Members of an Active Directory domain federated with Azure Active Directory on a managed domain
configured for seamless single sign-on with pass-through or password hash authentication. For more
information, see Microsoft Azure now supports federation with Windows Server Active Directory and
Azure Active Directory Seamless Single Sign-On.
Imported members from other Azure ADs who are native or federated domain members.
Active Directory groups created as security groups.
Azure AD users that are part of a group that is assigned the db_owner role cannot use the CREATE
DATABASE SCOPED CREDENTIAL syntax against Azure SQL Database and Azure Synapse. You will see
the following error:
SQL Error [2760] [S0001]: The specified schema name 'user@mydomain.com' either does not exist or you
do not have permission to use it.

Grant the db_owner role directly to the individual Azure AD user to mitigate the CREATE DATABASE
SCOPED CREDENTIAL issue.
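As a sketch of the mitigation (the user principal name is hypothetical and assumes the contained user was already created with CREATE USER ... FROM EXTERNAL PROVIDER):

ALTER ROLE db_owner ADD MEMBER [user@mydomain.com];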
These system functions return NULL values when executed under Azure AD principals:
SUSER_ID()
SUSER_NAME(<admin ID>)
SUSER_SNAME(<admin SID>)
SUSER_ID(<admin name>)
SUSER_SID(<admin name>)

SQL Managed Instance


Azure AD server principals (logins) and users are supported for SQL Managed Instance.
Setting Azure AD server principals (logins) mapped to an Azure AD group as database owner is not
supported in SQL Managed Instance.
An extension of this is that when a group is added as part of the dbcreator server role, users from
this group can connect to the SQL Managed Instance and create new databases, but will not be able to
access the database. This is because the new database owner is SA, and not the Azure AD user. This
issue does not manifest if the individual user is added to the dbcreator server role.
SQL Agent management and jobs execution are supported for Azure AD server principals (logins).
Database backup and restore operations can be executed by Azure AD server principals (logins).
Auditing of all statements related to Azure AD server principals (logins) and authentication events is
supported.
Dedicated administrator connection for Azure AD server principals (logins) which are members of sysadmin
server role is supported.
Supported through SQLCMD Utility and SQL Server Management Studio.
Logon triggers are supported for logon events coming from Azure AD server principals (logins).
Service Broker and Database Mail can be set up using an Azure AD server principal (login).

Connect by using Azure AD identities


Azure Active Directory authentication supports the following methods of connecting to a database using Azure
AD identities:
Azure Active Directory Password
Azure Active Directory Integrated
Azure Active Directory Universal with Multi-Factor Authentication
Using Application token authentication
The following authentication methods are supported for Azure AD server principals (logins):
Azure Active Directory Password
Azure Active Directory Integrated
Azure Active Directory Universal with Multi-Factor Authentication
Additional considerations
To enhance manageability, we recommend you provision a dedicated Azure AD group as an administrator.
Only one Azure AD administrator (a user or group) can be configured for a server in SQL Database or Azure
Synapse at any time.
The addition of Azure AD server principals (logins) for SQL Managed Instance allows the possibility of
creating multiple Azure AD server principals (logins) that can be added to the sysadmin role.
Only an Azure AD administrator for the server can initially connect to the server or managed instance using
an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD
database users.
Azure AD users and service principals (Azure AD applications) that are members of more than 2048 Azure
AD security groups are not supported to log in to the database in SQL Database, SQL Managed Instance, or
Azure Synapse.
We recommend setting the connection timeout to 30 seconds.
SQL Server 2016 Management Studio and SQL Server Data Tools for Visual Studio 2015 (version
14.0.60311.1, April 2016, or later) support Azure Active Directory authentication. (Azure AD authentication is
supported by the .NET Framework Data Provider for SqlServer; at least .NET Framework version 4.6.)
Therefore the newest versions of these tools and data-tier applications (DAC and BACPAC) can use Azure AD
authentication.
Beginning with version 15.0.1, sqlcmd utility and bcp utility support Active Directory Interactive
authentication with Multi-Factor Authentication.
SQL Server Data Tools for Visual Studio 2015 requires at least the April 2016 version of the Data Tools
(version 14.0.60311.1). Currently, Azure AD users are not shown in SSDT Object Explorer. As a workaround,
view the users in sys.database_principals.
Microsoft JDBC Driver 6.0 for SQL Server supports Azure AD authentication. Also, see Setting the Connection
Properties.
PolyBase cannot authenticate by using Azure AD authentication.
Azure AD authentication is supported for Azure SQL Database and Azure Synapse by using the Azure portal
Import Database and Export Database blades. Import and export using Azure AD authentication is also
supported from a PowerShell command.
Azure AD authentication is supported for SQL Database, SQL Managed Instance, and Azure Synapse by
using the CLI. For more information, see Configure and manage Azure AD authentication with SQL Database
or Azure Synapse and SQL Server - az sql server.

Next steps
To learn how to create and populate an Azure AD instance and then configure it with Azure SQL Database,
SQL Managed Instance, or Azure Synapse, see Configure and manage Azure Active Directory authentication
with SQL Database, SQL Managed Instance, or Azure Synapse.
For a tutorial of using Azure AD server principals (logins) with SQL Managed Instance, see Azure AD server
principals (logins) with SQL Managed Instance
For an overview of logins, users, database roles, and permissions in SQL Database, see Logins, users,
database roles, and permissions.
For more information about database principals, see Principals.
For more information about database roles, see Database roles.
For syntax on creating Azure AD server principals (logins) for SQL Managed Instance, see CREATE LOGIN.
For more information about firewall rules in SQL Database, see SQL Database firewall rules.
Configure and manage Azure AD authentication
with Azure SQL
7/12/2022 • 26 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article shows you how to create and populate an Azure Active Directory (Azure AD) instance, and then use
Azure AD with Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. For an
overview, see Azure Active Directory authentication.

Azure AD authentication methods


Azure AD authentication supports the following authentication methods:
Azure AD cloud-only identities
Azure AD hybrid identities that support:
Cloud authentication with two options coupled with seamless single sign-on (SSO)
Azure AD password hash authentication
Azure AD pass-through authentication
Federated authentication
For more information on Azure AD authentication methods, and which one to choose, see Choose the right
authentication method for your Azure Active Directory hybrid identity solution.
For more information on Azure AD hybrid identities, setup, and synchronization, see:
Password hash authentication - Implement password hash synchronization with Azure AD Connect sync
Pass-through authentication - Azure Active Directory Pass-through Authentication
Federated authentication - Deploying Active Directory Federation Services in Azure and Azure AD Connect
and federation

Create and populate an Azure AD instance


Create an Azure AD instance and populate it with users and groups. Azure AD can be the initial Azure AD
managed domain. Azure AD can also be an on-premises Active Directory Domain Services that is federated with
the Azure AD.
For more information, see:
Integrating your on-premises identities with Azure Active Directory
Add your own domain name to Azure AD
Microsoft Azure now supports federation with Windows Server Active Directory
What is Azure Active Directory?
Manage Azure AD using Windows PowerShell
Hybrid Identity Required Ports and Protocols.

Associate or add an Azure subscription to Azure Active Directory


1. Associate your Azure subscription to Azure Active Directory by making the directory a trusted directory
for the Azure subscription hosting the database. For details, see Associate or add an Azure subscription to
your Azure Active Directory tenant.
2. Use the directory switcher in the Azure portal to switch to the subscription associated with the domain.

IMPORTANT
Every Azure subscription has a trust relationship with an Azure AD instance. This means that it trusts that
directory to authenticate users, services, and devices. Multiple subscriptions can trust the same directory, but a
subscription trusts only one directory. This trust relationship that a subscription has with a directory is unlike the
relationship that a subscription has with all other resources in Azure (websites, databases, and so on), which are
more like child resources of a subscription. If a subscription expires, then access to those other resources
associated with the subscription also stops. But the directory remains in Azure, and you can associate another
subscription with that directory and continue to manage the directory users. For more information about
resources, see Understanding resource access in Azure. To learn more about this trusted relationship see How to
associate or add an Azure subscription to Azure Active Directory.

Azure AD admin with a server in SQL Database


Each server in Azure (which hosts SQL Database or Azure Synapse) starts with a single server administrator
account that is the administrator of the entire server. Create a second administrator account as an Azure AD
account. This principal is created as a contained database user in the master database of the server.
Administrator accounts are members of the db_owner role in every user database, and enter each user
database as the dbo user. For more information about administrator accounts, see Managing Databases and
Logins.
When using Azure Active Directory with geo-replication, the Azure Active Directory administrator must be
configured for both the primary and the secondary servers. If a server does not have an Azure Active Directory
administrator, then Azure Active Directory logins and users receive a Cannot connect to server error.

NOTE
Users that are not based on an Azure AD account (including the server administrator account) cannot create Azure AD-
based users, because they do not have permission to validate proposed database users with the Azure AD.

Provision Azure AD admin (SQL Managed Instance)


IMPORTANT
Only follow these steps if you are provisioning an Azure SQL Managed Instance. This operation can only be executed by a
Global Administrator or a Privileged Role Administrator in Azure AD.
In public preview, you can assign the Directory Readers role to a group in Azure AD. The group owners can then add
the managed instance identity as a member of this group, which would allow you to provision an Azure AD admin for the
SQL Managed Instance. For more information on this feature, see Directory Readers role in Azure Active Directory for
Azure SQL.

Your SQL Managed Instance needs permissions to read Azure AD to successfully accomplish tasks such as
authentication of users through security group membership or creation of new users. For this to work, you need
to grant the SQL Managed Instance permission to read Azure AD. You can do this using the Azure portal or
PowerShell.
Azure portal
To grant your SQL Managed Instance Azure AD read permission using the Azure portal, log in as Global
Administrator in Azure AD and follow these steps:
1. In the Azure portal, in the upper-right corner select your account, and then choose Switch directories to
confirm which Active Directory is currently your active directory. Switch directories, if necessary.

2. Choose the correct Active Directory as the default Azure AD.


This step links the subscription associated with Active Directory to the SQL Managed Instance, making
sure that the same subscription is used for both the Azure AD instance and the SQL Managed Instance.
3. Navigate to the SQL Managed Instance you want to use for Azure AD integration.

4. Select the banner on top of the Active Directory admin page and grant permission to the current user.

5. After the operation succeeds, the following notification will show up in the top-right corner:

6. Now you can choose your Azure AD admin for your SQL Managed Instance. For that, on the Active
Directory admin page, select the Set admin command.
7. On the Azure AD admin page, search for a user, select the user or group to be an administrator, and then
select Select .
The Active Directory admin page shows all members and groups of your Active Directory. Users or
groups that are grayed out can't be selected because they aren't supported as Azure AD administrators.
See the list of supported admins in Azure AD Features and Limitations. Azure role-based access control
(Azure RBAC) applies only to the Azure portal and isn't propagated to SQL Database, SQL Managed
Instance, or Azure Synapse.

8. At the top of the Active Directory admin page, select Save .


The process of changing the administrator may take several minutes. Then the new administrator
appears in the Active Directory admin box.
For Azure AD users and groups, the Object ID is displayed next to the admin name. For applications
(service principals), the Application ID is displayed.
After provisioning an Azure AD admin for your SQL Managed Instance, you can begin to create Azure AD server
principals (logins) with the CREATE LOGIN syntax. For more information, see SQL Managed Instance overview.
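As a minimal sketch (the user principal name is hypothetical), run the following in the master database of the managed instance while connected as the Azure AD admin:

CREATE LOGIN [dba@contoso.com] FROM EXTERNAL PROVIDER;
ALTER SERVER ROLE sysadmin ADD MEMBER [dba@contoso.com];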

TIP
To later remove an admin, at the top of the Active Directory admin page, select Remove admin, and then select Save.

PowerShell
To grant your SQL Managed Instance Azure AD read permission by using the PowerShell, run this script:
# Gives Azure Active Directory read permission to a service principal representing the SQL Managed Instance.
# Can be executed only by a "Global Administrator" or "Privileged Role Administrator" type of user.

$aadTenant = "<YourTenantId>" # Enter your tenant ID
$managedInstanceName = "MyManagedInstance"

# Get the Azure AD role "Directory Readers" and create it if it doesn't exist
$roleName = "Directory Readers"
$role = Get-AzureADDirectoryRole | Where-Object {$_.displayName -eq $roleName}
if ($role -eq $null) {
    # Instantiate an instance of the role template
    $roleTemplate = Get-AzureADDirectoryRoleTemplate | Where-Object {$_.displayName -eq $roleName}
    Enable-AzureADDirectoryRole -RoleTemplateId $roleTemplate.ObjectId
    $role = Get-AzureADDirectoryRole | Where-Object {$_.displayName -eq $roleName}
}

# Get the service principal for your SQL Managed Instance
$roleMember = Get-AzureADServicePrincipal -SearchString $managedInstanceName
$roleMember.Count
if ($roleMember -eq $null) {
    Write-Output "Error: No service principal with name '$($managedInstanceName)' found. Make sure the managedInstanceName parameter was entered correctly."
    exit
}
if (-not ($roleMember.Count -eq 1)) {
    Write-Output "Error: More than one service principal with name pattern '$($managedInstanceName)'"
    Write-Output "Dumping selected service principals...."
    $roleMember
    exit
}

# Check if the service principal is already a member of the Directory Readers role
$allDirReaders = Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId
$selDirReader = $allDirReaders | where {$_.ObjectId -match $roleMember.ObjectId}

if ($selDirReader -eq $null) {
    # Add the principal to the Directory Readers role
    Write-Output "Adding service principal '$($managedInstanceName)' to 'Directory Readers' role..."
    Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $roleMember.ObjectId
    Write-Output "'$($managedInstanceName)' service principal added to 'Directory Readers' role..."

    #Write-Output "Dumping service principal '$($managedInstanceName)':"
    #$allDirReaders = Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId
    #$allDirReaders | where {$_.ObjectId -match $roleMember.ObjectId}
}
else {
    Write-Output "Service principal '$($managedInstanceName)' is already a member of the 'Directory Readers' role."
}

PowerShell for SQL Managed Instance



To run PowerShell cmdlets, you need to have Azure PowerShell installed and running. For detailed information,
see How to install and configure Azure PowerShell.
IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by Azure SQL Managed Instance, but all future
development is for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December
2020. The arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For
more about their compatibility, see Introducing the new Azure PowerShell Az module.

To provision an Azure AD admin, execute the following Azure PowerShell commands:


Connect-AzAccount
Select-AzSubscription
The cmdlets used to provision and manage Azure AD admin for your SQL Managed Instance are listed in the
following table:

Set-AzSqlInstanceActiveDirectoryAdministrator - Provisions an Azure AD administrator for the SQL Managed Instance in the current subscription. (Must be from the current subscription.)
Remove-AzSqlInstanceActiveDirectoryAdministrator - Removes an Azure AD administrator for the SQL Managed Instance in the current subscription.
Get-AzSqlInstanceActiveDirectoryAdministrator - Returns information about an Azure AD administrator for the SQL Managed Instance in the current subscription.

The following command gets information about an Azure AD administrator for a SQL Managed Instance named
ManagedInstance01 that is associated with a resource group named ResourceGroup01.

Get-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName "ResourceGroup01" -InstanceName "ManagedInstance01"

The following command provisions an Azure AD administrator group named DBAs for the SQL Managed
Instance named ManagedInstance01. This server is associated with resource group ResourceGroup01.

Set-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName "ResourceGroup01" -InstanceName "ManagedInstance01" -DisplayName "DBAs" -ObjectId "40b79501-b343-44ed-9ce7-da4c8cc7353b"

The following command removes the Azure AD administrator for the SQL Managed Instance named
ManagedInstanceName01 associated with the resource group ResourceGroup01.

Remove-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName "ResourceGroup01" -InstanceName "ManagedInstanceName01" -Confirm -PassThru

Provision Azure AD admin (SQL Database)


IMPORTANT
Only follow these steps if you are provisioning a server for SQL Database or Azure Synapse.

The following two procedures show you how to provision an Azure Active Directory administrator for your
server in the Azure portal and by using PowerShell.
Azure portal
1. In the Azure portal, in the upper-right corner, select your connection to drop down a list of possible Active
Directories. Choose the correct Active Directory as the default Azure AD. This step links the subscription-associated Active Directory with the server, making sure that the same subscription is used for both Azure AD and the server.
2. Search for and select SQL servers.

NOTE
On this page, before you select SQL servers, you can select the star next to the name to favorite the category
and add SQL servers to the left navigation bar.

3. On the SQL server page, select Active Directory admin.


4. In the Active Directory admin page, select Set admin.

5. In the Add admin page, search for a user, select the user or group to be an administrator, and then select
Select. (The Active Directory admin page shows all members and groups of your Active Directory. Users
or groups that are grayed out cannot be selected because they are not supported as Azure AD
administrators. See the list of supported admins in the Azure AD Features and Limitations section of
Use Azure Active Directory Authentication for authentication with SQL Database or Azure Synapse.)
Azure role-based access control (Azure RBAC) applies only to the portal and is not propagated to SQL
Server.
6. At the top of the Active Directory admin page, select Save.

For Azure AD users and groups, the Object ID is displayed next to the admin name. For applications
(service principals), the Application ID is displayed.
The process of changing the administrator may take several minutes. Then the new administrator appears in the
Active Directory admin box.

NOTE
When setting up the Azure AD admin, the new admin name (user or group) cannot already be present in the virtual
master database as a server authentication user. If present, the Azure AD admin setup will fail, rolling back its creation and
indicating that an admin with that name already exists. Since such a server authentication user is not part of Azure AD,
any effort to connect to the server using Azure AD authentication fails.

To later remove an admin, at the top of the Active Directory admin page, select Remove admin, and then
select Save .
PowerShell for SQL Database and Azure Synapse

To run PowerShell cmdlets, you need to have Azure PowerShell installed and running. For detailed information,
see How to install and configure Azure PowerShell. To provision an Azure AD admin, execute the following Azure
PowerShell commands:
Connect-AzAccount
Select-AzSubscription
Cmdlets used to provision and manage Azure AD admin for SQL Database and Azure Synapse:

Set-AzSqlServerActiveDirectoryAdministrator - Provisions an Azure Active Directory administrator for the server hosting SQL Database or Azure Synapse. (Must be from the current subscription.)
Remove-AzSqlServerActiveDirectoryAdministrator - Removes an Azure Active Directory administrator for the server hosting SQL Database or Azure Synapse.
Get-AzSqlServerActiveDirectoryAdministrator - Returns information about an Azure Active Directory administrator currently configured for the server hosting SQL Database or Azure Synapse.

Use PowerShell command get-help to see more information for each of these commands. For example,
get-help Set-AzSqlServerActiveDirectoryAdministrator .

The following script provisions an Azure AD administrator group named DBA_Group (object ID
40b79501-b343-44ed-9ce7-da4c8cc7353f ) for the demo_ser ver server in a resource group named Group-23 :

Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "Group-23" -ServerName "demo_server" -DisplayName "DBA_Group"

The DisplayName input parameter accepts either the Azure AD display name or the User Principal Name. For
example, DisplayName="John Smith" and DisplayName="johns@contoso.com" . For Azure AD groups only the Azure
AD display name is supported.

NOTE
The Azure PowerShell command Set-AzSqlServerActiveDirectoryAdministrator does not prevent you from
provisioning Azure AD admins for unsupported users. An unsupported user can be provisioned, but cannot connect to a
database.

The following example uses the optional ObjectID :

Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "Group-23" -ServerName "demo_server" `
    -DisplayName "DBA_Group" -ObjectId "40b79501-b343-44ed-9ce7-da4c8cc7353f"

NOTE
The Azure AD ObjectID is required when the DisplayName is not unique. To retrieve the ObjectID and DisplayName
values, use the Active Directory section of Azure Classic Portal, and view the properties of a user or group.

The following example returns information about the current Azure AD admin for the server:
Get-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "Group-23" -ServerName "demo_server" |
Format-List

The following example removes an Azure AD administrator:

Remove-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "Group-23" -ServerName "demo_server"

NOTE
You can also provision an Azure Active Directory Administrator by using the REST APIs. For more information, see Service
Management REST API Reference and Operations for Azure SQL Database.

Configure your client computers


NOTE
System.Data.SqlClient uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're
using the System.Data.SqlClient namespace for Azure Active Directory authentication, migrate applications to
Microsoft.Data.SqlClient and the Microsoft Authentication Library (MSAL). For more information about using Azure AD
authentication with SqlClient, see Using Azure Active Directory authentication with SqlClient.
SSMS and SSDT still use the Azure Active Directory Authentication Library (ADAL). If you want to continue using
ADAL.DLL in your applications, you can use the links in this section to install the latest SSMS, ODBC, and OLE DB driver
that contains the latest ADAL.DLL library.

On all client machines from which your applications or users connect to SQL Database or Azure Synapse using
Azure AD identities, you must install the following software:
.NET Framework 4.6 or later from https://msdn.microsoft.com/library/5a4x27ek.aspx.
Microsoft Authentication Library (MSAL) or Azure Active Directory Authentication Library for SQL Server
(ADAL.DLL). Below are the download links to install the latest SSMS, ODBC, and OLE DB driver that contains
the ADAL.DLL library.
SQL Server Management Studio
ODBC Driver 17 for SQL Server
OLE DB Driver 18 for SQL Server
You can meet these requirements by:
Installing the latest version of SQL Server Management Studio or SQL Server Data Tools, either of which
meets the .NET Framework 4.6 requirement.
SSMS installs the x86 version of ADAL.DLL.
SSDT installs the amd64 version of ADAL.DLL.
The latest Visual Studio from Visual Studio Downloads meets the .NET Framework 4.6 requirement,
but does not install the required amd64 version of ADAL.DLL.

Create contained users mapped to Azure AD identities


Because SQL Managed Instance supports Azure AD server principals (logins), using contained database users is
not required. Azure AD server principals (logins) enable you to create logins from Azure AD users, groups, or
applications. This means that you can authenticate with your SQL Managed Instance by using the Azure AD
server login rather than a contained database user. For more information, see SQL Managed Instance overview.
For syntax on creating Azure AD server principals (logins), see CREATE LOGIN.
However, using Azure Active Directory authentication with SQL Database and Azure Synapse requires using
contained database users based on an Azure AD identity. A contained database user does not have a login in the
master database, and maps to an identity in Azure AD that is associated with the database. The Azure AD identity
can be either an individual user account or a group. For more information about contained database users, see
Contained Database Users- Making Your Database Portable.

NOTE
Database users (with the exception of administrators) cannot be created using the Azure portal. Azure roles are not
propagated to the database in SQL Database, the SQL Managed Instance, or Azure Synapse. Azure roles are used for
managing Azure Resources, and do not apply to database permissions. For example, the SQL Server Contributor role
does not grant access to connect to the database in SQL Database, the SQL Managed Instance, or Azure Synapse. The
access permission must be granted directly in the database using Transact-SQL statements.

WARNING
Special characters like colon : or ampersand & when included as user names in the T-SQL CREATE LOGIN and
CREATE USER statements are not supported.

IMPORTANT
Azure AD users and service principals (Azure AD applications) that are members of more than 2048 Azure AD security
groups are not supported to log in to the database in SQL Database, SQL Managed Instance, or Azure Synapse.

To create an Azure AD-based contained database user (other than the server administrator that owns the
database), connect to the database with an Azure AD identity, as a user with at least the ALTER ANY USER
permission. Then use the following Transact-SQL syntax:

CREATE USER [<Azure_AD_principal_name>] FROM EXTERNAL PROVIDER;

Azure_AD_principal_name can be the user principal name of an Azure AD user or the display name for an Azure
AD group.
Examples: To create a contained database user representing an Azure AD federated or managed domain user:

CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER;


CREATE USER [alice@fabrikam.onmicrosoft.com] FROM EXTERNAL PROVIDER;

To create a contained database user representing an Azure AD or federated domain group, provide the display
name of a security group:

CREATE USER [ICU Nurses] FROM EXTERNAL PROVIDER;

To create a contained database user representing an application that connects using an Azure AD token:

CREATE USER [appName] FROM EXTERNAL PROVIDER;


NOTE
This command requires that SQL access Azure AD (the "external provider") on behalf of the logged-in user. Sometimes,
circumstances will arise that cause Azure AD to return an exception back to SQL. In these cases, the user will see SQL
error 33134, which should contain the Azure AD-specific error message. Most of the time, the error will say that access is
denied, or that the user must enroll in MFA to access the resource, or that access between first-party applications must be
handled via preauthorization. In the first two cases, the issue is usually caused by Conditional Access policies that are set
in the user's Azure AD tenant: they prevent the user from accessing the external provider. Updating the Conditional
Access policies to allow access to the application '00000003-0000-0000-c000-000000000000' (the application ID of the
Microsoft Graph API) should resolve the issue. In the case that the error says access between first-party applications must
be handled via preauthorization, the issue is because the user is signed in as a service principal. The command should
succeed if it is executed by a user instead.

TIP
You cannot directly create a user from an Azure Active Directory other than the Azure Active Directory that is associated
with your Azure subscription. However, members of other Active Directories that are imported users in the associated
Active Directory (known as external users) can be added to an Active Directory group in the tenant Active Directory. By
creating a contained database user for that AD group, the users from the external Active Directory can gain access to SQL
Database.

For more information about creating contained database users based on Azure Active Directory identities, see
CREATE USER (Transact-SQL).

NOTE
Removing the Azure Active Directory administrator for the server prevents any Azure AD authentication user from
connecting to the server. If necessary, unusable Azure AD users can be dropped manually by a SQL Database
administrator.

NOTE
If you receive a Connection Timeout Expired , you may need to set the TransparentNetworkIPResolution parameter
of the connection string to false. For more information, see Connection timeout issue with .NET Framework 4.6.1 -
TransparentNetworkIPResolution.

When you create a database user, that user receives the CONNECT permission and can connect to that
database as a member of the PUBLIC role. Initially the only permissions available to the user are any
permissions granted to the PUBLIC role, or any permissions granted to any Azure AD groups that they are a
member of. Once you provision an Azure AD-based contained database user, you can grant the user additional
permissions, the same way as you grant permission to any other type of user. Typically grant permissions to
database roles, and add users to roles. For more information, see Database Engine Permission Basics. For more
information about special SQL Database roles, see Managing Databases and Logins in Azure SQL Database. A
federated domain user account that is imported into a managed domain as an external user must use the
managed domain identity.
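For example, a typical sequence after the contained user is created is to add it to one or more database roles (the user principal name is hypothetical):

CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [bob@contoso.com];
ALTER ROLE db_datawriter ADD MEMBER [bob@contoso.com];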

NOTE
Azure AD users are marked in the database metadata with type E (EXTERNAL_USER) and for groups with type X
(EXTERNAL_GROUPS). For more information, see sys.database_principals.
Connect to the database using SSMS or SSDT
To confirm the Azure AD administrator is properly set up, connect to the master database using the Azure AD
administrator account. To provision an Azure AD-based contained database user (other than the server
administrator that owns the database), connect to the database with an Azure AD identity that has access to the
database.

IMPORTANT
Support for Azure Active Directory authentication is available with SQL Server Management Studio (SSMS) starting in
2016 and SQL Server Data Tools starting in 2015. The August 2016 release of SSMS also includes support for Active
Directory Universal Authentication, which allows administrators to require Multi-Factor Authentication using a phone call,
text message, smart cards with pin, or mobile app notification.

Using an Azure AD identity to connect using SSMS or SSDT


The following procedures show you how to connect to SQL Database with an Azure AD identity using SQL
Server Management Studio or SQL Server Database Tools.
Active Directory integrated authentication
Use this method if you are logged into Windows using your Azure Active Directory credentials from a federated
domain, or a managed domain that is configured for seamless single sign-on for pass-through and password
hash authentication. For more information, see Azure Active Directory Seamless Single Sign-On.
1. Start Management Studio or Data Tools and in the Connect to Server (or Connect to Database
Engine) dialog box, in the Authentication box, select Azure Active Directory - Integrated. No
password is needed or can be entered because your existing credentials will be presented for the
connection.

2. Select the Options button, and on the Connection Properties page, in the Connect to database box,
type the name of the user database you want to connect to. For more information, see the article Multi-
factor Azure AD auth on the differences between the Connection Properties for SSMS 17.x and 18.x.
Active Directory password authentication
Use this method when connecting with an Azure AD principal name using the Azure AD managed domain. You
can also use it for federated accounts without access to the domain, for example, when working remotely.
Use this method to authenticate to the database in SQL Database or the SQL Managed Instance with Azure AD
cloud-only identity users, or those who use Azure AD hybrid identities. This method supports users who want to
use their Windows credential, but their local machine is not joined with the domain (for example, using remote
access). In this case, a Windows user can indicate their domain account and password, and can authenticate to
the database in SQL Database, the SQL Managed Instance, or Azure Synapse.
1. Start Management Studio or Data Tools and in the Connect to Server (or Connect to Database
Engine) dialog box, in the Authentication box, select Azure Active Directory - Password.
2. In the User name box, type your Azure Active Directory user name in the format
username@domain.com . User names must be an account from Azure Active Directory or an account
from a managed or federated domain with Azure Active Directory.
3. In the Password box, type your user password for the Azure Active Directory account or
managed/federated domain account.
4. Select the Options button, and on the Connection Properties page, in the Connect to database box,
type the name of the user database you want to connect to. (See the graphic in the previous option.)
Active Directory interactive authentication
Use this method for interactive authentication with or without Multi-Factor Authentication (MFA), with password
being requested interactively. This method can be used to authenticate to the database in SQL Database, the SQL
Managed Instance, and Azure Synapse for Azure AD cloud-only identity users, or those who use Azure AD
hybrid identities.
For more information, see Using multi-factor Azure AD authentication with SQL Database and Azure Synapse
(SSMS support for MFA).

Using an Azure AD identity to connect from a client application


The following procedures show you how to connect to a SQL Database with an Azure AD identity from a client
application.
Active Directory integrated authentication
To use integrated Windows authentication, your domain's Active Directory must be federated with Azure Active
Directory, or should be a managed domain that is configured for seamless single sign-on for pass-through or
password hash authentication. For more information, see Azure Active Directory Seamless Single Sign-On.
Your client application (or a service) connecting to the database must be running on a domain-joined machine
under a user's domain credentials.
To connect to a database using integrated authentication and an Azure AD identity, the Authentication keyword
in the database connection string must be set to Active Directory Integrated. The following C# code sample
uses ADO.NET.
string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Authentication=Active Directory
Integrated; Initial Catalog=testdb;";
SqlConnection conn = new SqlConnection(ConnectionString);
conn.Open();

The connection string keyword Integrated Security=True is not supported for connecting to Azure SQL
Database. When making an ODBC connection, you will need to remove spaces and set Authentication to
'ActiveDirectoryIntegrated'.
Active Directory password authentication
To connect to a database using Azure AD cloud-only identity user accounts, or those who use Azure AD hybrid
identities, the Authentication keyword must be set to Active Directory Password. The connection string must
contain User ID/UID and Password/PWD keywords and values. The following C# code sample uses ADO.NET.

string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Authentication=Active Directory Password; Initial Catalog=testdb; UID=bob@contoso.onmicrosoft.com; PWD=MyPassWord!";
SqlConnection conn = new SqlConnection(ConnectionString);
conn.Open();

Learn more about Azure AD authentication methods using the demo code samples available at Azure AD
Authentication GitHub Demo.

Azure AD token
This authentication method allows middle-tier services to obtain JSON Web Tokens (JWT) to connect to the
database in SQL Database, the SQL Managed Instance, or Azure Synapse by obtaining a token from Azure AD.
This method enables various application scenarios including service identities, service principals, and
applications using certificate-based authentication. You must complete four basic steps to use Azure AD token
authentication:
1. Register your application with Azure Active Directory and get the client ID for your code.
2. Create a database user representing the application. (Completed earlier in step 6.)
3. Create a certificate on the client computer that runs the application.
4. Add the certificate as a key for your application.
Sample connection string:

string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Initial Catalog=testdb;";


SqlConnection conn = new SqlConnection(ConnectionString);
conn.AccessToken = "Your JWT token";
conn.Open();

For more information, see SQL Server Security Blog. For information about adding a certificate, see Get started
with certificate-based authentication in Azure Active Directory.
sqlcmd
The following statements connect using version 13.1 of sqlcmd, which is available from the Download Center.

NOTE
sqlcmd with the -G command does not work with system identities, and requires a user principal login.
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -U bob@contoso.com -P MyAADPassword -G -l 30

Troubleshoot Azure AD authentication


Guidance on troubleshooting issues with Azure AD authentication can be found in the following blog:
https://techcommunity.microsoft.com/t5/azure-sql-database/troubleshooting-problems-related-to-azure-ad-authentication-with/ba-p/1062991

Next steps
For an overview of logins, users, database roles, and permissions in SQL Database, see Logins, users,
database roles, and user accounts.
For more information about database principals, see Principals.
For more information about database roles, see Database roles.
For more information about firewall rules in SQL Database, see SQL Database firewall rules.
For information about how to set an Azure AD guest user as the Azure AD admin, see Create Azure AD guest
users and set as an Azure AD admin.
For information on how to use service principals with Azure SQL, see Create Azure AD users using Azure AD
applications
Using multi-factor Azure Active Directory
authentication
7/12/2022 • 6 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics support connections from SQL
Server Management Studio (SSMS) using Azure Active Directory - Universal with MFA authentication. This
article discusses the differences between the various authentication options, and also the limitations associated
with using Universal Authentication in Azure Active Directory (Azure AD) for Azure SQL.
Download the latest SSMS - On the client computer, download the latest version of SSMS, from Download
SQL Server Management Studio (SSMS).

NOTE
Since December 2021, releases of SSMS prior to 18.6 no longer authenticate through Azure Active Directory with MFA.
To continue utilizing Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.

For all the features discussed in this article, use at least the July 2017 release, version 17.2. The most recent connection
dialog box should look similar to the following image:

Authentication options
There are two non-interactive authentication models for Azure AD, which can be used in many different
applications (ADO.NET, JDBC, ODBC, and so on). These two methods never result in pop-up dialog boxes:
Azure Active Directory - Password
Azure Active Directory - Integrated

The interactive method that also supports Azure AD multi-factor authentication (MFA) is:
Azure Active Directory - Universal with MFA

Azure AD MFA helps safeguard access to data and applications while meeting user demand for a simple sign-in
process. It delivers strong authentication with a range of easy verification options (phone call, text message,
smart cards with pin, or mobile app notification), allowing users to choose the method they prefer. Interactive
MFA with Azure AD can result in a pop-up dialog box for validation.
For a description of Azure AD multi-factor authentication, see multi-factor authentication. For configuration
steps, see Configure Azure SQL Database multi-factor authentication for SQL Server Management Studio.
Azure AD domain name or tenant ID parameter
Beginning with SSMS version 17, users that are imported into the current Azure AD from other Azure Active
Directories as guest users can provide the Azure AD domain name or tenant ID when they connect. Guest users
include users invited from other Azure ADs, Microsoft accounts such as outlook.com, hotmail.com, live.com, or
other accounts like gmail.com. This information allows Azure Active Directory - Universal with MFA
authentication to identify the correct authenticating authority. This option is also required to support Microsoft
accounts (MSA) such as outlook.com, hotmail.com, live.com, or non-MSA accounts.
All guest users who want to be authenticated using Universal Authentication must enter their Azure AD domain
name or tenant ID. This parameter represents the current Azure AD domain name or tenant ID that the Azure
SQL logical server is associated with. For example, if the SQL logical server is associated with the Azure AD
domain contosotest.onmicrosoft.com , where user joe@contosodev.onmicrosoft.com is hosted as an imported
user from the Azure AD domain contosodev.onmicrosoft.com , the domain name required to authenticate this
user is contosotest.onmicrosoft.com . When the user is a native user of the Azure AD associated to SQL logical
server, and is not an MSA account, no domain name or tenant ID is required. To enter the parameter (beginning
with SSMS version 17.2):
1. Open a connection in SSMS. Input your server name, and select Azure Active Directory - Universal
with MFA authentication. Add the User name that you want to sign in with.
2. Select the Options box, and go to the Connection Properties tab. In the Connect to Database
dialog box, complete the dialog box for your database. Check the AD domain name or tenant ID box,
and provide the authenticating authority, such as the domain name (contosotest.onmicrosoft.com) or the
GUID of the tenant ID.
If you are running SSMS 18.x or later, the AD domain name or tenant ID is no longer needed for guest users
because 18.x or later automatically recognizes it.
Azure AD business to business support
Azure AD users that are supported for Azure AD B2B scenarios as guest users (see What is Azure B2B
collaboration) can connect to SQL Database and Azure Synapse as individual users or members of an Azure AD
group created in the associated Azure AD, and mapped manually using the CREATE USER (Transact-SQL)
statement in a given database.
For example, if steve@gmail.com is invited to Azure AD contosotest (with the Azure AD domain
contosotest.onmicrosoft.com), a user steve@gmail.com must be created for a specific database (such as
MyDatabase) by an Azure AD SQL administrator or Azure AD DBO by executing the Transact-SQL statement
create user [steve@gmail.com] FROM EXTERNAL PROVIDER. If steve@gmail.com is part of an Azure AD
group, such as usergroup, then this group must be created for a specific database (such as MyDatabase) by an
Azure AD SQL administrator or Azure AD DBO by executing the Transact-SQL statement
create user [usergroup] FROM EXTERNAL PROVIDER.

After the database user or group is created, then the user steve@gmail.com can sign into MyDatabase using the
SSMS authentication option Azure Active Directory – Universal with MFA . By default, the user or group only
has connect permission. Any further data access will need to be granted in the database by a user with enough
privilege.
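
As an illustrative sketch that reuses this article's example names (steve@gmail.com and the usergroup group), the Azure AD admin could run the following T-SQL in MyDatabase to create the guest user or group and grant additional access beyond connect:

CREATE USER [steve@gmail.com] FROM EXTERNAL PROVIDER;
-- or, if access is granted through the Azure AD group instead
CREATE USER [usergroup] FROM EXTERNAL PROVIDER;
-- grant further data access, for example read access to the dbo schema
GRANT SELECT ON SCHEMA::dbo TO [steve@gmail.com];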
NOTE
For SSMS 17.x, using steve@gmail.com as a guest user, you must check the AD domain name or tenant ID box and
add the AD domain name contosotest.onmicrosoft.com in the Connection Property dialog box. The AD domain
name or tenant ID option is only supported for the Azure Active Directory - Universal with MFA authentication.
Otherwise, the check box is greyed out.

Universal Authentication limitations


SSMS and SqlPackage.exe are the only tools currently enabled for MFA through Active Directory Universal
Authentication.
SSMS version 17.2 supports multi-user concurrent access using Universal Authentication with MFA. For
SSMS version 17.0 and 17.1, the tool restricts a login for an instance of SSMS using Universal Authentication
to a single Azure Active Directory account. To sign in as another Azure AD account, you must use another
instance of SSMS. This restriction is limited to Active Directory Universal Authentication; you can sign into a
different server using Azure Active Directory - Password authentication,
Azure Active Directory - Integrated authentication, or SQL Server Authentication.
SSMS supports Active Directory Universal Authentication for Object Explorer, Query Editor, and Query Store
visualization.
SSMS version 17.2 provides DacFx Wizard support for Export/Extract/Deploy Data database. Once a specific
user is authenticated through the initial authentication dialog using Universal Authentication, the DacFx
Wizard functions the same way it does for all other authentication methods.
The SSMS Table Designer does not support Universal Authentication.
There are no additional software requirements for Active Directory Universal Authentication except that you
must use a supported version of SSMS.
See the following link for the latest Microsoft Authentication Library (MSAL) version for Universal
authentication: Overview of the Microsoft Authentication Library (MSAL).

Next steps
For configuration steps, see Configure Azure SQL Database multi-factor authentication for SQL Server
Management Studio.
Grant others access to your database: SQL Database Authentication and Authorization: Granting Access
Make sure others can connect through the firewall: Configure a server-level firewall rule using the Azure
portal
Configure and manage Azure Active Directory authentication with SQL Database or Azure Synapse
Create Azure AD guest users and set as an Azure AD admin
Microsoft SQL Server Data-Tier Application Framework (17.0.0 GA)
SQLPackage.exe
Import a BACPAC file to a new database
Export a database to a BACPAC file
C# interface IUniversalAuthProvider Interface
Configure multi-factor authentication for SQL
Server Management Studio and Azure AD

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article shows you how to use Azure Active Directory (Azure AD) multi-factor authentication (MFA) with SQL
Server Management Studio (SSMS). Azure AD MFA can be used when connecting SSMS or SqlPackage.exe to
Azure SQL Database, Azure SQL Managed Instance and Azure Synapse Analytics. For an overview of multi-
factor authentication, see Universal Authentication with SQL Database, SQL Managed Instance, and Azure
Synapse (SSMS support for MFA).

IMPORTANT
Databases in Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse are referred to collectively in the
remainder of this article as databases, and the term server refers to the server that hosts databases for Azure SQL
Database and Azure Synapse.

Configuration steps
1. Configure an Azure Active Directory - For more information, see Administering your Azure AD
directory, Integrating your on-premises identities with Azure Active Directory, Add your own domain name
to Azure AD, Microsoft Azure now supports federation with Windows Server Active Directory, and Manage
Azure AD using Windows PowerShell.
2. Configure MFA - For step-by-step instructions, see What is Azure AD Multi-Factor Authentication?,
Conditional Access (MFA) with Azure SQL Database and Data Warehouse. (Full Conditional Access requires a
Premium Azure Active Directory. Limited MFA is available with a standard Azure AD.)
3. Configure Azure AD Authentication - For step-by-step instructions, see Connecting to SQL Database,
SQL Managed Instance, or Azure Synapse using Azure Active Directory Authentication.
4. Download SSMS - On the client computer, download the latest SSMS, from Download SQL Server
Management Studio (SSMS).

Connecting by using universal authentication with SSMS


The following steps show how to connect using the latest SSMS.

NOTE
Since December 2021, SSMS releases prior to 18.6 no longer authenticate through Azure Active Directory with MFA.
To continue using Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.

1. To connect using Universal Authentication, on the Connect to Server dialog box in SQL Server
Management Studio (SSMS), select Active Directory - Universal with MFA support. (If you see
Active Directory Universal Authentication, you are not on the latest version of SSMS.)
2. Complete the User name box with the Azure Active Directory credentials, in the format
user_name@domain.com .

3. If you are connecting as a guest user, you no longer need to complete the AD domain name or tenant ID
field for guest users because SSMS 18.x or later automatically recognizes it. For more information, see
Universal Authentication with SQL Database, SQL Managed Instance, and Azure Synapse (SSMS support
for MFA).

However, if you are connecting as a guest user using SSMS 17.x or older, you must click Options, and on
the Connection Property dialog box, complete the AD domain name or tenant ID box.
4. Select Options and specify the database on the Options dialog box. If the connected user is a guest
user (for example, joe@outlook.com), you must check the box and add the current AD domain name or tenant ID
as part of Options. See Universal Authentication with SQL Database and Azure Synapse Analytics (SSMS
support for MFA). Then click Connect.
5. When the Sign in to your account dialog box appears, provide the account and password of your
Azure Active Directory identity. No password is required if a user is part of a domain federated with
Azure AD.
NOTE
For Universal Authentication with an account that does not require MFA, you connect at this point. For users
requiring MFA, continue with the following steps:

6. Two MFA setup dialog boxes might appear. This one-time operation depends on the MFA administrator
setting, and therefore may be optional. For an MFA-enabled domain this step is sometimes pre-defined
(for example, the domain requires users to use a smart card and PIN).
7. The second possible one-time dialog box allows you to select the details of your authentication method.
The possible options are configured by your administrator.

8. Azure Active Directory sends the confirming information to you. When you receive the verification
code, enter it into the Enter verification code box, and click Sign in.
When verification is complete, SSMS connects normally, assuming valid credentials and firewall access.

Next steps
For an overview of multi-factor authentication, see Universal Authentication with SQL Database, SQL
Managed Instance, and Azure Synapse (SSMS support for MFA).
Grant others access to your database: SQL Database Authentication and Authorization: Granting Access
Make sure others can connect through the firewall: Configure a server-level firewall rule using the Azure
portal
Conditional Access with Azure SQL Database and
Azure Synapse Analytics

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics support Microsoft Conditional
Access.
The following steps show how to configure Azure SQL Database, SQL Managed Instance, or Azure Synapse to
enforce a Conditional Access policy.

Prerequisites
You must configure Azure SQL Database, Azure SQL Managed Instance, or dedicated SQL pool in Azure
Synapse to support Azure Active Directory (Azure AD) authentication. For specific steps, see Configure and
manage Azure Active Directory authentication with SQL Database or Azure Synapse.
When Multi-Factor Authentication is enabled, you must connect with a supported tool, such as the latest SQL
Server Management Studio (SSMS). For more information, see Configure Azure SQL Database multi-factor
authentication for SQL Server Management Studio.

Configure conditional access


NOTE
The below example uses Azure SQL Database, but you should select the appropriate product that you want to configure
conditional access.

1. Sign in to the Azure portal, select Azure Active Directory, and then select Conditional Access. For
more information, see Azure Active Directory Conditional Access technical reference.
2. In the Conditional Access-Policies blade, click New policy, provide a name, and then click Configure
rules.
3. Under Assignments, select Users and groups, check Select users and groups, and then select the
user or group for Conditional Access. Click Select, and then click Done to accept your selection.
4. Select Cloud apps, click Select apps. You see all apps available for Conditional Access. Select Azure
SQL Database, at the bottom click Select, and then click Done.
If you can't find Azure SQL Database listed among the available cloud apps, complete the following
steps:
Connect to your database in Azure SQL Database by using SSMS with an Azure AD admin account.
Execute CREATE USER [user@yourtenant.com] FROM EXTERNAL PROVIDER .
Sign into Azure AD and verify that Azure SQL Database, SQL Managed Instance, or Azure Synapse are
listed in the applications in your Azure AD instance.
5. Select Access controls, select Grant, and then check the policy you want to apply. For this example, we
select Require multi-factor authentication.
Summary
The selected application (Azure SQL Database), using Azure AD Premium, now enforces the selected Conditional
Access policy, Require multi-factor authentication.
For questions about Azure SQL Database and Azure Synapse regarding multi-factor authentication, contact
MFAforSQLDB@microsoft.com.

Next steps
For a tutorial, see Secure your database in SQL Database.
Azure Active Directory server principals

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
(dedicated SQL pools only)

NOTE
Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure
SQL Managed Instance can already utilize Azure AD logins.

You can now create and utilize Azure AD server principals, which are logins in the virtual master database of a
SQL Database. There are several benefits of using Azure AD server principals for SQL Database:
Support Azure SQL Database server roles for permission management.
Support multiple Azure AD users with special roles for SQL Database, such as the loginmanager and
dbmanager roles.
Functional parity between SQL logins and Azure AD logins.
Support for functional improvements, such as Azure AD-only authentication. Azure AD-only
authentication allows SQL authentication to be disabled, which includes the SQL server admin, SQL logins,
and users.
Allows Azure AD principals to support geo-replicas. Azure AD principals can connect to the geo-replica of a
user database with read-only permission, while being denied permission to the primary server.
Ability to use Azure AD service principal logins with special roles to execute a full automation of user and
database creation, as well as maintenance provided by Azure AD applications.
Closer functionality between Managed Instance and SQL Database, as Managed Instance already supports
Azure AD logins in the master database.
For more information on Azure AD authentication in Azure SQL, see Use Azure Active Directory authentication

Permissions
The following permissions are required to utilize or create Azure AD logins in the virtual master database.
Azure AD admin permission or membership in the loginmanager server role. The first Azure AD login can
only be created by the Azure AD admin.
Must be a member of Azure AD within the same directory used for Azure SQL Database
By default, the standard permission granted to a newly created Azure AD login in the master database is VIEW
ANY DATABASE.

Azure AD logins syntax


New syntax for Azure SQL Database to use Azure AD server principals has been introduced with this feature
release.
Create login syntax
CREATE LOGIN login_name { FROM EXTERNAL PROVIDER | WITH <option_list> [,..] }

<option_list> ::=
    PASSWORD = { 'password' }
    [ , SID = sid ]

The login_name specifies the Azure AD principal, which is an Azure AD user, group, or application.
For more information, see CREATE LOGIN (Transact-SQL).
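
For example, while connected to the virtual master database as the Azure AD admin, the following statement creates a login for the hypothetical Azure AD user bob@contoso.com referenced later in this article (the same syntax applies to an Azure AD group or application name):

CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER;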
Create user syntax
The below T-SQL syntax is already available in SQL Database, and can be used for creating database-level Azure
AD principals mapped to Azure AD logins in the virtual master database.
To create an Azure AD user from an Azure AD login, use the following syntax. Only the Azure AD admin can
execute this command in the virtual master database.

CREATE USER user_name FROM LOGIN login_name

For more information, see CREATE USER (Transact-SQL).
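
Continuing the sketch above, the Azure AD admin can then map a database-level user in the virtual master database to that login:

CREATE USER [bob@contoso.com] FROM LOGIN [bob@contoso.com];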


Disable or enable a login using ALTER LOGIN syntax
The ALTER LOGIN (Transact-SQL) DDL syntax can be used to enable or disable an Azure AD login in Azure SQL
Database.

ALTER LOGIN login_name DISABLE

Once disabled, the Azure AD principal login_name can't log in to any user database on the SQL Database logical
server where an Azure AD user principal user_name, mapped to the login login_name, was created.

NOTE
ALTER LOGIN login_name DISABLE is not supported for contained users.
ALTER LOGIN login_name DISABLE is not supported for Azure AD groups.
An individual disabled login cannot belong to a user who is part of a login group created in the master database
(for example, an Azure AD admin group).
For the DISABLE or ENABLE changes to take immediate effect, the authentication cache and the
TokenAndPermUserStore cache must be cleared using the T-SQL commands.

DBCC FLUSHAUTHCACHE
DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS

Roles for Azure AD principals


Special roles for SQL Database can be assigned to users in the virtual master database for Azure AD principals,
including dbmanager and loginmanager .
Azure SQL Database server roles can be assigned to logins in the virtual master database.
For a tutorial on how to grant these roles, see Tutorial: Create and utilize Azure Active Directory server logins.
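
As a brief, illustrative sketch (it assumes the login and master database user created earlier in this article, and uses ##MS_ServerStateReader## as an example of an Azure SQL Database server role), the Azure AD admin could grant these roles in the virtual master database as follows:

-- special master database roles for Azure AD principals
ALTER ROLE loginmanager ADD MEMBER [bob@contoso.com];
ALTER ROLE dbmanager ADD MEMBER [bob@contoso.com];
-- an Azure SQL Database server role assigned to the login
ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER [bob@contoso.com];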
Limitations and remarks
The SQL server admin can’t create Azure AD logins or users in any databases.
Changing a database ownership to an Azure AD group as database owner isn't supported.
ALTER AUTHORIZATION ON database::<mydb> TO [my_aad_group] fails with an error message:

Msg 33181, Level 16, State 1, Line 4
The new owner cannot be Azure Active Directory group.

Changing a database ownership to an individual user is supported.


A SQL admin or SQL user can’t execute the following Azure AD operations:
CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER
CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER
EXECUTE AS USER [bob@contoso.com]
ALTER AUTHORIZATION ON securable::name TO [bob@contoso.com]
Impersonation of Azure AD server-level principals (logins) isn't supported:
EXECUTE AS Clause (Transact-SQL)
EXECUTE AS (Transact-SQL)
Impersonation of Azure AD database-level principals (users) at a user database-level is still supported.
Azure AD logins overlapping with Azure AD administrator aren't supported. Azure AD admin takes
precedence over any login. If an Azure AD account already has access to the server as an Azure AD admin,
either directly or as a member of the admin group, the login created for this user won't have any effect. The
login creation isn't blocked through T-SQL. After the account authenticates to the server, the login will have
the effective permissions of an Azure AD admin, and not of a newly created login.
Changing permissions on specific Azure AD login object isn't supported:
GRANT <PERMISSION> ON LOGIN :: <Azure AD account> TO <Any other login>
When permissions are altered for an Azure AD login with existing open connections to an Azure SQL
Database, permissions aren't effective until the user reconnects. Also flush the authentication cache and the
TokenAndPermUserStore cache. This applies to server role membership change using the ALTER SERVER
ROLE statement.
Setting an Azure AD login mapped to an Azure AD group as the database owner isn't supported.
Azure SQL Database server roles aren't supported for Azure AD groups.

Next steps
Tutorial: Create and utilize Azure Active Directory server logins
Azure Active Directory service principal with Azure
SQL

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
(dedicated SQL pools only)
Azure Active Directory (Azure AD) supports user creation in Azure SQL Database (SQL DB) on behalf of Azure
AD applications (service principals). This is supported for Azure SQL Database and Azure SQL Managed
Instance, as well as for dedicated SQL pools in Azure Synapse workspaces and dedicated SQL pools
(formerly SQL DW).

Service principal (Azure AD applications) support


This article applies to applications that are integrated with Azure AD, and are part of Azure AD registration.
These applications often need authentication and authorization access to Azure SQL to perform various tasks.
This feature allows service principals to create Azure AD users in SQL Database. A previous limitation that
prevented Azure AD object creation on behalf of Azure AD applications has been removed.
When an Azure AD application is registered using the Azure portal or a PowerShell command, two objects are
created in the Azure AD tenant:
An application object
A service principal object
For more information on Azure AD applications, see Application and service principal objects in Azure Active
Directory and Create an Azure service principal with Azure PowerShell.
SQL Database and SQL Managed Instance support the following Azure AD objects:
Azure AD users (managed, federated, and guest)
Azure AD groups (managed and federated)
Azure AD applications
The T-SQL command CREATE USER [Azure_AD_Object] FROM EXTERNAL PROVIDER on behalf of an Azure AD
application is now supported for SQL Database.
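
As a minimal sketch, assuming a service principal that has been set as the Azure AD admin is connected to the database, it can create an Azure AD user for the application myapp (the hypothetical application name used in the troubleshooting example later in this article) without any human interaction:

CREATE USER [myapp] FROM EXTERNAL PROVIDER;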

Functionality of Azure AD user creation using service principals


Supporting this functionality is useful in Azure AD application automation processes where Azure AD objects
are created and maintained in SQL Database without human interaction. Service principals can be an Azure AD
admin for the SQL logical server, as part of a group or an individual user. The application can automate Azure
AD object creation in SQL Database when executed as a system administrator, and does not require any
additional SQL privileges. This allows for a full automation of a database user creation. This feature also
supports Azure AD system-assigned managed identity and user-assigned managed identity that can be created
as users in SQL Database on behalf of service principals. For more information, see What are managed identities
for Azure resources?

Enable service principals to create Azure AD users


To enable an Azure AD object creation in SQL Database on behalf of an Azure AD application, the following
settings are required:
1. Assign the server identity. The assigned server identity represents the Managed Service Identity (MSI).
The server identity can be system-assigned or user-assigned managed identity. For more information, see
User-assigned managed identity in Azure AD for Azure SQL.
For a new Azure SQL logical server, execute the following PowerShell command:

New-AzSqlServer -ResourceGroupName <resource group> -Location <Location name> -ServerName <Server name> -ServerVersion "12.0" -SqlAdministratorCredentials (Get-Credential) -AssignIdentity

For more information, see the New-AzSqlServer command, or New-AzSqlInstance command for SQL
Managed Instance.
For existing Azure SQL Logical servers, execute the following command:

Set-AzSqlServer -ResourceGroupName <resource group> -ServerName <Server name> -AssignIdentity

For more information, see the Set-AzSqlServer command, or Set-AzSqlInstance command for SQL
Managed Instance.
To check if the server identity is assigned to the server, execute the Get-AzSqlServer command.

NOTE
Server identity can be assigned using REST API and CLI commands as well. For more information, see az sql server
create, az sql server update, and Servers - REST API.

2. Grant the Azure AD Directory Readers permission to the server identity created or assigned to the
server.
To grant this permission, follow the description used for SQL Managed Instance that is available in the
following article: Provision Azure AD admin (SQL Managed Instance)
The Azure AD user who is granting this permission must be part of the Azure AD Global
Administrator or Privileged Roles Administrator role.
For dedicated SQL pools in an Azure Synapse workspace, use the workspace's managed identity
instead of the Azure SQL server identity.

IMPORTANT
With Microsoft Graph support for Azure SQL, the Directory Readers role can be replaced with lower-level
permissions. For more information, see User-assigned managed identity in Azure AD for Azure SQL.
Steps 1 and 2 must be executed in the above order. First, create or assign the server identity, followed by granting the
Directory Readers permission, or the lower-level permissions discussed in User-assigned managed identity in Azure AD for
Azure SQL. Omitting one or both of these steps will cause an execution error during an Azure AD object creation in Azure
SQL on behalf of an Azure AD application.
You can assign the Directory Readers role to a group in Azure AD. The group owners can then add the managed
identity as a member of this group, which would bypass the need for a Global Administrator or Privileged Roles
Administrator to grant the Directory Readers role. For more information on this feature, see Directory Readers role in
Azure Active Directory for Azure SQL.

Troubleshooting and limitations


When creating Azure AD objects in Azure SQL on behalf of an Azure AD application without enabling the server
identity and granting the Directory Readers permission, or the lower-level permissions discussed in User-assigned
managed identity in Azure AD for Azure SQL, the operation will fail with the following possible errors. The
following example error is for a PowerShell command execution to create a SQL Database user myapp in the
article Tutorial: Create Azure AD users using Azure AD applications.
Exception calling "ExecuteNonQuery" with "0" argument(s): "'myapp' is not a valid login or you do
not have permission. Cannot find the user 'myapp', because it does not exist, or you do not have
permission."
Exception calling "ExecuteNonQuery" with "0" argument(s): "Principal 'myapp' could not be resolved.
Error message: 'Server identity is not configured. Please follow the steps in "Assign an Azure AD
identity to your server and add Directory Reader permission to your identity"
(https://aka.ms/sqlaadsetup)'"
For the above error, follow the steps to Assign an identity to the Azure SQL logical server and
Assign Directory Readers permission to the SQL logical server identity.
Setting the service principal (Azure AD application) as an Azure AD admin for SQL Database is
supported using the Azure portal, PowerShell, REST API, and CLI commands.
Using an Azure AD application with service principal from another Azure AD tenant will fail when accessing
SQL Database or SQL Managed Instance created in a different tenant. A service principal assigned to this
application must be from the same tenant as the SQL logical server or Managed Instance.
Az.Sql 2.9.0 module or higher is needed when using PowerShell to set up an individual Azure AD application
as Azure AD admin for Azure SQL. Ensure you are upgraded to the latest module.

Next steps
Tutorial: Create Azure AD users using Azure AD applications
Directory Readers role in Azure Active Directory for
Azure SQL

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure Active Directory (Azure AD) has introduced using Azure AD groups to manage role assignments. This
allows for Azure AD roles to be assigned to groups.

NOTE
With Microsoft Graph support for Azure SQL, the Directory Readers role can be replaced with using lower level
permissions. For more information, see User-assigned managed identity in Azure AD for Azure SQL.

When enabling a managed identity for Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse
Analytics, the Azure AD Directory Readers role can be assigned to the identity to allow read access to the
Microsoft Graph API. The managed identity of SQL Database and Azure Synapse is referred to as the server
identity. The managed identity of SQL Managed Instance is referred to as the managed instance identity, and is
automatically assigned when the instance is created. For more information on assigning a server identity to SQL
Database or Azure Synapse, see Enable service principals to create Azure AD users.
The Directory Readers role can be used by the server or instance identity to help:
Create Azure AD logins for SQL Managed Instance
Impersonate Azure AD users in Azure SQL
Migrate SQL Server users that use Windows authentication to SQL Managed Instance with Azure AD
authentication (using the ALTER USER (Transact-SQL) command)
Change the Azure AD admin for SQL Managed Instance
Allow service principals (Applications) to create Azure AD users in Azure SQL

Assigning the Directory Readers role


In order to assign the Directory Readers role to an identity, a user with Global Administrator or Privileged Role
Administrator permissions is needed. Users who often manage or deploy SQL Database, SQL Managed Instance,
or Azure Synapse may not have access to these highly privileged roles. This can often cause complications for
users that create unplanned Azure SQL resources, or need help from highly privileged role members that are
often inaccessible in large organizations.
For SQL Managed Instance, the Directory Readers role must be assigned to the managed instance identity before
you can set up an Azure AD admin for the managed instance.
Assigning the Directory Readers role to the server identity isn't required for SQL Database or Azure Synapse
when setting up an Azure AD admin for the logical server. However, to enable an Azure AD object creation in
SQL Database or Azure Synapse on behalf of an Azure AD application, the Directory Readers role is required.
If the role isn't assigned to the SQL logical server identity, creating Azure AD users in Azure SQL will fail. For
more information, see Azure Active Directory service principal with Azure SQL.

Granting the Directory Readers role to an Azure AD group


You can now have a Global Administrator or Privileged Role Administrator create an Azure AD group and assign
the Directory Readers permission to the group. This will allow access to the Microsoft Graph API for members
of this group. In addition, Azure AD users who are owners of this group are allowed to assign new members for
this group, including identities of the Azure SQL logical servers.
This solution still requires a highly privileged user (Global Administrator or Privileged Role Administrator) to create
a group and assign users as a one-time activity, but the Azure AD group owners will be able to assign additional
members going forward. This eliminates the need to involve a highly privileged user in the future to configure all
SQL Databases, SQL Managed Instances, or Azure Synapse servers in their Azure AD tenant.

Next steps
Tutorial: Assign Directory Readers role to an Azure AD group and manage role assignments
Azure AD-only authentication with Azure SQL

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
(dedicated SQL pools only)
Azure AD-only authentication is a feature within Azure SQL that allows the service to only support Azure AD
authentication, and is supported for Azure SQL Database and Azure SQL Managed Instance.
Azure AD-only authentication is also available for dedicated SQL pools (formerly SQL DW) in standalone
servers. Azure AD-only authentication can be enabled for the Azure Synapse workspace. For more information,
see Azure AD-only authentication with Azure Synapse workspaces.
SQL authentication is disabled when enabling Azure AD-only authentication in the Azure SQL environment,
including connections from SQL server administrators, logins, and users. Only users using Azure AD
authentication are authorized to connect to the server or database.
Azure AD-only authentication can be enabled or disabled using the Azure portal, Azure CLI, PowerShell, or REST
API. Azure AD-only authentication can also be configured during server creation with an Azure Resource
Manager (ARM) template.
For more information on Azure SQL authentication, see Authentication and authorization.

Feature description
When enabling Azure AD-only authentication, SQL authentication is disabled at the server or managed instance
level and prevents any authentication based on any SQL authentication credentials. SQL authentication users
won't be able to connect to the logical server for Azure SQL Database or managed instance, including all of its
databases. Although SQL authentication is disabled, new SQL authentication logins and users can still be created
by Azure AD accounts with proper permissions. Newly created SQL authentication accounts won't be allowed to
connect to the server. Enabling Azure AD-only authentication doesn't remove existing SQL authentication login
and user accounts. The feature only prevents these accounts from connecting to the server, and any database
created for this server.
You can also force servers to be created with Azure AD-only authentication enabled using Azure Policy. For more
information, see Azure Policy for Azure AD-only authentication.

Permissions
Azure AD-only authentication can be enabled or disabled by Azure AD users who are members of high
privileged Azure AD built-in roles, such as Azure subscription Owners, Contributors, and Global Administrators.
Additionally, the role SQL Security Manager can also enable or disable the Azure AD-only authentication feature.
The SQL Server Contributor and SQL Managed Instance Contributor roles won't have permissions to enable or
disable the Azure AD-only authentication feature. This is consistent with the Separation of Duties approach,
where users who can create an Azure SQL server or create an Azure AD admin, can't enable or disable security
features.
Actions required
The following actions are added to the SQL Security Manager role to allow management of the Azure AD-only
authentication feature.
Microsoft.Sql/servers/azureADOnlyAuthentications/*
Microsoft.Sql/servers/administrators/read - required only for users accessing the Azure portal Azure Active
Directory menu
Microsoft.Sql/managedInstances/azureADOnlyAuthentications/*
Microsoft.Sql/managedInstances/read
The above actions can also be added to a custom role to manage Azure AD-only authentication. For more
information, see Create and assign a custom role in Azure Active Directory.

Managing Azure AD-only authentication using APIs


IMPORTANT
The Azure AD admin must be set before enabling Azure AD-only authentication.

Azure CLI
PowerShell
REST API
ARM Template

You must have Azure CLI version 2.14.2 or higher.


name corresponds to the prefix of the server or instance name (for example, myserver) and resource-group
corresponds to the resource group the server belongs to (for example, myresource).

Azure SQL Database


For more information, see az sql server ad-only-auth.
Enable or disable in SQL Database
Enable

az sql server ad-only-auth enable --resource-group myresource --name myserver

Disable

az sql server ad-only-auth disable --resource-group myresource --name myserver

Check the status in SQL Database

az sql server ad-only-auth get --resource-group myresource --name myserver

Azure SQL Managed Instance


For more information, see az sql mi ad-only-auth.
Enable

az sql mi ad-only-auth enable --resource-group myresource --name myserver

Disable
az sql mi ad-only-auth disable --resource-group myresource --name myserver

Check the status in SQL Managed Instance

az sql mi ad-only-auth get --resource-group myresource --name myserver

Checking Azure AD-only authentication using T-SQL


The SERVERPROPERTY IsExternalAuthenticationOnly has been added to check if Azure AD-only authentication is
enabled for your server or managed instance. 1 indicates that the feature is enabled, and 0 indicates that the
feature is disabled.

SELECT SERVERPROPERTY('IsExternalAuthenticationOnly')

Remarks
A SQL Server Contributor can set or remove an Azure AD admin, but can't set the Azure Active Directory
authentication only setting. The SQL Security Manager can't set or remove an Azure AD admin, but can set
the Azure Active Directory authentication only setting. Only accounts with higher Azure RBAC roles or
custom roles that contain both permissions can set or remove an Azure AD admin and set the Azure Active
Directory authentication only setting. One such role is the Contributor role.
After enabling or disabling Azure Active Directory authentication only in the Azure portal, an Activity
log entry can be seen in the SQL server menu.

The Azure Active Directory authentication only setting can only be enabled or disabled by users with
the right permissions if the Azure Active Directory admin is specified. If the Azure AD admin isn't set, the
Azure Active Directory authentication only setting remains inactive and cannot be enabled or disabled.
Using APIs to enable Azure AD-only authentication will also fail if the Azure AD admin hasn't been set.
Changing an Azure AD admin when Azure AD-only authentication is enabled is supported for users with the
appropriate permissions.
Changing an Azure AD admin and enabling or disabling Azure AD-only authentication is allowed in the Azure
portal for users with the appropriate permissions. Both operations can be completed with one Save in the
Azure portal. The Azure AD admin must be set in order to enable Azure AD-only authentication.
Removing an Azure AD admin when the Azure AD-only authentication feature is enabled isn't supported.
Using an API to remove an Azure AD admin will fail if Azure AD-only authentication is enabled.
If the Azure Active Directory authentication only setting is enabled, the Remove admin button
is inactive in the Azure portal.
Removing an Azure AD admin and disabling the Azure Active Directory authentication only setting is
allowed, but requires the right user permission to complete the operations. Both operations can be
completed with one Save in the Azure portal.
Azure AD users with proper permissions can impersonate existing SQL users.
Impersonation continues working between SQL authentication users even when the Azure AD-only
authentication feature is enabled.
Limitations for Azure AD-only authentication in SQL Database
When Azure AD-only authentication is enabled for SQL Database, the following features aren't supported:
Azure SQL Database server roles are supported for Azure AD server principals, but not if the Azure AD login
is a group.
Elastic jobs
SQL Data Sync
Change data capture (CDC) - If you create a database in Azure SQL Database as an Azure AD user and enable
change data capture on it, a SQL user will not be able to disable or make changes to CDC artifacts. However,
another Azure AD user will be able to enable or disable CDC on the same database. Similarly, if you create an
Azure SQL Database as a SQL user, enabling or disabling CDC as an Azure AD user won't work
Transactional replication - Since SQL authentication is required for connectivity between replication
participants, when Azure AD-only authentication is enabled, transactional replication is not supported for
SQL Database for scenarios where transactional replication is used to push changes made in an Azure SQL
Managed Instance, on-premises SQL Server, or an Azure VM SQL Server instance to a database in Azure SQL
Database
SQL Insights (preview)
EXEC AS statement for Azure AD group member accounts
Limitations for Azure AD-only authentication in Managed Instance
When Azure AD-only authentication is enabled for Managed Instance, the following features aren't supported:
Transactional replication
SQL Agent Jobs in Managed Instance support Azure AD-only authentication. However, an Azure AD user
who is a member of an Azure AD group that has access to the managed instance cannot own SQL Agent Jobs.
SQL Insights (preview)
EXEC AS statement for Azure AD group member accounts
For more limitations, see T-SQL differences between SQL Server & Azure SQL Managed Instance.

Next steps
Tutorial: Enable Azure Active Directory only authentication with Azure SQL
Create server with Azure AD-only authentication enabled in Azure SQL
Azure Policy for Azure Active Directory only
authentication with Azure SQL

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure Policy can enforce the creation of an Azure SQL Database or Azure SQL Managed Instance with Azure AD-
only authentication enabled during provisioning. With this policy in place, any attempt to create a logical server
or managed instance in Azure will fail if it isn't created with Azure AD-only authentication enabled.
The Azure Policy can be applied to the whole Azure subscription, or just within a resource group.
Two new built-in policies have been introduced in Azure Policy:
Azure SQL Database should have Azure Active Directory Only Authentication enabled
Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled
For more information on Azure Policy, see What is Azure Policy? and Azure Policy definition structure.

Permissions
For an overview of the permissions needed to manage Azure Policy, see Azure RBAC permissions in Azure
Policy.
Actions
If you're using a custom role to manage Azure Policy, the following Actions are needed.
*/read
Microsoft.Authorization/policyassignments/*
Microsoft.Authorization/policydefinitions/*
Microsoft.Authorization/policyexemptions/*
Microsoft.Authorization/policysetdefinitions/*
Microsoft.PolicyInsights/*
For more information on custom roles, see Azure custom roles.

Manage Azure Policy for Azure AD-only authentication


The Azure AD-only authentication policies can be managed by going to the Azure portal, and searching for the
Policy service. Under Definitions , search for Azure Active Directory only authentication.

For a guide on how to add an Azure Policy for Azure AD-only authentication, see Using Azure Policy to enforce
Azure Active Directory only authentication with Azure SQL.
There are three effects for these policies:
Audit - The default setting, and will only capture an audit report in the Azure Policy activity logs
Deny - Prevents logical server or managed instance creation without Azure AD-only authentication enabled
Disabled - Will disable the policy, and won't restrict users from creating a logical server or managed
instance without Azure AD-only authentication enabled
If the Azure Policy for Azure AD-only authentication is set to Deny , Azure SQL logical server or managed
instance creation will fail. The details of this failure will be recorded in the Activity log of the resource group.

Policy compliance
You can view the Compliance setting under the Policy service to see the compliance state. The Compliance
state will tell you whether the server or managed instance is currently in compliance with having Azure AD-
only authentication enabled.
The Azure Policy can prevent a new logical server or managed instance from being created without having
Azure AD-only authentication enabled, but the feature can be changed after server or managed instance
creation. If a user has disabled Azure AD-only authentication after the server or managed instance was created,
the compliance state will be Non-compliant if the Azure Policy is set to Deny .

Limitations
Azure Policy enforces Azure AD-only authentication during logical server or managed instance creation.
Once the server is created, authorized Azure AD users with special roles (for example, SQL Security Manager)
can disable the Azure AD-only authentication feature. The Azure Policy allows it, but in this case, the server or
managed instance will be listed in the compliance report as Non-compliant and the report will indicate the
server or managed instance name.
For more remarks, known issues, and permissions needed, see Azure AD-only authentication.

Next steps
Using Azure Policy to enforce Azure Active Directory only authentication with Azure SQL
User-assigned managed identity in Azure AD for
Azure SQL

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure Active Directory (AD) supports two types of managed identities: System-assigned managed identity (SMI)
and user-assigned managed identity (UMI). For more information, see Managed identity types.
A system-assigned managed identity is automatically assigned to a managed instance when it is created. When
using Azure AD authentication with Azure SQL Managed Instance, a managed identity must be assigned to the
server identity. Previously, only a system-assigned managed identity could be assigned to the Managed Instance
or SQL Database server identity. With support for user-assigned managed identity, the UMI can be assigned to
Azure SQL Managed Instance or Azure SQL Database as the instance or server identity. This feature is now
supported for SQL Database.
In addition to using UMI and SMI as the server or instance identity, the UMI and SMI can be used to access the
database using the SQL connection string option Authentication=Active Directory Managed Identity . For more
information, see Using Azure Active Directory authentication with SqlClient. A SQL user mapped to the managed
identity must exist in the target database.
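
As a minimal sketch (the identity name below is a hypothetical placeholder), a user with sufficient permissions could create that mapping and grant access in the target database:

-- the bracketed name is the display name of the user-assigned or system-assigned managed identity
CREATE USER [my-umi-name] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [my-umi-name];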

Benefits of using user-assigned managed identities


There are several benefits of using UMI as a server identity.
User flexibility to create and maintain their own user-assigned managed identities for a given tenant. UMIs can
be used as server identities for Azure SQL. A UMI is managed by the user, whereas an SMI is uniquely defined
per server and assigned by the system.
In the past, the Azure AD Directory Readers role was required when using SMI as the server or instance
identity. With the introduction of accessing Azure AD using Microsoft Graph, users concerned with giving
high-level permissions such as the Directory Readers role to the SMI or UMI can alternatively give lower-
level permissions so that the server or instance identity can access Microsoft Graph. For more information on
providing Directory Readers permissions and its function, see Directory Readers role in Azure Active
Directory for Azure SQL.
Users can choose a specific UMI to be the server or instance identity for all SQL Databases or Managed
Instances in the tenant, or have multiple UMIs assigned to different servers or instances. For example,
different UMIs can be used in different servers representing different features, such as a UMI serving
transparent data encryption in one server, and a UMI serving Azure AD authentication in another server.
UMI is needed to create an Azure SQL logical server configured with transparent data encryption (TDE) with
customer-managed keys (CMK). For more information, see Customer-managed transparent data encryption
using user-assigned managed identity.
User-assigned managed identities are independent from logical servers or managed instances. When a
logical server or instance is deleted, the system-assigned managed identity is deleted as well. User-assigned
managed identities aren't deleted with the server.
NOTE
The instance identity (SMI or UMI) must be enabled to allow support for Azure AD authentication in Managed Instance.
For SQL Database, enabling the server identity is optional and required only if an Azure AD service principal (Azure AD
application) oversees creating and managing Azure AD users, groups, or application in the server. For more information,
see Azure Active Directory service principal with Azure SQL.

Creating a user-assigned managed identity


For information on how to create a user-assigned managed identity, see Manage user-assigned managed
identities.

Permissions
Once the UMI is created, some permissions are needed to allow the UMI to read from Microsoft Graph as the
server identity. Grant the permissions below, or give the UMI the Directory Readers role. These permissions
should be granted before provisioning an Azure SQL logical server or managed instance. Once the permissions
are granted to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a
server identity.

IMPORTANT
Only a Global Administrator or Privileged Role Administrator can grant these permissions.

User.Read.All - allows access to Azure AD user information


GroupMember.Read.All – allows access to Azure AD group information
Application.Read.ALL – allows access to Azure AD service principal (applications) information
Grant permissions
The following is a sample PowerShell script that will grant the necessary permissions for UMI or SMI. This
sample will assign permissions to the UMI umiservertest . To execute the script, you must sign in as a user with
a "Global Administrator" or "Privileged Role Administrator" role, and have the following Microsoft Graph
permissions:
User.Read.All
GroupMember.Read.All
Application.Read.ALL
# Script to assign permissions to the UMI "umiservertest"

import-module AzureAD
$tenantId = '<tenantId>' # Your Azure AD tenant ID

Connect-AzureAD -TenantID $tenantId


# Login as a user with a "Global Administrator" or "Privileged Role Administrator" role
# Script to assign permissions to existing UMI
# The following Microsoft Graph permissions are required:
# User.Read.All
# GroupMember.Read.All
# Application.Read.ALL

# Search for Microsoft Graph


$AAD_SP = Get-AzureADServicePrincipal -SearchString "Microsoft Graph";
$AAD_SP
# Use Microsoft Graph; in this example, this is the first element $AAD_SP[0]

#Output

#ObjectId AppId DisplayName


#-------- ----- -----------
#47d73278-e43c-4cc2-a606-c500b66883ef 00000003-0000-0000-c000-000000000000 Microsoft Graph
#44e2d3f6-97c3-4bc7-9ccd-e26746638b6d 0bf30f3b-4a52-48df-9a82-234910c4a086 Microsoft Graph #Change

$MSIName = "<managedIdentity>"; # Name of your user-assigned or system-assigned managed identity


$MSI = Get-AzureADServicePrincipal -SearchString $MSIName
if($MSI.Count -gt 1)
{
Write-Output "More than 1 principal found, please find your principal and copy the right object ID. Now use
the syntax $MSI = Get-AzureADServicePrincipal -ObjectId <your_object_id>"

# Choose the right UMI or SMI

Exit
}

# If you have more UMIs with similar names, you have to use the proper $MSI[ ]array number

# Assign the app roles

$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "User.Read.All"}


New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $AAD_SP.ObjectId[0] -Id $AAD_AppRole.Id
$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "GroupMember.Read.All"}
New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $AAD_SP.ObjectId[0] -Id $AAD_AppRole.Id
$AAD_AppRole = $AAD_SP.AppRoles | Where-Object {$_.Value -eq "Application.Read.All"}
New-AzureADServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $AAD_SP.ObjectId[0] -Id $AAD_AppRole.Id

In the final steps of the script, if you have more UMIs with similar names, you have to use the proper
$MSI[ ]array number, for example, $AAD_SP.ObjectId[0] .

Check permissions for the user-assigned managed identity


To check permissions for a UMI, go to the Azure portal. In the Azure Active Directory resource, go to
Enterprise applications. Select All Applications for the Application type, and search for the UMI that was
created.
Select the UMI, and go to the Permissions settings under Security.

Managing a managed identity for a server or instance


To create an Azure SQL logical server with a user-assigned managed identity, see the following guide: Create an
Azure SQL logical server using a user-assigned managed identity
Set managed identities in the Azure portal
To set the identity for the SQL server or SQL managed instance in the Azure portal:
1. Go to your SQL server or SQL managed instance resource.
2. Under Security , select the Identity setting.
3. Under User assigned managed identity , select Add .
4. Select the desired Subscription and then under User assigned managed identities select the desired
user assigned managed identity from the selected subscription. Then select the Select button.
Create or set a managed identity using the Azure CLI
The Azure CLI 2.26.0 (or higher) is required to run these commands with UMI.
Azure SQL Database
To provision a new server with UMI, use the az sql server create command.
To obtain the UMI server information, use the az sql server show command.
To update the UMI server setting, use the az sql server update command.
Azure SQL Managed Instance
To provision a new managed instance with UMI, use the az sql mi create command.
To obtain the UMI managed instance information, use the az sql mi show command.
To update the UMI managed instance setting, use the az sql mi update command.
Create or set a managed identity using PowerShell
Az.Sql module 3.4 or greater is required when using PowerShell with UMI.
Azure SQL Database
To provision a new server with UMI, use the New-AzSqlServer command.
To obtain the UMI server information, use the Get-AzSqlServer command.
To update the UMI server setting, use the Set-AzSqlServer command.
Azure SQL Managed Instance
To provision a new managed instance with UMI, use the New-AzSqlInstance command.
To obtain the UMI managed instance information, use the Get-AzSqlInstance command.
To update the UMI managed instance setting, use the Set-AzSqlInstance command.
Create or set a managed identity using REST API
The REST API provisioning script used in Creating an Azure SQL logical server using a user-assigned managed
identity or Create an Azure SQL Managed Instance with a user-assigned managed identity can also be used to
update the UMI settings for the server. Rerun the provisioning command in the guide with the updated user-
assigned managed identity property that you want to update.
Create or set a managed identity using an ARM template
The ARM template used in Creating an Azure SQL logical server using a user-assigned managed identity or
Create an Azure SQL Managed Instance with a user-assigned managed identity can also be used to update the
UMI settings for the server. Rerun the provisioning command in the guide with the updated user-assigned
managed identity property that you want to update.

NOTE
You can't change the SQL server administrator or password, nor the Azure AD admin by re-running the provisioning
command for the ARM template.

Limitations and known issues


After a Managed Instance is created, the Active Director y admin blade in the Azure portal shows a
warning:
Managed Instance needs permissions to access Azure Active Directory. Click here to grant "Read"
permissions to your Managed Instance.
If the user-assigned managed identity was given the appropriate permissions discussed in the above
Permissions section, this warning can be ignored.
If a system-assigned or user-assigned managed identity is used as the server or instance identity, deleting
the identity will leave the server or instance unable to access Microsoft Graph. Azure AD authentication
and other functions will fail. To restore Azure AD functionality, a new SMI or UMI must be assigned to the
server with appropriate permissions.
Permissions to access Microsoft Graph using UMI or SMI can only be granted using PowerShell. These
permissions can't be granted using the Azure portal.

Next steps
Create an Azure SQL logical server using a user-assigned managed identity
Create an Azure SQL Managed Instance with a user-assigned managed identity
Transparent data encryption for SQL Database, SQL
Managed Instance, and Azure Synapse Analytics

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure
Synapse Analytics against the threat of malicious offline activity by encrypting data at rest. It performs real-time
encryption and decryption of the database, associated backups, and transaction log files at rest without
requiring changes to the application. By default, TDE is enabled for all newly deployed Azure SQL Databases and
must be manually enabled for older databases of Azure SQL Database. For Azure SQL Managed Instance, TDE is
enabled at the instance level and for newly created databases. TDE must be manually enabled for Azure Synapse
Analytics.

NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.

NOTE
Some items considered customer content, such as table names, object names, and index names, may be transmitted in
log files for support and troubleshooting by Microsoft.

TDE performs real-time I/O encryption and decryption of the data at the page level. Each page is decrypted
when it's read into memory and then encrypted before being written to disk. TDE encrypts the storage of an
entire database by using a symmetric key called the Database Encryption Key (DEK). On database startup, the
encrypted DEK is decrypted and then used for decryption and re-encryption of the database files in the SQL
Server database engine process. The DEK is protected by the TDE protector. The TDE protector is either a service-
managed certificate (service-managed transparent data encryption) or an asymmetric key stored in Azure Key
Vault (customer-managed transparent data encryption).
For Azure SQL Database and Azure Synapse, the TDE protector is set at the server level and is inherited by all
databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set at the instance
level and it is inherited by all encrypted databases on that instance. The term server refers both to server and
instance throughout this document, unless stated differently.

IMPORTANT
All newly created databases in SQL Database are encrypted by default by using service-managed transparent data
encryption. Existing SQL databases created before May 2017 and SQL databases created through restore, geo-replication,
and database copy are not encrypted by default. Existing SQL Managed Instance databases created before February 2019
are not encrypted by default. SQL Managed Instance databases created through restore inherit encryption status from
the source. To restore an existing TDE-encrypted database, the required TDE certificate must first be imported into the
SQL Managed Instance.
NOTE
TDE cannot be used to encrypt system databases, such as the master database, in Azure SQL Database and Azure SQL
Managed Instance. The master database contains objects that are needed to perform the TDE operations on the user
databases. It is recommended to not store any sensitive data in the system databases. Infrastructure encryption is now
being rolled out which encrypts the system databases including master.

Service-managed transparent data encryption


In Azure, the default setting for TDE is that the DEK is protected by a built-in server certificate. The built-in server
certificate is unique for each server and the encryption algorithm used is AES 256. If a database is in a geo-
replication relationship, both the primary and geo-secondary databases are protected by the primary database's
parent server key. If two databases are connected to the same server, they also share the same built-in
certificate. Microsoft automatically rotates these certificates in compliance with the internal security policy and
the root key is protected by a Microsoft internal secret store. Customers can verify SQL Database and SQL
Managed Instance compliance with internal security policies in independent third-party audit reports available
on the Microsoft Trust Center.
Microsoft also seamlessly moves and manages the keys as needed for geo-replication and restores.

Customer-managed transparent data encryption - Bring Your Own Key


Customer-managed TDE is also referred to as Bring Your Own Key (BYOK) support for TDE. In this scenario, the
TDE Protector that encrypts the DEK is a customer-managed asymmetric key, which is stored in a customer-
owned and managed Azure Key Vault (Azure's cloud-based external key management system) and never leaves
the key vault. The TDE Protector can be generated by the key vault or transferred to the key vault from an on-
premises hardware security module (HSM) device. SQL Database, SQL Managed Instance, and Azure Synapse
need to be granted permissions to the customer-owned key vault to decrypt and encrypt the DEK. If the server's
permissions to the key vault are revoked, a database becomes inaccessible and all data remains encrypted.
With TDE with Azure Key Vault integration, users can control key management tasks including key rotations, key
vault permissions, key backups, and enable auditing/reporting on all TDE protectors using Azure Key Vault
functionality. Key Vault provides central key management, leverages tightly monitored HSMs, and enables
separation of duties between management of keys and data to help meet compliance with security policies. To
learn more about BYOK for Azure SQL Database and Azure Synapse, see Transparent data encryption with Azure
Key Vault integration.
To start using TDE with Azure Key Vault integration, see the how-to guide Turn on transparent data encryption
by using your own key from Key Vault.

Move a transparent data encryption-protected database


You don't need to decrypt databases for operations within Azure. The TDE settings on the source database or
primary database are transparently inherited on the target. Operations that are included involve:
Geo-restore
Self-service point-in-time restore
Restoration of a deleted database
Active geo-replication
Creation of a database copy
Restore of backup file to Azure SQL Managed Instance
IMPORTANT
Taking manual COPY-ONLY backup of a database encrypted by service-managed TDE is not supported in Azure SQL
Managed Instance, since the certificate used for encryption is not accessible. Use point-in-time-restore feature to move
this type of database to another SQL Managed Instance, or switch to customer-managed key.

When you export a TDE-protected database, the exported content of the database isn't encrypted. This exported
content is stored in unencrypted BACPAC files. Be sure to protect the BACPAC files appropriately and enable TDE
after import of the new database is finished.
For example, if the BACPAC file is exported from a SQL Server instance, the imported content of the new
database isn't automatically encrypted. Likewise, if the BACPAC file is imported to a SQL Server instance, the
new database also isn't automatically encrypted.
The one exception is when you export a database to and from SQL Database. TDE is enabled on the new
database, but the BACPAC file itself still isn't encrypted.

Manage transparent data encryption


The Azure portal
PowerShell
Transact-SQL
REST API

Manage TDE in the Azure portal.


To configure TDE through the Azure portal, you must be connected as the Azure Owner, Contributor, or SQL
Security Manager.
Enable and disable TDE at the database level. For Azure SQL Managed Instance, use Transact-SQL (T-SQL) to
turn TDE on and off on a database. For Azure SQL Database and Azure Synapse, you can manage TDE for the
database in the Azure portal after you've signed in with the Azure Administrator or Contributor account. Find
the TDE settings under your user database. By default, service-managed transparent data encryption is used. A
TDE certificate is automatically generated for the server that contains the database.

You set the TDE master key, known as the TDE protector, at the server or instance level. To use TDE with BYOK
support and protect your databases with a key from Key Vault, open the TDE settings under your server.
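For reference, the TDE state of an individual database can also be checked and toggled from PowerShell. A minimal sketch, assuming the Az.Sql module and placeholder resource names:

```powershell
# Check the current TDE state of a database (placeholder names).
Get-AzSqlDatabaseTransparentDataEncryption -ResourceGroupName "sql-rg" `
    -ServerName "sql-server-01" -DatabaseName "mydb"

# Enable (or disable) TDE on that database.
Set-AzSqlDatabaseTransparentDataEncryption -ResourceGroupName "sql-rg" `
    -ServerName "sql-server-01" -DatabaseName "mydb" -State "Enabled"
```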
Azure SQL transparent data encryption with
customer-managed key

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL transparent data encryption (TDE) with customer-managed key enables the Bring Your Own Key (BYOK)
scenario for data protection at rest, and allows organizations to implement separation of duties in the
management of keys and data. With customer-managed TDE, the customer is responsible for, and in full control of,
key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of
operations on keys.
In this scenario, the key used for encryption of the Database Encryption Key (DEK), called TDE protector, is a
customer-managed asymmetric key stored in a customer-owned and customer-managed Azure Key Vault (AKV),
a cloud-based external key management system. Key Vault is highly available and scalable secure storage for
RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs).
It doesn't allow direct access to a stored key, but provides services of encryption/decryption using the key to the
authorized entities. The key can be generated by the key vault, imported, or transferred to the key vault from an
on-prem HSM device.
For Azure SQL Database and Azure Synapse Analytics, the TDE protector is set at the server level and is inherited
by all encrypted databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set
at the instance level and is inherited by all encrypted databases on that instance. The term server refers both to
a server in SQL Database and Azure Synapse and to a managed instance in SQL Managed Instance throughout
this document, unless stated differently.

NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on transparent data encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.

IMPORTANT
For those using service-managed TDE who would like to start using customer-managed TDE, data remains encrypted
during the process of switching over, and there is no downtime nor re-encryption of the database files. Switching from a
service-managed key to a customer-managed key only requires re-encryption of the DEK, which is a fast and online
operation.

NOTE
To provide Azure SQL customers with two layers of encryption of data at rest, infrastructure encryption (using the AES-256
encryption algorithm) with platform-managed keys is being rolled out. This provides an additional layer of encryption at
rest along with TDE with customer-managed keys, which is already available. For Azure SQL Database and Managed
Instance, all databases, including the master database and other system databases, will be encrypted when infrastructure
encryption is turned on. At this time, customers must request access to this capability. If you are interested in this
capability, contact AzureSQLDoubleEncryptionAtRest@service.microsoft.com.
Benefits of the customer-managed TDE
Customer-managed TDE provides the following benefits to the customer:
Full and granular control over usage and management of the TDE protector;
Transparency of the TDE protector usage;
Ability to implement separation of duties in the management of keys and data within the organization;
The Key Vault administrator can revoke key access permissions to make an encrypted database inaccessible;
Central management of keys in AKV;
Greater trust from your end customers, since AKV is designed such that Microsoft can't see or extract
encryption keys;

How customer-managed TDE works

In order for the Azure SQL server to use a TDE protector stored in AKV for encryption of the DEK, the key vault
administrator needs to give the following access rights to the server using its unique Azure Active Directory
(Azure AD) identity:
get - for retrieving the public part and properties of the key in the Key Vault
wrapKey - to be able to protect (encrypt) DEK
unwrapKey - to be able to unprotect (decrypt) DEK
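A minimal PowerShell sketch of granting these three key permissions to the server's identity, assuming the key vault uses the access policy permission model and placeholder resource names:

```powershell
# Object ID of the server's Azure AD identity (system-assigned here; use the UMI's principal ID if one is set as primary).
$serverIdentity = (Get-AzSqlServer -ResourceGroupName "sql-rg" -ServerName "sql-server-01").Identity.PrincipalId

# Grant only the key permissions TDE needs: get, wrapKey, unwrapKey.
Set-AzKeyVaultAccessPolicy -VaultName "tde-keyvault" `
    -ObjectId $serverIdentity `
    -PermissionsToKeys get, wrapKey, unwrapKey
```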
The key vault administrator can also enable logging of key vault audit events, so they can be audited later.
When the server is configured to use a TDE protector from AKV, the server sends the DEK of each TDE-enabled
database to the key vault for encryption. The key vault returns the encrypted DEK, which is then stored in the user
database.
When needed, the server sends the protected DEK to the key vault for decryption.
Auditors can use Azure Monitor to review key vault AuditEvent logs, if logging is enabled.
NOTE
It may take around 10 minutes for any permission changes to take effect for the key vault. This includes revoking access
permissions to the TDE protector in AKV, and users within this time frame may still have access permissions.

Requirements for configuring customer-managed TDE


Requirements for configuring AKV
Key vault and SQL Database/managed instance must belong to the same Azure Active Directory tenant.
Cross-tenant key vault and server interactions aren't supported. To move resources afterwards, TDE with
AKV will have to be reconfigured. Learn more about moving resources.
Soft-delete and purge protection features must be enabled on the key vault to protect from data loss due
to accidental key (or key vault) deletion.
Grant the server or managed instance access to the key vault (get, wrapKey, unwrapKey) using its Azure
Active Directory identity. The server identity can be a system-assigned managed identity or a user-
assigned managed identity assigned to the server. When using the Azure portal, the Azure AD identity
gets automatically created when the server is created. When using PowerShell or Azure CLI, the Azure AD
identity must be explicitly created and should be verified. See Configure TDE with BYOK and Configure
TDE with BYOK for SQL Managed Instance for detailed step-by-step instructions when using PowerShell.
Depending on the permission model of the key vault (access policy or Azure RBAC), key vault access
can be granted either by creating an access policy on the key vault, or by creating a new Azure RBAC
role assignment with the role Key Vault Crypto Service Encryption User.
When using firewall with AKV, you must enable option Allow trusted Microsoft services to bypass the
firewall.
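Where the key vault uses the Azure RBAC permission model instead of access policies, the equivalent grant is a role assignment. A minimal sketch with placeholder names:

```powershell
# Grant the built-in crypto role to the server's identity, scoped to the key vault.
$vault          = Get-AzKeyVault -VaultName "tde-keyvault"
$serverIdentity = (Get-AzSqlServer -ResourceGroupName "sql-rg" -ServerName "sql-server-01").Identity.PrincipalId

New-AzRoleAssignment -ObjectId $serverIdentity `
    -RoleDefinitionName "Key Vault Crypto Service Encryption User" `
    -Scope $vault.ResourceId
```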
Enable soft-delete and purge protection for AKV

IMPORTANT
Both soft-delete and purge protection must be enabled on the key vault when configuring customer-managed TDE
on a new or existing server or managed instance.

Soft-delete and purge protection are important features of Azure Key Vault that allow recovery of deleted vaults
and deleted key vault objects, reducing the risk of a user accidentally or maliciously deleting a key or a key vault.
Soft-deleted resources are retained for 90 days, unless recovered or purged by the customer. The recover
and purge actions have their own permissions associated in a key vault access policy. The soft-delete
feature is on by default for new key vaults and can also be enabled using the Azure portal, PowerShell or
Azure CLI.
Purge protection can be turned on using Azure CLI or PowerShell. When purge protection is enabled, a
vault or an object in the deleted state cannot be purged until the retention period has passed. The default
retention period is 90 days, but is configurable from 7 to 90 days through the Azure portal.
Azure SQL requires soft-delete and purge protection to be enabled on the key vault containing the
encryption key being used as the TDE Protector for the server or managed instance. This helps prevent
the scenario of accidental or malicious key vault or key deletion that can lead to the database going into
Inaccessible state.
When configuring the TDE Protector on an existing server or during server creation, Azure SQL validates
that the key vault being used has soft-delete and purge protection turned on. If soft-delete and purge
protection are not enabled on the key vault, the TDE Protector setup fails with an error. In this case, soft-
delete and purge protection must first be enabled on the key vault and then the TDE Protector setup
should be performed.
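A minimal PowerShell sketch of checking and enabling these protections on a vault, assuming the Az.KeyVault module and placeholder names (purge protection cannot be reverted once enabled):

```powershell
# Verify the current soft-delete and purge-protection settings of the vault.
Get-AzKeyVault -VaultName "tde-keyvault" |
    Select-Object VaultName, EnableSoftDelete, EnablePurgeProtection

# Turn on purge protection for the vault.
Update-AzKeyVault -ResourceGroupName "kv-rg" -VaultName "tde-keyvault" -EnablePurgeProtection
```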
Requirements for configuring TDE protector
The TDE protector can only be an asymmetric RSA or RSA HSM key. The supported key lengths are 2048
bits and 3072 bits.
The key activation date (if set) must be a date and time in the past. Expiration date (if set) must be a future
date and time.
The key must be in the Enabled state.
If you're importing an existing key into the key vault, make sure to provide it in one of the supported file formats
(.pfx, .byok, or .backup).
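A minimal sketch of generating a key that meets these requirements directly in the vault (placeholder names; an existing key could equally be imported):

```powershell
# Create a 2048-bit, software-protected RSA key to use as the TDE protector.
Add-AzKeyVaultKey -VaultName "tde-keyvault" -Name "tde-protector" `
    -Destination "Software" -Size 2048
```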

NOTE
Azure SQL now supports using an RSA key stored in a Managed HSM as the TDE protector. Azure Key Vault Managed HSM is
a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard
cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. Learn more about Managed
HSMs.

NOTE
An issue with Thales CipherTrust Manager versions prior to v2.8.0 prevents keys newly imported into Azure Key Vault
from being used with Azure SQL Database or Azure SQL Managed Instance for customer-managed TDE scenarios. More
details about this issue can be found here. For such cases, please wait 24 hours after importing the key into key vault to
begin using it as TDE Protector for the server or managed instance. This issue has been resolved in Thales CipherTrust
Manager v2.8.0.

Recommendations when configuring customer-managed TDE


Recommendations when configuring AKV
Associate at most 500 General Purpose or 200 Business Critical databases in total with a key vault in a
single subscription to ensure high availability when server accesses the TDE protector in the key vault.
These figures are based on the experience and documented in the key vault service limits. The intention
here is to prevent issues after server failover, as it will trigger as many key operations against the vault as
there are databases in that server.
Set a resource lock on the key vault to control who can delete this critical resource and prevent accidental
or unauthorized deletion. Learn more about resource locks.
Enable auditing and reporting on all encryption keys: Key vault provides logs that are easy to inject into
other security information and event management tools. Operations Management Suite Log Analytics is
one example of a service that is already integrated.
Link each server with two key vaults that reside in different regions and hold the same key material, to
ensure high availability of encrypted databases. Mark the key from one of the key vaults as the TDE
protector. The system will automatically switch to the key vault in the second region with the same key
material, if there's an outage affecting the key vault in the first region.
NOTE
To allow greater flexibility in configuring customer-managed TDE, Azure SQL Database server and Managed Instance in
one region can now be linked to key vault in any other region. The server and key vault do not have to be co-located in
the same region.

Recommendations when configuring TDE protector


Keep a copy of the TDE protector in a secure place, or escrow it to an escrow service.
If the key is generated in the key vault, create a key backup before using the key in AKV for the first time.
Backup can be restored to an Azure Key Vault only. Learn more about the Backup-AzKeyVaultKey
command.
Create a new backup whenever any changes are made to the key (for example, key attributes, tags, ACLs).
Keep previous versions of the key in the key vault when rotating keys, so older database backups can
be restored. When the TDE protector is changed for a database, old backups of the database are not
updated to use the latest TDE protector. At restore time, each backup needs the TDE protector it was
encrypted with at creation time. Key rotations can be performed following the instructions at Rotate the
transparent data encryption Protector Using PowerShell.
Keep all previously used keys in AKV even after switching to service-managed keys. It ensures database
backups can be restored with the TDE protectors stored in AKV. TDE protectors created with Azure Key
Vault have to be maintained until all remaining stored backups have been created with service-managed
keys. Make recoverable backup copies of these keys using Backup-AzKeyVaultKey.
To remove a potentially compromised key during a security incident without the risk of data loss, follow
the steps in Remove a potentially compromised key.
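A minimal PowerShell sketch of a protector rotation, assuming the new key version already exists in the vault and using placeholder names:

```powershell
# Register the new key version with the server, then mark it as the TDE protector.
$keyId = "https://tde-keyvault.vault.azure.net/keys/tde-protector/<new-key-version>"

Add-AzSqlServerKeyVaultKey -ResourceGroupName "sql-rg" -ServerName "sql-server-01" -KeyId $keyId
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "sql-rg" -ServerName "sql-server-01" `
    -Type AzureKeyVault -KeyId $keyId
```

The previous key versions stay in the vault, so older backups encrypted with them remain restorable.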

Inaccessible TDE protector


When TDE is configured to use a customer-managed key, continuous access to the TDE protector is required for
the database to stay online. If the server loses access to the customer-managed TDE protector in AKV, in up to 10
minutes a database will start denying all connections with the corresponding error message and change its state
to Inaccessible. The only action allowed on a database in the Inaccessible state is deleting it.

NOTE
If the database is inaccessible due to an intermittent networking outage, there is no action required and the databases will
come back online automatically.

After access to the key is restored, taking the database back online requires extra time and steps, which may vary
based on the time elapsed without access to the key and the size of the data in the database:
If key access is restored within 30 minutes, the database will autoheal within the next hour.
If key access is restored after more than 30 minutes, autoheal is not possible and bringing the database
back requires extra steps in the portal, which can take a significant amount of time depending on the
size of the database. Once the database is back online, previously configured server-level settings such as
failover group configuration, point-in-time restore history, and tags will be lost. Therefore, it's
recommended to implement a notification system that allows you to identify and address the underlying
key access issues within 30 minutes.
Accidental TDE protector access revocation
It may happen that someone with sufficient access rights to the key vault accidentally disables server access to
the key by:
revoking the key vault's get, wrapKey, unwrapKey permissions from the server
deleting the key
deleting the key vault
changing the key vault's firewall rules
deleting the managed identity of the server in Azure Active Directory
Learn more about the common causes for database to become inaccessible.

Monitoring of the customer-managed TDE


To monitor database state and to enable alerting for loss of TDE protector access, configure the following Azure
features:
Azure Resource Health. An inaccessible database that has lost access to the TDE protector will show as
"Unavailable" after the first connection to the database has been denied.
Activity Log. When access to the TDE protector in the customer-managed key vault fails, entries are added to
the activity log. Creating alerts for these events will enable you to reinstate access as soon as possible.
Action Groups can be defined to send you notifications and alerts based on your preferences, for example,
Email/SMS/Push/Voice, Logic App, Webhook, ITSM, or Automation Runbook.

Database backup and restore with customer-managed TDE


Once a database is encrypted with TDE using a key from Key Vault, any newly generated backups are also
encrypted with the same TDE protector. When the TDE protector is changed, old backups of the database are
not updated to use the latest TDE protector.
To restore a backup encrypted with a TDE protector from Key Vault, make sure that the key material is available
to the target server. Therefore, we recommend that you keep all the old versions of the TDE protector in key
vault, so database backups can be restored.

IMPORTANT
At any moment, no more than one TDE protector can be set for a server. It's the key marked with "Make the key
the default TDE protector" in the Azure portal blade. However, multiple additional keys can be linked to a server without
marking them as a TDE protector. These keys are not used for protecting the DEK, but can be used during restore from a
backup, if the backup file is encrypted with the key with the corresponding thumbprint.

If the key that is needed for restoring a backup is no longer available to the target server, the following error
message is returned on the restore try: "Target server <Servername> does not have access to all AKV URIs
created between <Timestamp #1> and <Timestamp #2>. Retry operation after restoring all AKV URIs."
To mitigate it, run the Get-AzSqlServerKeyVaultKey cmdlet for the target server or Get-
AzSqlInstanceKeyVaultKey for the target managed instance to return the list of available keys and identify the
missing ones. To ensure all backups can be restored, make sure the target server for the restore has access to all
of the keys needed. These keys don't need to be marked as the TDE protector.
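A minimal sketch of listing the keys currently registered with a target server so they can be compared against the AKV URIs named in the error (placeholder names):

```powershell
# Keys available to the target logical server.
Get-AzSqlServerKeyVaultKey -ResourceGroupName "sql-rg" -ServerName "target-server-01"

# For a managed instance, the equivalent cmdlet is:
# Get-AzSqlInstanceKeyVaultKey -ResourceGroupName "sql-rg" -InstanceName "target-mi-01"
```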
To learn more about backup recovery for SQL Database, see Recover a database in SQL Database. To learn more
about backup recovery for dedicated SQL pool in Azure Synapse Analytics, see Recover a dedicated SQL pool.
For SQL Server's native backup/restore with SQL Managed Instance, see Quickstart: Restore a database to SQL
Managed Instance.
Another consideration for log files: Backed up log files remain encrypted with the original TDE protector, even if
it was rotated and the database is now using a new TDE protector. At restore time, both keys will be needed to
restore the database. If the log file is using a TDE protector stored in Azure Key Vault, this key will be needed at
restore time, even if the database has been changed to use service-managed TDE in the meantime.

High availability with customer-managed TDE


Even in cases when there's no configured geo-redundancy for the server, it's highly recommended to configure the
server to use two different key vaults in two different regions with the same key material. The key in the
secondary key vault in the other region shouldn't be marked as the TDE protector, and it isn't even allowed. If
there's an outage affecting the primary key vault, and only then, the system will automatically switch to the
other linked key with the same thumbprint in the secondary key vault, if it exists. Note though that the switch won't
happen if the TDE protector is inaccessible because of revoked access rights, or because the key or key vault has
been deleted, as that may indicate the customer intentionally wanted to restrict the server from accessing the key.
Providing the same key material to two key vaults in different regions can be done by creating the key outside of
the key vault, and importing it into both key vaults.
Alternatively, it can be accomplished by generating the key using the primary key vault in one region and cloning
the key into a key vault in a different Azure region. Use the Backup-AzKeyVaultKey cmdlet to retrieve the key in
encrypted format from the primary key vault and then use the Restore-AzKeyVaultKey cmdlet and specify a key
vault in the second region to clone the key. Alternatively, use the Azure portal to back up and restore the key. Key
backup/restore operation is only allowed between key vaults within the same Azure subscription and Azure
geography.
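A minimal sketch of that cloning flow with placeholder vault names, keeping in mind that backup and restore only work within the same subscription and geography:

```powershell
# Export the key from the primary-region vault in protected (encrypted) form ...
Backup-AzKeyVaultKey -VaultName "tde-keyvault-primary" -Name "tde-protector" -OutputFile "tde-protector.blob"

# ... and restore the same key material into the secondary-region vault.
Restore-AzKeyVaultKey -VaultName "tde-keyvault-secondary" -InputFile "tde-protector.blob"
```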
Geo-DR and customer-managed TDE
In both active geo-replication and failover groups scenarios, the primary and secondary servers involved can be
linked either to the same key vault (in any region) or to separate key vaults. If separate key vaults are linked to
the primary and secondary servers, the customer is responsible for keeping the key material across the key vaults
consistent, so that the geo-secondary is in sync and can take over using the same key from its linked key vault if
the primary becomes inaccessible due to an outage in the region and a failover is triggered. Up to four secondaries
can be configured, and chaining (secondaries of secondaries) isn't supported.
To avoid issues while establishing or during geo-replication due to incomplete key material, it's important to
follow these rules when configuring customer-managed TDE (if separate key vaults are used for the primary and
secondary servers):
All key vaults involved must have the same properties, and the same access rights for the respective servers.
All key vaults involved must contain identical key material. It applies not just to the current TDE protector,
but to all previous TDE protectors that may be used in the backup files.
Both initial setup and rotation of the TDE protector must be done on the secondary first, and then on
primary.

To test a failover, follow the steps in Active geo-replication overview. Testing failover should be done regularly to
validate that SQL Database has maintained access permission to both key vaults.
An Azure SQL Database server and a managed instance in one region can now be linked to a key vault in
any other region. The server and key vault do not have to be co-located in the same region. With this, for
simplicity, the primary and secondary servers can be connected to the same key vault (in any region). This will
help avoid scenarios where key material may be out of sync if separate key vaults are used for both the servers.
Azure Key Vault has multiple layers of redundancy in place to make sure that your keys and key vaults remain
available in case of service or region failures. For more information, see Azure Key Vault availability and redundancy.
Azure Policy for customer-managed TDE
Azure Policy can be used to enforce customer-managed TDE during the creation or update of an Azure SQL
Database server or Azure SQL Managed Instance. With this policy in place, any attempts to create or update a
logical server in Azure or managed instance will fail if it isn't configured with a customer-managed key. The
Azure Policy can be applied to the whole Azure subscription, or just within a resource group.
For more information on Azure Policy, see What is Azure Policy? and Azure Policy definition structure.
The following two built-in policies are supported for customer-managed TDE in Azure Policy:
SQL servers should use customer-managed keys to encrypt data at rest
SQL managed instances should use customer-managed keys to encrypt data at rest
The customer-managed TDE policy can be managed by going to the Azure portal, and searching for the Policy
service. Under Definitions , search for customer-managed key.
There are three effects for these policies:
Audit - The default setting, and will only capture an audit report in the Azure Policy activity logs
Deny - Prevents logical server or managed instance creation or update without a customer-managed key
configured
Disabled - Will disable the policy, and won't restrict users from creating or updating a logical server or
managed instance without customer-managed TDE enabled
If the Azure Policy for customer-managed TDE is set to Deny , Azure SQL logical server or managed instance
creation will fail. The details of this failure will be recorded in the Activity log of the resource group.

IMPORTANT
Earlier versions of built-in policies for customer-managed TDE containing the AuditIfNotExist effect have been
deprecated. Existing policy assignments using the deprecated policies are not impacted and will continue to work as
before.

Next steps
You may also want to check the following PowerShell sample scripts for the common operations with customer-
managed TDE:
Rotate the transparent data encryption Protector for SQL Database
Remove a transparent data encryption (TDE) protector for SQL Database
Manage transparent data encryption in SQL Managed Instance with your own key using PowerShell
Additionally, enable Microsoft Defender for SQL to secure your databases and their data, with functionalities for
discovering and mitigating potential database vulnerabilities, and detecting anomalous activities that could
indicate a threat to your databases.
Managed identities for transparent data encryption
with BYOK

APPLIES TO: Azure SQL Database Azure SQL Managed Instance

NOTE
Assigning a user-assigned managed identity for Azure SQL logical servers and Managed Instances is in public preview.

Managed identities in Azure Active Directory (Azure AD) provide Azure services with an automatically managed
identity in Azure AD. This identity can be used to authenticate to any service that supports Azure AD
authentication, such as Azure Key Vault, without any credentials in the code. For more information, see Managed
identity types in Azure.
Managed Identities can be of two types:
System-assigned
User-assigned
Enabling a system-assigned managed identity for Azure SQL logical servers and Managed Instances is already
supported today. Assigning a user-assigned managed identity to the server is now in public preview.
For TDE with customer-managed key (CMK) in Azure SQL, a managed identity on the server is used for
providing access rights to the server on the key vault. For instance, the system-assigned managed identity of the
server should be provided with key vault permissions prior to enabling TDE with CMK on the server.
In addition to the system-assigned managed identity that is already supported for TDE with CMK, a user-
assigned managed identity (UMI) that is assigned to the server can be used to allow the server to access the key
vault. A prerequisite to enable key vault access is to ensure the user-assigned managed identity has been
provided the Get, wrapKey and unwrapKey permissions on the key vault. Since the user-assigned managed
identity is a standalone resource that can be created and granted access to the key vault, TDE with a customer-
managed key can now be enabled at creation time for the server or database.

NOTE
For assigning a user-assigned managed identity to the logical server or managed instance, a user must have the SQL
Server Contributor or SQL Managed Instance Contributor Azure RBAC role along with any other Azure RBAC role
containing the Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action action.

Benefits of using UMI for customer-managed TDE


Enables the ability to pre-authorize key vault access for Azure SQL logical servers or managed instances
by creating a user-assigned managed identity, and granting it access to key vault, even before the server
or database has been created
Allows creation of an Azure SQL logical server with TDE and CMK enabled
Enables the same user-assigned managed identity to be assigned to multiple servers, eliminating the
need to individually turn on system-assigned managed identity for each Azure SQL logical server or
managed instance, and providing it access to key vault
Provides the capability to enforce CMK at server or database creation time with an available built-in
Azure policy

Considerations while using UMI for customer-managed TDE


By default, TDE in Azure SQL uses the primary user-assigned managed identity set on the server for key vault
access. If no user-assigned identities have been assigned to the server, then the system-assigned managed
identity of the server is used for key vault access.
When using the system-assigned managed identity for TDE with CMK, no user-assigned managed identities
should be assigned to the server
When using a user-assigned managed identity for TDE with CMK, assign the identity to the server and set it
as the primary identity for the server
The primary user-assigned managed identity requires continuous key vault access (get, wrapKey, unwrapKey
permissions). If the identity's access to key vault is revoked or sufficient permissions are not provided, the
database will move to Inaccessible state
If the primary user-assigned managed identity is being updated to a different user-assigned managed
identity, the new identity must be given required permissions to the key vault prior to updating the primary
To switch the server from user-assigned to system-assigned managed identity for key vault access, provide
the system-assigned managed identity with the required key vault permissions, then remove all user-
assigned managed identities from the server
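A minimal sketch of assigning a user-assigned managed identity as the server's primary identity for key vault access (placeholder names; the UMI must already hold the get, wrapKey, and unwrapKey permissions on the key vault):

```powershell
$umiId = "/subscriptions/<subscription-id>/resourceGroups/umi-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/umi-01"

# Assign the UMI and make it the primary identity the server uses to reach the key vault.
Set-AzSqlServer -ResourceGroupName "sql-rg" -ServerName "sql-server-01" `
    -IdentityType "UserAssigned" `
    -UserAssignedIdentityId $umiId `
    -PrimaryUserAssignedIdentityId $umiId
```

The TDE protector itself is then registered and set as shown in the customer-managed TDE article above.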

IMPORTANT
The primary user-assigned managed identity being used for TDE with CMK should not be deleted from Azure. Deleting
this identity will lead to the server losing access to key vault and databases becoming inaccessible.

Limitations and known issues


If the key vault is behind a VNet, a user-assigned managed identity cannot be used with customer-managed
TDE. A system-assigned managed identity must be used in this case. A user-assigned managed identity can
only be used when the key vault is not behind a VNet.
When multiple user-assigned managed identities are assigned to the server or managed instance, if a single
identity is removed from the server using the Identity blade of the Azure Portal, the operation succeeds but
the identity does not get removed from the server. Removing all user-assigned managed identities together
from the Azure portal works successfully.
When the server or managed instance is configured with customer-managed TDE and both system-assigned
and user-assigned managed identities are enabled on the server, removing the user-assigned managed
identities from the server without first giving the system-assigned managed identity access to the key vault
results in an Unexpected error occurred message. Ensure the system-assigned managed identity has been
provided key vault access prior to removing the primary user-assigned managed identity (and any other
user-assigned managed identities) from the server.
User Assigned Managed Identity for SQL Managed Instance is currently not supported when AKV firewall is
enabled.

Next steps
Create Azure SQL database configured with user-assigned managed identity and customer-managed TDE
Overview of business continuity with Azure SQL
Database & Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Business continuity in Azure SQL Database and SQL Managed Instance refers to the mechanisms, policies,
and procedures that enable your business to continue operating in the face of disruption, particularly to its
computing infrastructure. In most cases, SQL Database and SQL Managed Instance will handle the
disruptive events that might happen in the cloud environment and keep your applications and business
processes running. However, there are some disruptive events that cannot be handled by SQL Database
automatically, such as:
A user accidentally deleted or updated a row in a table.
A malicious attacker succeeded in deleting data or dropping a database.
An earthquake caused a power outage and temporarily disabled a datacenter, or some other catastrophic natural
disaster occurred.
This overview describes the capabilities that SQL Database and SQL Managed Instance provide for business
continuity and disaster recovery. Learn about options, recommendations, and tutorials for recovering from
disruptive events that could cause data loss or cause your database and application to become unavailable.
Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your
application requires maintenance.

SQL Database features that you can use to provide business continuity


From a database perspective, there are four major potential disruption scenarios:
Local hardware or software failures affecting the database node such as a disk-drive failure.
Data corruption or deletion typically caused by an application bug or human error. Such failures are
application-specific and typically cannot be detected by the database service.
Datacenter outage, possibly caused by a natural disaster. This scenario requires some level of geo-
redundancy with application failover to an alternate datacenter.
Upgrade or maintenance errors, unanticipated issues that occur during planned infrastructure maintenance
or upgrades may require rapid rollback to a prior database state.
To mitigate the local hardware and software failures, SQL Database includes a high availability architecture,
which guarantees automatic recovery from these failures with an up to 99.995% availability SLA.
To protect your business from data loss, SQL Database and SQL Managed Instance automatically create full
database backups weekly, differential database backups every 12 hours, and transaction log backups every 5 -
10 minutes. The backups are stored in RA-GRS storage for at least seven days for all service tiers. All service
tiers except Basic support configurable backup retention period for point-in-time restore, up to 35 days.
SQL Database and SQL Managed Instance also provide several business continuity features that you can use to
mitigate various unplanned scenarios.
Temporal tables enable you to restore row versions from any point in time.
Built-in automated backups and Point in Time Restore enables you to restore complete database to some
point in time within the configured retention period up to 35 days.
You can restore a deleted database to the point at which it was deleted if the server has not been deleted.
Long-term backup retention enables you to keep the backups up to 10 years.
Active geo-replication enables you to create readable replicas and manually failover to any replica in case of
a datacenter outage or application upgrade.
Auto-failover group allows the application to automatically recover in case of a datacenter outage.

Recover a database within the same Azure region


You can use automatic database backups to restore a database to a point in time in the past. This way you can
recover from data corruptions caused by human errors. The point-in-time restore allows you to create a new
database in the same server that represents the state of data prior to the corrupting event. For most databases
the restore operation takes less than 12 hours. It may take longer to recover a very large or very active
database. For more information about recovery time, see database recovery time.
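A minimal PowerShell sketch of a point-in-time restore into a new database on the same server, using placeholder names and an illustrative timestamp:

```powershell
# Resolve the source database, then restore it to a point in time (UTC) as a new database.
$db = Get-AzSqlDatabase -ResourceGroupName "sql-rg" -ServerName "sql-server-01" -DatabaseName "mydb"

Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime "2022-07-10T14:00:00Z" `
    -ResourceGroupName "sql-rg" -ServerName "sql-server-01" `
    -TargetDatabaseName "mydb-restored" `
    -ResourceId $db.ResourceId
```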
If the maximum supported backup retention period for point-in-time restore (PITR) is not sufficient for your
application, you can extend it by configuring a long-term retention (LTR) policy for the database(s). For more
information, see Long-term backup retention.

Compare geo-replication with failover groups


Auto-failover groups simplify the deployment and usage of geo-replication and add the additional capabilities
as described in the following table:

                                                   GEO-REPLICATION   FAILOVER GROUPS

Automatic failover                                 No                Yes

Fail over multiple databases simultaneously        No                Yes

User must update connection string after failover  Yes               No

SQL Managed Instance support                       No                Yes

Can be in same region as primary                   Yes               No

Multiple replicas                                  Yes               No

Supports read-scale                                Yes               Yes

Recover a database to the existing server


Although rare, an Azure datacenter can have an outage. When an outage occurs, it causes a business disruption
that might only last a few minutes or might last for hours.
One option is to wait for your database to come back online when the datacenter outage is over. This works
for applications that can afford to have the database offline. For example, a development project or free trial
you don't need to work on constantly. When a datacenter has an outage, you do not know how long the
outage might last, so this option only works if you don't need your database for a while.
Another option is to restore a database on any server in any Azure region using geo-redundant database
backups (geo-restore). Geo-restore uses a geo-redundant backup as its source and can be used to recover a
database even if the database or datacenter is inaccessible due to an outage.
Finally, you can quickly recover from an outage if you have configured either geo-secondary using active
geo-replication or an auto-failover group for your database or databases. Depending on your choice of these
technologies, you can use either manual or automatic failover. While failover itself takes only a few seconds,
the service will take at least 1 hour to activate it. This is necessary to ensure that the failover is justified by the
scale of the outage. Also, the failover may result in small data loss due to the nature of asynchronous
replication.
As you develop your business continuity plan, you need to understand the maximum acceptable time before the
application fully recovers after the disruptive event. The time required for application to fully recover is known
as Recovery time objective (RTO). You also need to understand the maximum period of recent data updates
(time interval) the application can tolerate losing when recovering from an unplanned disruptive event. The
potential data loss is known as Recovery point objective (RPO).
Different recovery methods offer different levels of RPO and RTO. You can choose a specific recovery method, or
use a combination of methods to achieve full application recovery. The following table compares the RPO and RTO
of each recovery option:

RECOVERY METHOD                           RTO     RPO

Geo-restore from geo-replicated backups   12 h    1 h

Auto-failover groups                      1 h     5 s

Manual database failover                  30 s    5 s

NOTE
Manual database failover refers to failover of a single database to its geo-replicated secondary using the unplanned
mode. See the table earlier in this article for details of the auto-failover RTO and RPO.

Use auto-failover groups if your application meets any of these criteria:


Is mission critical.
Has a service level agreement (SLA) that does not allow for 12 hours or more of downtime.
Downtime may result in financial liability.
Has a high rate of data change and 1 hour of data loss is not acceptable.
The additional cost of active geo-replication is lower than the potential financial liability and associated loss
of business.
You may choose to use a combination of database backups and active geo-replication depending upon your
application requirements. For a discussion of design considerations for stand-alone databases and for elastic
pools using these business continuity features, see Design an application for cloud disaster recovery and Elastic
pool disaster recovery strategies.
The following sections provide an overview of the steps to recover using either database backups or active geo-
replication. For detailed steps including planning requirements, post recovery steps, and information about how
to simulate an outage to perform a disaster recovery drill, see Recover a database in SQL Database from an
outage.
Prepare for an outage
Regardless of the business continuity feature you use, you must:
Identify and prepare the target server, including server-level IP firewall rules, logins, and master database
level permissions.
Determine how to redirect clients and client applications to the new server
Document other dependencies, such as auditing settings and alerts
If you do not prepare properly, bringing your applications online after a failover or a database recovery takes
additional time and likely also requires troubleshooting at a time of stress - a bad combination.
Fail over to a geo-replicated secondary database
If you are using active geo-replication or auto-failover groups as your recovery mechanism, you can configure
an automatic failover policy or use manual unplanned failover. Once initiated, the failover causes the secondary
to become the new primary and ready to record new transactions and respond to queries - with minimal data
loss for the data not yet replicated. For information on designing the failover process, see Design an application
for cloud disaster recovery.
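A minimal sketch of initiating a failover-group failover from PowerShell, run against the secondary server that should become the new primary (placeholder names):

```powershell
# Promote the secondary server of the failover group to primary.
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName "sql-rg-secondary" `
    -ServerName "sql-server-secondary" `
    -FailoverGroupName "my-failover-group"
# Add -AllowDataLoss to force an unplanned failover when the primary is unreachable.
```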

NOTE
When the datacenter comes back online the old primaries automatically reconnect to the new primary and become
secondary databases. If you need to relocate the primary back to the original region, you can initiate a planned failover
manually (failback).

Perform a geo-restore


If you are using the automated backups with geo-redundant storage (enabled by default), you can recover the
database using geo-restore. Recovery usually takes place within 12 hours - with data loss of up to one hour
determined by when the last log backup was taken and replicated. Until the recovery completes, the database is
unable to record any transactions or respond to any queries. Note, geo-restore only restores the database to the
last available point in time.

NOTE
If the datacenter comes back online before you switch your application over to the recovered database, you can cancel
the recovery.
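A minimal sketch of a geo-restore onto a server in another region, with placeholder names:

```powershell
# Locate the latest geo-redundant backup of the source database ...
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "sql-rg" `
    -ServerName "sql-server-01" -DatabaseName "mydb"

# ... and restore it to a server in the recovery region.
Restore-AzSqlDatabase -FromGeoBackup `
    -ResourceGroupName "dr-rg" -ServerName "sql-server-dr" `
    -TargetDatabaseName "mydb-recovered" `
    -ResourceId $geoBackup.ResourceID
```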

Perform post failover / recovery tasks


After recovery from either recovery mechanism, you must perform the following additional tasks before your
users and applications are back up and running:
Redirect clients and client applications to the new server and restored database.
Ensure appropriate server-level IP firewall rules are in place for users to connect or use database-level
firewalls to enable appropriate rules.
Ensure appropriate logins and master database level permissions are in place (or use contained users).
Configure auditing, as appropriate.
Configure alerts, as appropriate.

NOTE
If you are using a failover group and connect to the databases using the read-write listener, the redirection after failover
will happen automatically and transparently to the application.

Upgrade an application with minimal downtime


Sometimes an application must be taken offline because of planned maintenance such as an application
upgrade. Manage application upgrades describes how to use active geo-replication to enable rolling upgrades
of your cloud application to minimize downtime during upgrades and provide a recovery path if something
goes wrong.

Next steps
For a discussion of application design considerations for single databases and for elastic pools, see Design an
application for cloud disaster recovery and Elastic pool disaster recovery strategies.
High availability for Azure SQL Database and SQL
Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


The goal of the high availability architecture in Azure SQL Database and SQL Managed Instance is to guarantee
that your database is up and running a minimum of 99.99% of the time without worrying about the impact of
maintenance operations and outages. For more information regarding specific SLA for different tiers, refer to
SLA for Azure SQL Database and SLA for Azure SQL Managed Instance.
Azure automatically handles critical servicing tasks, such as patching, backups, Windows and Azure SQL
upgrades, and unplanned events such as underlying hardware, software, or network failures. When the
underlying database in Azure SQL Database is patched or fails over, the downtime is not noticeable if you
employ retry logic in your app. SQL Database and SQL Managed Instance can quickly recover even in the most
critical circumstances ensuring that your data is always available.
The high availability solution is designed to ensure that committed data is never lost due to failures, that
maintenance operations do not affect your workload, and that the database will not be a single point of failure in
your software architecture. There are no maintenance windows or downtimes that should require you to stop
the workload while the database is upgraded or maintained.
There are two high availability architectural models:
Standard availability model that is based on a separation of compute and storage. It relies on high
availability and reliability of the remote storage tier. This architecture targets budget-oriented business
applications that can tolerate some performance degradation during maintenance activities.
Premium availability model that is based on a cluster of database engine processes. It relies on the fact
that there is always a quorum of available database engine nodes. This architecture targets mission-critical
applications with high IO performance, high transaction rate and guarantees minimal performance impact to
your workload during maintenance activities.
SQL Database and SQL Managed Instance both run on the latest stable version of the SQL Server database
engine and Windows operating system, and most users would not notice that upgrades are performed
continuously.

Basic, Standard, and General Purpose service tier locally redundant availability


The Basic, Standard, and General Purpose service tiers use the standard availability architecture for both
serverless and provisioned compute. The following figure shows four different nodes with the separated
compute and storage layers.
The standard availability model includes two layers:
A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data,
such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in
memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe , controls health
of the node, and performs failover to another node if necessary.
A stateful data layer with the database files (.mdf/.ldf) that are stored in Azure Blob storage. Azure blob
storage has built-in data availability and redundancy feature. It guarantees that every record in the log file or
page in the data file will be preserved even if sqlservr.exe process crashes.

Whenever the database engine or the operating system is upgraded, or a failure is detected, Azure Service
Fabric will move the stateless sqlservr.exe process to another stateless compute node with sufficient free
capacity. Data in Azure Blob storage is not affected by the move, and the data/log files are attached to the newly
initialized sqlservr.exe process. This process guarantees 99.99% availability, but a heavy workload may
experience some performance degradation during the transition since the new sqlservr.exe process starts with
a cold cache.

General Purpose service tier zone redundant availability


Zone-redundant configuration for the General Purpose service tier is offered for both serverless and
provisioned compute. This configuration utilizes Azure Availability Zones to replicate databases across multiple
physical locations within an Azure region. By selecting zone-redundancy, you can make your new and existing
serverless and provisioned General Purpose single databases and elastic pools resilient to a much larger set of
failures, including catastrophic datacenter outages, without any changes to the application logic.
Zone-redundant configuration for the General Purpose tier has two layers:
A stateful data layer with the database files (.mdf/.ldf) that are stored in ZRS (zone-redundant storage). Using
ZRS, the data and log files are synchronously copied across three physically isolated Azure availability zones.
A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data,
such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in
memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of
the node, and performs failover to another node if necessary. For zone-redundant serverless and provisioned
General Purpose databases, nodes with spare capacity are readily available in other Availability Zones for
failover.
The zone-redundant version of the high availability architecture for the General Purpose service tier is illustrated
by the following diagram:

IMPORTANT
For General Purpose tier the zone-redundant configuration is Generally Available in the following regions: West Europe,
North Europe, West US 2, and France Central. This is in preview in the following regions: East US, East US 2, Southeast
Asia, Australia East, Japan East, and UK South.

NOTE
Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available
when the Gen5 hardware is selected.

Premium and Business Critical service tier locally redundant availability


Premium and Business Critical service tiers use the Premium availability model, which integrates compute
resources ( sqlservr.exe process) and storage (locally attached SSD) on a single node. High availability is
achieved by replicating both compute and storage to additional nodes creating a three to four-node cluster.
The underlying database files (.mdf/.ldf) are placed on the attached SSD storage to provide very low latency IO
to your workload. High availability is implemented using a technology similar to SQL Server Always On
availability groups. The cluster includes a single primary replica that is accessible for read-write customer
workloads, and up to three secondary replicas (compute and storage) containing copies of data. The primary
node constantly pushes changes to the secondary nodes in order and ensures that the data is persisted to at
least one secondary replica before committing each transaction. This process guarantees that if the primary
node crashes for any reason, there is always a fully synchronized node to fail over to. The failover is initiated by
the Azure Service Fabric. Once the secondary replica becomes the new primary node, another secondary replica
is created to ensure the cluster has enough nodes (quorum set). Once failover is complete, Azure SQL
connections are automatically redirected to the new primary node.
As an extra benefit, the premium availability model includes the ability to redirect read-only Azure SQL
connections to one of the secondary replicas. This feature is called Read Scale-Out. It provides 100% additional
compute capacity at no extra charge to off-load read-only operations, such as analytical workloads, from the
primary replica.

Premium and Business Critical service tier zone redundant availability


By default, the cluster of nodes for the premium availability model is created in the same datacenter. With the
introduction of Azure Availability Zones, SQL Database can place different replicas of the Business Critical
database to different availability zones in the same region. To eliminate a single point of failure, the control ring
is also duplicated across multiple zones as three gateway rings (GW). The routing to a specific gateway ring is
controlled by Azure Traffic Manager (ATM). Because the zone-redundant configuration in the Premium or
Business Critical service tiers does not create additional database redundancy, you can enable it at no extra cost.
By selecting a zone-redundant configuration, you can make your Premium or Business Critical databases
resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to the
application logic. You can also convert any existing Premium or Business Critical databases or pools to the zone-
redundant configuration.
Because the zone-redundant databases have replicas in different datacenters with some distance between them,
the increased network latency may increase the commit time and thus impact the performance of some OLTP
workloads. You can always return to the single-zone configuration by disabling the zone-redundancy setting.
This process is an online operation similar to the regular service tier upgrade. At the end of the process, the
database or pool is migrated from a zone-redundant ring to a single zone ring or vice versa.
IMPORTANT
This feature is not available in SQL Managed Instance. In SQL Database, when using the Business Critical tier, zone-
redundant configuration is only available when the Gen5 hardware is selected. For up to date information about the
regions that support zone-redundant databases, see Services support by region.

The zone-redundant version of the high availability architecture is illustrated by the following diagram:

Hyperscale service tier locally redundant availability


The Hyperscale service tier architecture is described in Distributed functions architecture and is only currently
available for SQL Database, not SQL Managed Instance.
The availability model in Hyperscale includes four layers:
A stateless compute layer that runs the sqlservr.exe processes and contains only transient and cached data,
such as non-covering RBPEX cache, TempDB, model database, etc. on the attached SSD, and plan cache, buffer
pool, and columnstore pool in memory. This stateless layer includes the primary compute replica and
optionally a number of secondary compute replicas that can serve as failover targets.
A stateless storage layer formed by page servers. This layer is the distributed storage engine for the
sqlservr.exe processes running on the compute replicas. Each page server contains only transient and
cached data, such as covering RBPEX cache on the attached SSD, and data pages cached in memory. Each
page server has a paired page server in an active-active configuration to provide load balancing, redundancy,
and high availability.
A stateful transaction log storage layer formed by the compute node running the Log service process, the
transaction log landing zone, and transaction log long-term storage. Landing zone and long-term storage
use Azure Storage, which provides availability and redundancy for transaction log, ensuring data durability
for committed transactions.
A stateful data storage layer with the database files (.mdf/.ndf) that are stored in Azure Storage and are
updated by page servers. This layer uses data availability and redundancy features of Azure Storage. It
guarantees that every page in a data file will be preserved even if processes in other layers of Hyperscale
architecture crash, or if compute nodes fail.
Compute nodes in all Hyperscale layers run on Azure Service Fabric, which controls health of each node and
performs failovers to available healthy nodes as necessary.
For more information on high availability in Hyperscale, see Database High Availability in Hyperscale.

Hyperscale service tier zone redundant availability (Preview)


Zone redundancy for the Azure SQL Database Hyperscale service tier is now in public preview. Enabling this
configuration ensures zone-level resiliency through replication across Availability Zones for all Hyperscale
layers. By selecting zone-redundancy, you can make your Hyperscale databases resilient to a much larger set of
failures, including catastrophic datacenter outages, without any changes to the application logic.
Consider the following limitations:
Zone redundant configuration can only be specified during database creation. This setting cannot be
modified once the resource is provisioned. Use database copy, point-in-time restore, or geo-replica creation
to update the zone redundant configuration for an existing Hyperscale database. When using one of these
update options, if the target database is in a different region than the source, or if the backup storage
redundancy of the target differs from that of the source database, the copy operation will be a size-of-data
operation.
Only Gen5 hardware is supported.
Named replicas are not currently supported.
Only zone-redundant backup is currently supported.
Geo-Restore is not currently supported.
Zone redundancy cannot currently be specified when migrating an existing database from another Azure
SQL Database service tier to Hyperscale.
Preview is not yet available in the US Gov Virginia and China North 3 regions. All other Azure regions that have
Availability Zones support zone-redundant Hyperscale databases.

IMPORTANT
At least 1 high availability compute replica and the use of zone-redundant backup storage is required for enabling the
zone redundant configuration for Hyperscale.

Create a zone redundant Hyperscale database


Use Azure PowerShell or the Azure CLI to create a zone redundant Hyperscale database. Confirm you have the
latest version of the API to ensure support for any recent changes.

Azure PowerShell
Azure CLI

Specify the -ZoneRedundant parameter to enable zone redundancy for your Hyperscale database by using Azure
PowerShell. The database must have at least 1 high availability replica and zone-redundant backup storage must
be specified.
To enable zone redundancy using Azure PowerShell, use the following example command:

New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database01" `
    -Edition "Hyperscale" -HighAvailabilityReplicaCount 1 -ZoneRedundant -BackupStorageRedundancy Zone

Create a zone redundant Hyperscale database by creating a geo-replica


To make an existing Hyperscale database zone redundant, use Azure PowerShell or the Azure CLI to create a
zone redundant Hyperscale database using active geo-replication. The geo-replica can be in the same or
different region as the existing Hyperscale database.

Azure PowerShell
Azure CLI

Specify the -ZoneRedundant parameter to enable zone redundancy for your Hyperscale database secondary. The
secondary database must have at least 1 high availability replica and zone-redundant backup storage must be
specified.
To create your zone redundant database using Azure PowerShell, use the following example command:

New-AzSqlDatabaseSecondary -ResourceGroupName "myResourceGroup" -ServerName $sourceserver -DatabaseName "databaseName" `
    -PartnerResourceGroupName "myPartnerResourceGroup" -PartnerServerName $targetserver `
    -PartnerDatabaseName "zoneRedundantCopyOfMySampleDatabase" -ZoneRedundant -BackupStorageRedundancy Zone `
    -HighAvailabilityReplicaCount 1

Create a zone redundant Hyperscale database by creating a database copy


To make an existing Hyperscale database zone redundant, use Azure PowerShell or the Azure CLI to create a
zone redundant Hyperscale database using database copy. The database copy can be in the same or different
region as the existing Hyperscale database.

Azure PowerShell
Azure CLI

Specify the -ZoneRedundant parameter to enable zone redundancy for your Hyperscale database copy. The
database copy must have at least 1 high availability replica and zone-redundant backup storage must be
specified.
To create your zone redundant database using Azure PowerShell, use the following example command:

New-AzSqlDatabaseCopy -ResourceGroupName "myResourceGroup" -ServerName $sourceserver -DatabaseName "databaseName" `
    -CopyResourceGroupName "myCopyResourceGroup" -CopyServerName $copyserver `
    -CopyDatabaseName "zoneRedundantCopyOfMySampleDatabase" -ZoneRedundant -BackupStorageRedundancy Zone

Master database zone redundant availability


In Azure SQL Database, a server is a logical construct that acts as a central administrative point for a collection
of databases. At the server level, you can administer logins, Azure Active Directory authentication, firewall rules,
auditing rules, threat detection policies, and auto-failover groups. Data related to some of these features, such as
logins and firewall rules, is stored in the master database. Similarly, data for some DMVs, for example
sys.resource_stats, is also stored in the master database.
When a database with a zone-redundant configuration is created on a logical server, the master database
associated with the server is automatically made zone-redundant as well. This ensures that in a zonal outage,
applications using the database remain unaffected because features dependent on the master database, such as
logins and firewall rules, are still available. Making the master database zone-redundant is an asynchronous
process and will take some time to finish in the background.
When none of the databases on a server are zone-redundant, or when you create an empty server, the
master database associated with the server is not zone-redundant.
You can use Azure PowerShell, the Azure CLI, or the REST API to check the ZoneRedundant property for the
master database:
Azure PowerShell
Azure CLI

Use the following example command to check the value of the ZoneRedundant property for the master database.

Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myServerName" -DatabaseName "master"
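If you want to see just the zone-redundancy flag, you can pipe the output to Select-Object. The resource names below are placeholders; this is a minimal sketch rather than a required step.

# Show only the zone-redundancy flag for the master database
Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myServerName" -DatabaseName "master" |
    Select-Object DatabaseName, ZoneRedundant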

Accelerated Database Recovery (ADR)


Accelerated Database Recovery (ADR) is a database engine feature that greatly improves database
availability, especially in the presence of long-running transactions. ADR is currently available for Azure SQL
Database, Azure SQL Managed Instance, and Azure Synapse Analytics.

Testing application fault resiliency


High availability is a fundamental part of the SQL Database and SQL Managed Instance platform that works
transparently for your database application. However, we recognize that you may want to test how the
automatic failover operations initiated during planned or unplanned events would impact an application before
you deploy it to production. You can manually trigger a failover by calling a special API to restart a database, an
elastic pool, or a managed instance. In the case of a zone-redundant serverless or provisioned General Purpose
database or elastic pool, the API call would result in redirecting client connections to the new primary in an
Availability Zone different from the Availability Zone of the old primary. So in addition to testing how failover
impacts existing database sessions, you can also verify if it changes the end-to-end performance due to changes
in network latency. Because the restart operation is intrusive and a large number of them could stress the
platform, only one failover call is allowed every 15 minutes for each database, elastic pool, or managed instance.
A failover can be initiated using PowerShell, REST API, or Azure CLI:

| Deployment type | PowerShell | REST API | Azure CLI |
| --- | --- | --- | --- |
| Database | Invoke-AzSqlDatabaseFailover | Database failover | az rest may be used to invoke a REST API call from Azure CLI |
| Elastic pool | Invoke-AzSqlElasticPoolFailover | Elastic pool failover | az rest may be used to invoke a REST API call from Azure CLI |
| Managed Instance | Invoke-AzSqlInstanceFailover | Managed Instances - Failover | az sql mi failover may be used to invoke a REST API call from Azure CLI |

IMPORTANT
The Failover command is not available for readable secondary replicas of Hyperscale databases.
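As an illustration of the cmdlets listed in the table above, the following minimal sketch triggers a manual failover of a single database to test application resiliency. The resource names are placeholders, and the call is subject to the 15-minute limit described earlier.

# Trigger a manual failover of a single database to test application fault resiliency
Invoke-AzSqlDatabaseFailover -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mydb"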

Conclusion
Azure SQL Database and Azure SQL Managed Instance feature a built-in high availability solution that is deeply
integrated with the Azure platform. It is dependent on Service Fabric for failure detection and recovery, on Azure
Blob storage for data protection, and on Availability Zones for higher fault tolerance (as mentioned earlier in this
document, not yet applicable to Azure SQL Managed Instance). In addition, SQL Database and SQL Managed
Instance use the Always On availability group technology from the SQL Server instance for replication and
failover. The combination of these technologies enables applications to fully realize the benefits of a mixed
storage model and support the most demanding SLAs.

Next steps
Learn about Azure Availability Zones
Learn about Service Fabric
Learn about Azure Traffic Manager
Learn How to initiate a manual failover on SQL Managed Instance
For more options for high availability and disaster recovery, see Business Continuity
Automated backups - Azure SQL Database & Azure
SQL Managed Instance
7/12/2022 • 44 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance

NOTE
This article provides steps about how to delete personal data from the device or service and can be used to support your
obligations under the GDPR. For general information about GDPR, see the GDPR section of the Microsoft Trust Center
and the GDPR section of the Service Trust portal.

What is a database backup?


Database backups are an essential part of any business continuity and disaster recovery strategy, because they
protect your data from corruption or deletion. These backups enable database restore to a point in time within
the configured retention period. If your data protection rules require that your backups are available for an
extended time (up to 10 years), you can configure long-term retention for both single and pooled databases.

Backup and restore essentials


Databases in Azure SQL Managed Instance and non-Hyperscale databases in Azure SQL Database use SQL
Server engine technology to back up and restore data. Hyperscale databases have a unique architecture and
leverage a different technology for backup and restore. To learn more, see Hyperscale backups and storage
redundancy.
Backup frequency
Both Azure SQL Database and SQL Managed Instance use SQL Server technology to create full backups every
week, differential backups every 12-24 hours, and transaction log backups every 10 minutes. The frequency of
transaction log backups is based on the compute size and the amount of database activity.
When you restore a database, the service determines which full, differential, and transaction log backups need
to be restored.
Hyperscale databases use snapshot backup technology.
Backup storage redundancy
By default, Azure SQL Database and Azure SQL Managed Instance store data in geo-redundant storage blobs
that are replicated to a paired region. Geo-redundancy helps to protect against outages impacting backup
storage in the primary region and allows you to restore your server to a different region in the event of a
disaster.
To ensure that your data stays within the same region where your database or managed instance is deployed,
you can change the default geo-redundant backup storage redundancy. The storage redundancy mechanism
stores multiple copies of your data so that it is protected from planned and unplanned events, including
transient hardware failure, network or power outages, or massive natural disasters. The configured backup
storage redundancy is applied to both short-term backup retention settings that are used for point in time
restore (PITR) and long-term retention backups used for long-term backups (LTR).
Backup storage redundancy can be configured when you create your database or instance, and can be updated
at a later time; the changes made to an existing database apply to future backups only. After the backup storage
redundancy of an existing database is updated, it may take up to 48 hours for the changes to be applied. Geo-
restore is disabled as soon as a database is updated to use local or zone redundant storage. For Hyperscale
databases, the selected storage redundancy option will be used for the lifetime of the database for both data
storage redundancy and backup storage redundancy. Learn more in Hyperscale backups and storage
redundancy.
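For example, assuming your installed Az.Sql version supports the -BackupStorageRedundancy parameter, the following minimal sketch changes the backup storage redundancy for an existing database in SQL Database (resource names are placeholders):

# Switch an existing database's backup storage redundancy to zone-redundant storage.
# The change applies to future backups only; geo-restore becomes unavailable with local or zone redundancy.
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mydb" `
    -BackupStorageRedundancy Zone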
The option to configure backup storage redundancy provides flexibility to choose one of the following storage
redundancies for backups:
Locally-redundant (LRS): Copies your backups synchronously three times within a single physical location
in the primary region. LRS is the least expensive replication option, but isn't recommended for applications
requiring high availability or durability.
Zone-redundant (ZRS): Copies your backups synchronously across three Azure availability zones in the
primary region.
Geo-redundant (GRS): Copies your backups synchronously three times within a single physical location in
the primary region using LRS, then copies your data asynchronously to a single physical location in the
paired secondary region.
Geo-zone-redundant (GZRS): (Azure SQL Managed Instance only) Combines the high availability
provided by redundancy across availability zones with protection from regional outages provided by geo-
replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary
region and is also replicated to a secondary geographic region for protection from regional disasters.
To learn more about storage redundancy, see Data redundancy.

IMPORTANT
Zone-redundant storage is currently only available in certain regions.

Backup usage
You can use these backups to:
Point-in-time restore of existing database - Restore an existing database to a point in time in the past
within the retention period by using the Azure portal, Azure PowerShell, Azure CLI, or REST API. For SQL
Database, this operation creates a new database on the same server as the original database, but uses a
different name to avoid overwriting the original database. After restore completes, you can delete the
original database. Alternatively, you can rename the original database, and then rename the restored
database to the original database name. Similarly, for SQL Managed Instance, this operation creates a copy of
the database on the same or different managed instance in the same subscription and same region.
Point-in-time restore of deleted database - Restore a deleted database to the time of deletion or to any
point in time within the retention period. The deleted database can be restored only on the same server or
managed instance where the original database was created. When deleting a database, the service takes a
final transaction log backup before deletion, to prevent any data loss.
Geo-restore - Restore a database to another geographic region. Geo-restore allows you to recover from a
geographic disaster when you cannot access your database or backups in the primary region. It creates a
new database on any existing server or managed instance, in any Azure region.

IMPORTANT
Geo-restore is available only for databases in Azure SQL Database or SQL Managed Instances configured with
geo-redundant backup storage. If you are not currently using geo-replicated backups for a database, you can
change this by configuring backup storage redundancy.
Restore from long-term backup - Restore a database from a specific long-term backup of a single
database or pooled database, if the database has been configured with a long-term retention policy (LTR).
LTR allows you to restore an old version of the database by using the Azure portal, Azure CLI, or Azure
PowerShell to satisfy a compliance request or to run an old version of the application. For more information,
see Long-term retention.

NOTE
In Azure Storage, the term replication refers to copying blobs from one location to another. In SQL, database replication
refers to various technologies used to keep multiple secondary databases synchronized with a primary database.

Restore capabilities and features of Azure SQL Database and Azure SQL Managed Instance
This table summarizes the capabilities and features of point in time restore (PITR), geo-restore, and long-term
retention backups.

| Backup properties | Point in time recovery (PITR) | Geo-restore | Long-term backup restore |
| --- | --- | --- | --- |
| Types of SQL backup | Full, Differential, Log | Replicated copies of PITR backups | Only the full backups |
| Recovery Point Objective (RPO) | 10 minutes, based on compute size and amount of database activity. | Up to 1 hour, based on geo-replication.* | One week (or user's policy). |
| Recovery Time Objective (RTO) | Restore usually takes <12 hours, but could take longer dependent on size and activity. See Recovery. | Restore usually takes <12 hours, but could take longer dependent on size and activity. See Recovery. | Restore usually takes <12 hours, but could take longer dependent on size and activity. See Recovery. |
| Retention | 7 days by default, up to 35 days | Enabled by default, same as source.** | Not enabled by default, retention up to 10 years. |
| Azure storage | Geo-redundant by default. Can optionally configure zone or locally redundant storage. | Available when PITR backup storage redundancy is set to geo-redundant. Not available when PITR backup store is zone or locally redundant storage. | Geo-redundant by default. Can configure zone or locally redundant storage. |
| Use to create new database in same region | Supported | Supported | Supported |
| Use to create new database in another region | Not supported | Supported in any Azure region | Supported in any Azure region |
| Use to create new database in another subscription | Not supported | Not supported*** | Not supported*** |
| Restore via Azure portal | Yes | Yes | Yes |
| Restore via PowerShell | Yes | Yes | Yes |
| Restore via Azure CLI | Yes | Yes | Yes |

* For business-critical applications that require large databases and must ensure business continuity, use Auto-
failover groups.
** All PITR backups are stored on geo-redundant storage by default. Hence, geo-restore is enabled by default.
*** Workaround is to restore to a new server and use Resource Move to move the server to another
Subscription.
Restoring a database from backups
To perform a restore, see Restore database from backups. You can try backup configuration and restore
operations using the following examples:

| Operation | Azure portal | Azure CLI | Azure PowerShell |
| --- | --- | --- | --- |
| Change backup retention | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance |
| Change long-term backup retention | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance |
| Restore a database from a point in time | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance |
| Restore a deleted database | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance | SQL Database, SQL Managed Instance |
| Restore a database from Azure Blob storage | | | SQL Managed Instance |

Backup scheduling
The first full backup is scheduled immediately after a new database is created or restored. This backup usually
completes within 30 minutes, but it can take longer when the database is large. For example, the initial backup
can take longer on a restored database or a database copy, which would typically be larger than a new database.
After the first full backup, all further backups are scheduled and managed automatically. The exact timing of all
database backups is determined by the SQL Database or SQL Managed Instance service as it balances the
overall system workload. You cannot change the schedule of backup jobs or disable them. Hyperscale uses a
different backup scheduling mechanism; refer to Hyperscale backup scheduling for more details.

IMPORTANT
For a new, restored, or copied database, point-in-time restore capability becomes available from the time when the initial
transaction log backup that follows the initial full backup is created.

Backup storage consumption


With SQL Server backup and restore technology, restoring a database to a point in time requires an
uninterrupted backup chain consisting of one full backup, optionally one differential backup, and one or more
transaction log backups. Azure SQL Database and Azure SQL Managed Instance backup schedules include one
full backup every week. Therefore, to provide PITR within the entire retention period, the system must store
additional full, differential, and transaction log backups for up to a week longer than the configured retention
period.
In other words, for any point in time during the retention period, there must be a full backup that is older than
the oldest time of the retention period, as well as an uninterrupted chain of differential and transaction log
backups from that full backup until the next full backup. Hyperscale databases use a different backup scheduling
mechanism, for more details see Hyperscale backup scheduling and for more details on how to monitor storage
costs see Hyperscale backup storage costs.

NOTE
To provide PITR, additional backups are stored for up to a week longer than the configured retention period. Backup
storage is charged at the same rate for all backups.

Backups that are no longer needed to provide PITR functionality are automatically deleted. Because differential
backups and log backups require an earlier full backup to be restorable, all three backup types are purged
together in weekly sets.
For all databases, including TDE encrypted databases, backups are compressed to reduce backup storage
consumption and costs. The average backup compression ratio is 3-4 times; however, it can be significantly lower
or higher depending on the nature of the data and whether data compression is used in the database.
Azure SQL Database and Azure SQL Managed Instance compute your total used backup storage as a cumulative
value. Every hour, this value is reported to the Azure billing pipeline, which is responsible for aggregating this
hourly usage to calculate your consumption at the end of each month. After the database is deleted,
consumption decreases as backups age out and are deleted. Once all backups are deleted and PITR is no longer
possible, billing stops.

IMPORTANT
Backups of a database are retained to provide PITR even if the database has been deleted. While deleting and re-creating
a database may save storage and compute costs, it may increase backup storage costs, because the service retains
backups for each deleted database, every time it is deleted.

Monitor consumption
For vCore databases in Azure SQL Database, the storage consumed by each type of backup (full, differential, and
log) is reported on the database monitoring pane as a separate metric. The following diagram shows how to
monitor the backup storage consumption for a single database. This feature is currently not available for
managed instances.
Instructions on how to monitor consumption in Hyperscale can be found in Hyperscale monitor backup
consumption.
Fine-tune backup storage consumption
Backup storage consumption up to the maximum data size for a database is not charged. Excess backup storage
consumption will depend on the workload and maximum size of the individual databases. Consider some of the
following tuning techniques to reduce your backup storage consumption:
Reduce the backup retention period to the minimum possible for your needs.
Avoid doing large write operations, like index rebuilds, more frequently than you need to.
For large data load operations, consider using clustered columnstore indexes and following related best
practices, and/or reduce the number of non-clustered indexes.
In the General Purpose service tier, the provisioned data storage is less expensive than the price of the
backup storage. If you have continually high excess backup storage costs, you might consider increasing data
storage to save on the backup storage.
Use TempDB instead of permanent tables in your application logic for storing temporary results and/or
transient data.
Use locally redundant backup storage whenever possible (for example, dev/test environments).

Backup retention
Azure SQL Database and Azure SQL Managed Instance provide both short-term and long-term retention of
backups. Short-term retention backups allow Point-In-Time-Restore (PITR) within the retention period for the
database, while long-term retention provides backups for various compliance requirements.
Short-term retention
For all new, restored, and copied databases, Azure SQL Database and Azure SQL Managed Instance retain
sufficient backups to allow PITR within the last seven days by default. Regular full, differential and log backups
are taken to ensure databases are restorable to any point-in-time within the retention period defined for the
database or managed instance. Short-term backup retention of 1-35 days for Hyperscale databases is now in
preview. To learn more, review Managing backup retention in Hyperscale.
For Azure SQL Database only, differential backups can be configured to either a 12-hour or a 24-hour frequency.
A 24-hour differential backup frequency may increase the time required to restore the database.
You can specify your backup storage redundancy option for short-term retention (STR) when you create your
Azure SQL resource, and then change it at a later time. If you change your backup redundancy option after your
database or instance is created, new backups will use the new redundancy option while backup copies made
with the previous STR redundancy option are not moved or copied, but are left in the original storage account
until the retention period expires, which can be 7-35 days.
Except for Basic tier databases, you can change backup retention period per each active database in the 1-35 day
range. As described in Backup storage consumption, backups stored to enable PITR may be older than the
retention period. For Azure SQL Managed Instance only, it is possible to set the PITR backup retention rate once
a database has been deleted in the 0-35 days range. If you need to keep backups for longer than the maximum
short-term retention period of 35 days, you can enable Long-term retention.
If you delete a database, the system keeps backups in the same way it would for an online database with its
specific retention period. You cannot change backup retention period for a deleted database.

IMPORTANT
If you delete a server or a managed instance, all databases on that server or managed instance are also deleted and
cannot be recovered. You cannot restore a deleted server or managed instance. But if you had configured long-term
retention (LTR) for a database or managed instance, long-term retention backups are not deleted, and can be used to
restore databases on a different server or managed instance in the same subscription, to a point in time when a long-
term retention backup was taken.

Long-term retention
For both SQL Database and SQL Managed Instance, you can configure full backup long-term retention (LTR) for
up to 10 years in Azure Blob storage. After the LTR policy is configured, full backups are automatically copied to
a different storage container weekly. To meet various compliance requirements, you can select different
retention periods for weekly, monthly, and/or yearly full backups. The frequency depends on the policy. For
example, setting W=0, M=1 would create an LTR copy monthly. For more information about LTR, see Long-term
backup retention.
Storage redundancy for long-term retention can be changed after the LTR policy is created for Azure SQL
Database, but not Azure SQL Managed Instance.
Storage consumption depends on the selected frequency and retention periods of LTR backups. You can use the
LTR pricing calculator to estimate the cost of LTR storage.
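As a hedged illustration of such a policy (resource names are placeholders; adjust the ISO 8601 retention values to your compliance needs), Azure PowerShell can set weekly, monthly, and yearly LTR retention in one call:

# Keep weekly LTR backups for 4 weeks, monthly LTR backups for 12 months,
# and the yearly backup taken in week 1 for 5 years
Set-AzSqlDatabaseBackupLongTermRetentionPolicy -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -DatabaseName "mydb" -WeeklyRetention P4W -MonthlyRetention P12M -YearlyRetention P5Y -WeekOfYear 1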

IMPORTANT
Updating the backup storage redundancy for an existing Azure SQL Database only applies the change to subsequent
backups taken in the future for the database. All existing LTR backups for the database will continue to reside in the
existing storage blob and new backups will be stored on the newly requested storage blob type.
LTR backup storage redundancy in Azure SQL Managed Instance is inherited from the backup storage redundancy
used by STR at the time the LTR policy is defined and cannot be changed subsequently, even if the STR backup storage
redundancy is changed in the future.
Databases in the Hyperscale service tier for Azure SQL Database do not currently support long-term retention.

Backup storage costs


The price for backup storage varies and depends on your purchasing model (DTU or vCore), chosen backup
storage redundancy option, and also on your region. The backup storage is charged per GB/month consumed,
for pricing see Azure SQL Database pricing page and Azure SQL Managed Instance pricing page.
For more on purchasing models, see Choose between the vCore and DTU purchasing models.
NOTE
The Azure invoice will show only the excess backup storage consumed, not the entire backup storage consumption.
For example, in a hypothetical scenario, if you have provisioned 4 TB of data storage, you get 4 TB of free backup
storage space. If you have used a total of 5.8 TB of backup storage space, the Azure invoice will show only 1.8 TB, as
only excess backup storage used is charged.

DTU model
In the DTU model, there's no additional charge for backup storage for databases and elastic pools. The price of
backup storage is a part of database or pool price.
vCore model
For single databases in SQL Database, a backup storage amount equal to 100 percent of the maximum data
storage size for the database is provided at no extra charge. For elastic pools and managed instances, a backup
storage amount equal to 100 percent of the maximum data storage for the pool or the maximum instance
storage size, respectively, is provided at no extra charge.
For single databases, this equation is used to calculate the total billable backup storage usage:
Total billable backup storage size = (size of full backups + size of differential backups + size of log
backups) – maximum data storage
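For example (hypothetical numbers): a single database with a maximum data storage size of 250 GB that has accumulated 200 GB of full backups, 50 GB of differential backups, and 30 GB of log backups would be billed for (200 + 50 + 30) – 250 = 30 GB of backup storage.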

For pooled databases, the total billable backup storage size is aggregated at the pool level and is calculated as
follows:
Total billable backup storage size = (total size of all full backups + total size of all differential backups
+ total size of all log backups) - maximum pool data storage

For managed instances, the total billable backup storage size is aggregated at the instance level and is calculated
as follows:
Total billable backup storage size = (total size of full backups + total size of differential backups + total
size of log backups) – maximum instance data storage

Total billable backup storage, if any, will be charged in GB/month as per the rate of the backup storage
redundancy used. This backup storage consumption will depend on the workload and size of individual
databases, elastic pools, and managed instances. Heavily modified databases have larger differential and log
backups, because the size of these backups is proportional to the amount of changed data. Therefore, such
databases will have higher backup charges.
Formulae used to calculate backup storage costs for Hyperscale databases can be found in Hyperscale backup
storage costs.
Azure SQL Database and Azure SQL Managed Instance compute your total billable backup storage as a
cumulative value across all backup files. Every hour, this value is reported to the Azure billing pipeline, which
aggregates this hourly usage to get your backup storage consumption at the end of each month. If a database is
deleted, backup storage consumption will gradually decrease as older backups age out and are deleted. Because
differential backups and log backups require an earlier full backup to be restorable, all three backup types are
purged together in weekly sets. Once all backups are deleted, billing stops.
As a simplified example, assume a database has accumulated 744 GB of backup storage and that this amount
stays constant throughout an entire month because the database is completely idle. To convert this cumulative
storage consumption to hourly usage, divide it by 744.0 (31 days per month * 24 hours per day). SQL Database
will report to Azure billing pipeline that the database consumed 1 GB of PITR backup each hour, at a constant
rate. Azure billing will aggregate this consumption and show a usage of 744 GB for the entire month. The cost
will be based on the amount/GB/month rate in your region.
Now, a more complex example. Suppose the same idle database has its retention increased from seven days to
14 days in the middle of the month. This increase results in the total backup storage doubling to 1,488 GB. SQL
Database would report 1 GB of usage for hours 1 through 372 (the first half of the month). It would report the
usage as 2 GB for hours 373 through 744 (the second half of the month). This usage would be aggregated to a
final bill of 1,116 GB/month.
Actual backup billing scenarios are more complex. Because the rate of changes in the database depends on the
workload and is variable over time, the size of each differential and log backup will vary as well, causing the
hourly backup storage consumption to fluctuate accordingly. Furthermore, each differential backup contains all
changes made in the database since the last full backup, thus the total size of all differential backups gradually
increases over the course of a week, and then drops sharply once an older set of full, differential, and log
backups ages out. For example, if a heavy write activity such as index rebuild has been run just after a full
backup completed, then the modifications made by the index rebuild will be included in the transaction log
backups taken over the duration of rebuild, in the next differential backup, and in every differential backup taken
until the next full backup occurs. For the latter scenario in larger databases, an optimization in the service
creates a full backup instead of a differential backup if a differential backup would be excessively large
otherwise. This reduces the size of all differential backups until the following full backup.
You can monitor total backup storage consumption for each backup type (full, differential, transaction log) over
time as described in Monitor consumption.
Backup storage redundancy
Backup storage redundancy impacts backup costs in the following way:
locally redundant price = x
zone-redundant price = 1.25x
geo-redundant price = 2x
geo-zone redundant price = 3.4x
For more details about backup storage pricing visit Azure SQL Database pricing page and Azure SQL Managed
Instance pricing page.

IMPORTANT
Backup storage redundancy for Hyperscale can only be set during database creation. This setting cannot be modified once
the resource is provisioned. Database copy process can be used to update the backup storage redundancy settings for an
existing Hyperscale database. Learn more in Hyperscale backups and storage redundancy.

Monitor costs
To understand backup storage costs, go to Cost Management + Billing in the Azure portal, select Cost
Management , and then select Cost analysis . Select the desired subscription as the Scope , and then filter for
the time period and service that you're interested in as follows:
1. Add a filter for Ser vice name .
2. In the drop-down list select sql database for a single database or an elastic database pool, or select sql
managed instance for managed instance.
3. Add another filter for Meter subcategor y .
4. To monitor PITR backup costs, in the drop-down list select single/elastic pool pitr backup storage for a
single database or an elastic database pool, or select managed instance pitr backup storage for
managed instance. Meters only show up if there is consumption.
5. To monitor LTR backup costs, in the drop-down list select ltr backup storage for a single database or an
elastic database pool, or select sql managed instance - ltr backup storage for managed instance. Meters
only show up if there is consumption.
The Storage and compute subcategories might interest you as well, but they're not associated with backup
storage costs.

IMPORTANT
Meters are only visible for counters that are currently in use. If a counter is not available, it is likely that the category is
not currently being used. For example, managed instance counters will not be present for customers who do not have a
managed instance deployed. Likewise, storage counters will not be visible for resources that are not consuming storage.
For example, if there is no PITR or LTR backup storage consumption, these meters won't be shown.

For more information, see Azure SQL Database cost management.

Encrypted backups
If your database is encrypted with TDE, backups are automatically encrypted at rest, including LTR backups. All
new databases in Azure SQL are configured with TDE enabled by default. For more information on TDE, see
Transparent Data Encryption with SQL Database & SQL Managed Instance.

Backup integrity
On an ongoing basis, the Azure SQL engineering team automatically tests the restore of automated database
backups. (This testing is not currently available in SQL Managed Instance. You should schedule DBCC CHECKDB
on your databases in SQL Managed Instance, scheduled around your workload.)
Upon point-in-time restore, databases also receive DBCC CHECKDB integrity checks.
Any issues found during the integrity check will result in an alert to the engineering team. For more information,
see Data Integrity in SQL Database.
All database backups are taken with the CHECKSUM option to provide additional backup integrity.

Compliance
When you migrate your database from a DTU-based service tier to a vCore-based service tier, the PITR retention
is preserved to ensure that your application's data recovery policy isn't compromised. If the default retention
doesn't meet your compliance requirements, you can change the PITR retention period. For more information,
see Change the PITR backup retention period.
NOTE
This article provides steps about how to delete personal data from the device or service and can be used to support your
obligations under the GDPR. For general information about GDPR, see the GDPR section of the Microsoft Trust Center
and the GDPR section of the Service Trust portal.

Change the short-term retention policy


You can change the default PITR backup retention period and the differential backup frequency by using the
Azure portal, PowerShell, or the REST API. The following examples illustrate how to change the PITR retention to
28 days and the differential backup frequency to a 24-hour interval.

WARNING
If you reduce the current retention period, you lose the ability to restore to points in time older than the new retention
period. Backups that are no longer needed to provide PITR within the new retention period are deleted. If you increase
the current retention period, you do not immediately gain the ability to restore to older points in time within the new
retention period. You gain that ability over time, as the system starts to retain backups for longer.

NOTE
These APIs will affect only the PITR retention period. If you configured LTR for your database, it won't be affected. For
information about how to change LTR retention periods, see Long-term retention.

Change the short-term retention policy using the Azure portal


To change the PITR backup retention period or the differential backup frequency for active databases by using
the Azure portal, go to the server or managed instance with the databases whose retention period you want to
change. Select Backups in the left pane, then select the Retention policies tab. Select the database(s) for
which you want to change the PITR backup retention. Then select Configure retention from the action bar.
SQL Database
SQL Managed Instance
Change the short-term retention policy using Azure CLI
Prepare your environment for the Azure CLI.
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

SQL Database
SQL Managed Instance

Change the PITR backup retention and differential backup frequency for active Azure SQL Databases by using
the following example.
# Set new PITR differential backup frequency on an active individual database
# Valid backup retention must be between 1 and 35 days
# Valid differential backup frequency must be either 12 or 24
az sql db str-policy set \
--resource-group myresourcegroup \
--server myserver \
--name mydb \
--retention-days 28 \
--diffbackup-hours 24

Change the short-term retention policy using PowerShell

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell AzureRM module is still supported by SQL Database and SQL Managed Instance, but all future
development is for the Az.Sql module. For more information, see AzureRM.Sql. The arguments for the commands in the
Az module are substantially identical to those in the AzureRm modules.

SQL Database
SQL Managed Instance

To change the PITR backup retention and differential backup frequency for active Azure SQL Databases, use the
following PowerShell example.

# Set new PITR backup retention period on an active individual database
# Valid backup retention must be between 1 and 35 days
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName resourceGroup -ServerName testserver `
    -DatabaseName testDatabase -RetentionDays 28

# Set new PITR differential backup frequency on an active individual database
# Valid differential backup frequency must be either 12 or 24
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName resourceGroup -ServerName testserver `
    -DatabaseName testDatabase -RetentionDays 28 -DiffBackupIntervalInHours 24

Change the short-term retention policy using the REST API


The following request updates the retention period to 28 days and also sets the differential backup frequency to 24
hours.

SQL Database
SQL Managed Instance

Sample Request

PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-
444444444444/resourceGroups/resourceGroup/providers/Microsoft.Sql/servers/testserver/databases/testDatabase/
backupShortTermRetentionPolicies/default?api-version=2021-02-01-preview

Request Body
{
  "properties": {
    "retentionDays": 28,
    "diffBackupIntervalInHours": 24
  }
}

Sample Response:

{
  "id": "/subscriptions/00000000-1111-2222-3333-444444444444/providers/Microsoft.Sql/resourceGroups/resourceGroup/servers/testserver/databases/testDatabase/backupShortTermRetentionPolicies/default",
  "name": "default",
  "type": "Microsoft.Sql/resourceGroups/servers/databases/backupShortTermRetentionPolicies",
  "properties": {
    "retentionDays": 28,
    "diffBackupIntervalInHours": 24
  }
}
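If you prefer to issue this REST call from a script rather than a raw HTTP client, a minimal sketch using the Invoke-AzRestMethod cmdlet from the Az.Accounts module follows; the subscription, resource group, server, and database names are the placeholders from the sample above.

# PUT the short-term retention policy via ARM, using the same request body as the sample above
$body = '{ "properties": { "retentionDays": 28, "diffBackupIntervalInHours": 24 } }'
Invoke-AzRestMethod -Method PUT -Payload $body `
    -Path "/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/resourceGroup/providers/Microsoft.Sql/servers/testserver/databases/testDatabase/backupShortTermRetentionPolicies/default?api-version=2021-02-01-preview"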

For more information, see Backup Retention REST API.

Hyperscale backups and storage redundancy


Hyperscale databases in Azure SQL Database use a unique architecture with highly scalable storage and
compute performance tiers.
Hyperscale backups are snapshot based and are nearly instantaneous. The generated transaction log is stored in
long-term Azure storage for the backup retention period. The Hyperscale architecture does not use full database
backups or log backups, so the backup frequency, storage costs, scheduling, storage redundancy, and restore
capabilities described in the previous sections of this article do not apply.
Hyperscale backup and restore performance
Storage and compute separation enables Hyperscale to push down backup and restore operations to the storage
layer to reduce the processing burden on the primary compute replica. As a result, database backups don't
impact the performance of the primary compute node.
Backup and restore operations for Hyperscale databases are fast regardless of data size due to the use of
storage snapshots. A database can be restored to any point in time within its backup retention period. Point in
time recovery (PITR) is achieved by reverting to file snapshots, and as such is not a size of data operation.
Restore of a Hyperscale database within the same Azure region is a constant-time operation, and even multiple-
terabyte databases can be restored in minutes instead of hours or days. Creation of new databases by restoring
an existing backup or copying the database also takes advantage of this feature: creating database copies for
development or testing purposes, even of multi-terabyte databases, is doable in minutes within the same region
when the same storage type is used.
Hyperscale backup retention
Default short-term backup retention (STR) for Hyperscale databases is 7 days; long-term retention (LTR) policies
aren't currently supported.

NOTE
Short-term backup retention up to 35 days for Hyperscale databases is now in preview.

Hyperscale backup scheduling


There are no traditional full, differential, and transaction log backups for Hyperscale databases. Instead, regular
storage snapshots of data files are taken. The generated transaction log is retained as-is for the configured
retention period. At restore time, relevant transaction log records are applied to the restored storage snapshot,
resulting in a transactionally-consistent database without any data loss as of the specified point in time within
the retention period.
Hyperscale backup storage costs
Hyperscale backup storage cost depends on the choice of region and backup storage redundancy. It also
depends on the workload type. Write-heavy workloads are more likely to change data pages frequently, which
results in larger storage snapshots. Such workloads also generate more transaction log, contributing to the
overall backup costs. Backup storage is charged per GB/month consumed, for pricing details see the Azure SQL
Database pricing page.
For Hyperscale, billable backup storage is calculated as follows:
Total billable backup storage size = (Data backup storage size + Log backup storage size)

Data storage size is not included in the billable backup as it is already billed as allocated database storage.
Deleted Hyperscale databases incur backup costs to support recovery to a point in time before deletion. For a
deleted Hyperscale database, billable backup storage is calculated as follows:
Total billable backup storage size for deleted Hyperscale database = (Data storage size + Data backup size +
Log backup storage size) * (remaining backup retention period after deletion/configured backup retention
period)

Data storage size is included in the formula because allocated database storage is not billed separately for a
deleted database. For a deleted database, data is stored post deletion to enable recovery during the configured
backup retention period. Billable backup storage for a deleted database reduces gradually over time after it is
deleted. It becomes zero when backups are no longer retained, and recovery is no longer possible. However if it
is a permanent deletion and backups are no longer needed, to optimize costs you can reduce retention before
deleting the database.
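As a hypothetical illustration of the formula above: a deleted Hyperscale database with 1 TB of data storage, 0.5 TB of data backup storage, and 0.2 TB of log backup storage, configured with a 7-day retention period, would be billed for roughly (1 + 0.5 + 0.2) * (4/7) ≈ 0.97 TB of backup storage at the point where 4 days of the retention period remain, gradually dropping to zero as the retention period expires.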
Hyperscale monitor backup consumption
In Hyperscale, data backup storage size (snapshot backup size), data storage size (database size), and log backup
storage size (transaction log backup size) are reported via Azure Monitor metrics.
To view backup and data storage metrics in the Azure portal, follow these steps (a scripted alternative is sketched
after the steps):
1. Go to the Hyperscale database for which you'd like to monitor backup and data storage metrics.
2. Select the Metrics page in the Monitoring section.
3. From the Metric drop-down list, select the Data backup Storage and Log Backup Storage metrics with
an appropriate aggregation rule.
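As a scripted alternative, the same metrics can be read through the Az.Monitor PowerShell module. This is a minimal sketch: the resource ID is a placeholder, and the exact backup storage metric names should be taken from the definitions list returned by the first command.

# List the metrics available on the database to find the backup storage metric names
$dbResourceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/myserver/databases/mydb"
Get-AzMetricDefinition -ResourceId $dbResourceId

# Then query the chosen metric; replace the placeholder with a name from the list above
Get-AzMetric -ResourceId $dbResourceId -MetricName "<data or log backup storage metric>" -AggregationType Average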
Reduce backup storage consumption
Backup storage consumption for a Hyperscale database depends on the retention period, choice of region,
backup storage redundancy and workload type. Consider some of the following tuning techniques to reduce
your backup storage consumption for a Hyperscale database:
Reduce the backup retention period to the minimum possible for your needs.
Avoid doing large write-operations, such as index maintenance, more frequently than you need to. For index
maintenance recommendations, see Optimize index maintenance to improve query performance and reduce
resource consumption.
For large data-load operations, consider using data compression when appropriate.
Use the tempdb database instead of permanent tables in your application logic to store temporary results
and/or transient data.
Use locally-redundant or zone-redundant backup storage when geo-restore capability is unnecessary (for
example: dev/test environments).
Hyperscale storage redundancy applies to both data storage and backup storage
Hyperscale supports configurable storage redundancy. When creating a Hyperscale database, you can choose
your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or
locally redundant storage (LRS) Azure standard storage. The selected storage redundancy option is used for the
lifetime of the database for both data storage redundancy and backup storage redundancy.
Consider storage redundancy carefully when you create a Hyperscale database
Backup storage redundancy for Hyperscale databases can only be set during database creation. This setting
cannot be modified once the resource is provisioned. Geo-restore is only available when geo-redundant storage
(RA-GRS) has been chosen for backup storage redundancy. The database copy process can be used to update
the storage redundancy settings for an existing Hyperscale database. Copying a database to a different storage
type will be a size-of-data operation. Find example code in configure backup storage redundancy.

IMPORTANT
Zone-redundant storage is currently only available in certain regions.
Restoring a Hyperscale database to a different region
If you need to restore a Hyperscale database in Azure SQL Database to a region other than the one it's currently
hosted in, as part of a disaster recovery operation or drill, relocation, or any other reason, the primary method is
to do a geo-restore of the database. This involves exactly the same steps as what you would use to restore any
other database in SQL Database to a different region:
1. Create a server in the target region if you don't already have an appropriate server there. This server should
be owned by the same subscription as the original (source) server.
2. Follow the instructions in the geo-restore section of the page on restoring a database in Azure SQL Database
from automatic backups.

NOTE
Because the source and target are in separate regions, the database cannot share snapshot storage with the source
database as in non-geo restores, which complete quickly regardless of database size. In the case of a geo-restore of a
Hyperscale database, it will be a size-of-data operation, even if the target is in the paired region of the geo-replicated
storage. Therefore, a geo-restore will take time proportional to the size of the database being restored. If the target is in
the paired region, data transfer will be within the same region, which will be significantly faster than a cross-region data transfer,
but it will still be a size-of-data operation.

If you prefer, you can copy the database to a different region as well. Learn about Database Copy for Hyperscale.

Configure backup storage redundancy


Backup storage redundancy for databases in Azure SQL Database can be configured at the time of database
creation or can be updated for an existing database; the changes made to an existing database apply to future
backups only. The default value is geo-redundant storage. For differences in pricing between locally redundant,
zone-redundant, and geo-redundant backup storage, see the Azure SQL Database and Azure SQL Managed Instance
pricing pages. Storage redundancy
for Hyperscale databases is unique: learn more in Hyperscale backups and storage redundancy.
For Azure SQL Managed Instance, backup storage redundancy is set at the instance level and applies to all
managed databases on the instance. It can be configured when an instance is created or updated for existing
instances; changing the backup storage redundancy triggers a new full backup per database, and the change
applies to all future backups. The default storage redundancy type is geo-redundant storage (RA-GRS).
Configure backup storage redundancy by using the Azure portal
SQL Database
SQL Managed Instance

In Azure portal, you can configure the backup storage redundancy on the Create SQL Database pane. The
option is available under the Backup Storage Redundancy section.
Configure backup storage redundancy by using the Azure CLI
SQL Database
SQL Managed Instance

To configure backup storage redundancy when creating a new database, you can specify the
--backup-storage-redundancy parameter with the az sql db create command. Possible values are Geo , Zone ,
and Local . By default, all databases in Azure SQL Database use geo-redundant storage for backups. Geo-
restore is disabled if a database is created or updated with local or zone redundant backup storage.
This example creates a database in the General Purpose service tier with local backup redundancy:

az sql db create \
--resource-group myresourcegroup \
--server myserver \
--name mydb \
--tier GeneralPurpose \
--backup-storage-redundancy Local

Carefully consider the configuration option for --backup-storage-redundancy when creating a Hyperscale
database. Storage redundancy can only be specified during the database creation process for Hyperscale
databases. The selected storage redundancy option will be used for the lifetime of the database for both data
storage redundancy and backup storage redundancy. Learn more in Hyperscale backups and storage
redundancy.
Existing Hyperscale databases can be migrated to a different storage redundancy by using database copy or
point-in-time restore; sample code to copy a Hyperscale database follows in this section.
This example creates a database in the Hyperscale service tier with Zone redundancy:
az sql db create \
--resource-group myresourcegroup \
--server myserver \
--name mydb \
--tier Hyperscale \
--backup-storage-redundancy Zone

For more information, see az sql db create and az sql db update.


Except for Hyperscale and Basic tier databases, you can update the backup storage redundancy setting for an
existing database with the --backup-storage-redundancy parameter and the az sql db update command. It may
take up to 48 hours for the changes to be applied on the database. Switching from geo-redundant backup
storage to local or zone redundant storage disables geo-restore.
This example code changes the backup storage redundancy to Local .

az sql db update \
--resource-group myresourcegroup \
--server myserver \
--name mydb \
--backup-storage-redundancy Local

You cannot update the backup storage redundancy of a Hyperscale database directly. However, you can change it
using the database copy command with the --backup-storage-redundancy parameter. This example copies a
Hyperscale database to a new database using Gen5 hardware and two vCores. The new database has the backup
redundancy set to Zone .

az sql db copy \
--resource-group myresourcegroup \
--server myserver \
--name myHSdb \
--dest-resource-group mydestresourcegroup \
--dest-server destdb \
--dest-name myHSdb \
--service-objective HS_Gen5_2 \
--read-replicas 0 \
--backup-storage-redundancy Zone

For syntax details, see az sql db copy. For an overview of database copy, visit Copy a transactionally consistent
copy of a database in Azure SQL Database.
Configure backup storage redundancy by using PowerShell
SQL Database
SQL Managed Instance

To configure backup storage redundancy when creating a new database, you can specify the
-BackupStorageRedundancy parameter with the New-AzSqlDatabase cmdlet. Possible values are Geo , Zone , and
Local . By default, all databases in Azure SQL Database use geo-redundant storage for backups. Geo-restore is
disabled if a database is created with local or zone redundant backup storage.
This example creates a database in the General Purpose service tier with local backup redundancy:

# Create a new database with locally redundant backup storage.
New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database03" `
    -Edition "GeneralPurpose" -Vcore 2 -ComputeGeneration "Gen5" -BackupStorageRedundancy Local
Carefully consider the configuration option for -BackupStorageRedundancy when creating a Hyperscale
database. Storage redundancy can only be specified during the database creation process for Hyperscale
databases. The selected storage redundancy option will be used for the lifetime of the database for both data
storage redundancy and backup storage redundancy. Learn more in Hyperscale backups and storage
redundancy.
Existing Hyperscale databases can be migrated to a different storage redundancy by using database copy or
point-in-time restore; sample code to copy a Hyperscale database follows in this section.
This example creates a database in the Hyperscale service tier with Zone redundancy:

# Create a new database with zone-redundant backup storage.
New-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database03" `
    -Edition "Hyperscale" -Vcore 2 -ComputeGeneration "Gen5" -BackupStorageRedundancy Zone

For syntax details visit New-AzSqlDatabase.


Except for Hyperscale and Basic tier databases, you can use the -BackupStorageRedundancy parameter with the
Set-AzSqlDatabase cmdlet to update the backup storage redundancy setting for an existing database. Possible
values are Geo, Zone, and Local. It may take up to 48 hours for the changes to be applied on the database.
Switching from geo-redundant backup storage to local or zone redundant storage disables geo-restore.
This example code changes the backup storage redundancy to Local .

# Change the backup storage redundancy for Database01 to locally redundant.
Set-AzSqlDatabase -ResourceGroupName "ResourceGroup01" -DatabaseName "Database01" -ServerName "Server01" `
    -BackupStorageRedundancy Local

For details, visit Set-AzSqlDatabase.


Backup storage redundancy of an existing Hyperscale database cannot be updated. However, you can use the
database copy command to create a copy of the database and use the -BackupStorageRedundancy parameter to
update the backup storage redundancy. This example copies a Hyperscale database to a new database using
Gen5 hardware and two vCores. The new database has the backup redundancy set to Zone .

# Copy HSSourceDB to a new database with zone-redundant backup storage.
New-AzSqlDatabaseCopy -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "HSSourceDB" `
    -CopyResourceGroupName "DestResourceGroup" -CopyServerName "DestServer" -CopyDatabaseName "HSDestDB" `
    -Vcore 2 -ComputeGeneration "Gen5" -ComputeModel Provisioned -BackupStorageRedundancy Zone

For syntax details, visit New-AzSqlDatabaseCopy.


For an overview of database copy, visit Copy a transactionally consistent copy of a database in Azure SQL
Database.

NOTE
To use the -BackupStorageRedundancy parameter with database restore, database copy, or create secondary operations, use
Azure PowerShell Az.Sql module version 2.11.0 or later.

Use Azure Policy to enforce backup storage redundancy


If you have data residency requirements that require you to keep all your data in a single Azure region, you may
want to enforce zone-redundant or locally redundant backups for your SQL Database or Managed Instance
using Azure Policy. Azure Policy is a service that you can use to create, assign, and manage policies that apply
rules to Azure resources. Azure Policy helps you to keep these resources compliant with your corporate
standards and service level agreements. For more information, see Overview of Azure Policy.
Built-in backup storage redundancy policies
The following built-in policies can be assigned at the subscription or resource group level to block the creation of
new databases or instances with geo-redundant backup storage:
SQL Database should avoid using GRS backup redundancy
SQL Managed Instances should avoid using GRS backup redundancy
A full list of built-in policy definitions for SQL Database and Managed Instance can be found here.
To enforce data residency requirements at an organizational level, these policies can be assigned to a
subscription. After these policies are assigned at a subscription level, users in the given subscription will not be
able to create a database or a managed instance with geo-redundant backup storage via Azure portal or Azure
PowerShell.

IMPORTANT
Azure policies are not enforced when creating a database via T-SQL. To enforce data residency when creating a database
using T-SQL, use 'LOCAL' or 'ZONE' as the input to the BACKUP_STORAGE_REDUNDANCY parameter in the CREATE DATABASE
statement.

Learn how to assign policies using the Azure portal or Azure PowerShell.
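As an illustration, the following PowerShell sketch assigns the SQL Database built-in policy at the subscription
scope. The display name used in the lookup is assumed to match the built-in definition listed above, and depending
on your Az.Resources version the display name may be exposed as $_.DisplayName instead of $_.Properties.DisplayName;
verify both before assigning.

# Look up the built-in policy definition by its display name (assumed to match the built-in definition above).
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq "SQL Database should avoid using GRS backup redundancy" }

# Assign the policy at the subscription scope to block new databases with geo-redundant backup storage.
$subscriptionId = (Get-AzContext).Subscription.Id
New-AzPolicyAssignment -Name "deny-grs-backup-sqldb" `
    -DisplayName "Deny GRS backup redundancy for SQL Database" `
    -Scope "/subscriptions/$subscriptionId" `
    -PolicyDefinition $definition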

Next steps
Database backups are an essential part of any business continuity and disaster recovery strategy because
they protect your data from accidental corruption or deletion. To learn about the other SQL Database
business continuity solutions, see Business continuity overview.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob storage by using the Azure portal, see Manage long-term backup retention by using
the Azure portal.
For information about how to configure, manage, and restore from long-term retention of automated
backups in Azure Blob storage by using PowerShell, see Manage long-term backup retention by using
PowerShell.
Get more information about how to restore a database to a point in time by using the Azure portal.
Get more information about how to restore a database to a point in time by using PowerShell.
To learn all about backup storage consumption on Azure SQL Managed Instance, see Backup storage
consumption on Managed Instance explained.
To learn how to fine-tune backup storage retention and costs for Azure SQL Managed Instance, see Fine
tuning backup storage costs on Managed Instance.
Accelerated Database Recovery in Azure SQL
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Accelerated Database Recovery (ADR) is a SQL Server database engine feature that greatly improves
database availability, especially in the presence of long running transactions, by redesigning the SQL Server
database engine recovery process.
ADR is currently available for Azure SQL Database, Azure SQL Managed Instance, databases in Azure Synapse
Analytics, and SQL Server on Azure VMs starting with SQL Server 2019. For information on ADR in SQL Server,
see Manage accelerated database recovery.

NOTE
ADR is enabled by default in Azure SQL Database and Azure SQL Managed Instance. Disabling ADR in Azure SQL
Database and Azure SQL Managed Instance is not supported.

Overview
The primary benefits of ADR are:
Fast and consistent database recovery
With ADR, long running transactions do not impact the overall recovery time, enabling fast and
consistent database recovery irrespective of the number of active transactions in the system or their
sizes.
Instantaneous transaction rollback
With ADR, transaction rollback is instantaneous, irrespective of the time that the transaction has been
active or the number of updates that it has performed.
Aggressive log truncation
With ADR, the transaction log is aggressively truncated, even in the presence of active long-running
transactions, which prevents it from growing out of control.

Standard database recovery process


Database recovery follows the ARIES recovery model and consists of three phases, which are illustrated in the
following diagram and explained in more detail following the diagram.
Analysis phase
Forward scan of the transaction log from the beginning of the last successful checkpoint (or the oldest
dirty page LSN) until the end, to determine the state of each transaction at the time the database stopped.
Redo phase
Forward scan of the transaction log from the oldest uncommitted transaction until the end, to bring the
database to the state it was at the time of the crash by redoing all committed operations.
Undo phase
For each transaction that was active as of the time of the crash, traverses the log backwards, undoing the
operations that this transaction performed.
Based on this design, the time it takes the SQL Server database engine to recover from an unexpected restart is
(roughly) proportional to the size of the longest active transaction in the system at the time of the crash.
Recovery requires a rollback of all incomplete transactions. The length of time required is proportional to the
work that the transaction has performed and the time it has been active. Therefore, the recovery process can
take a long time in the presence of long-running transactions (such as large bulk insert operations or index
build operations against a large table).
Also, cancelling/rolling back a large transaction based on this design can also take a long time as it is using the
same Undo recovery phase as described above.
In addition, the SQL Server database engine cannot truncate the transaction log when there are long-running
transactions because their corresponding log records are needed for the recovery and rollback processes. As a
result of this design of the SQL Server database engine, some customers used to face the problem that the size
of the transaction log grows very large and consumes huge amounts of drive space.

The Accelerated Database Recovery process


ADR addresses the above issues by completely redesigning the SQL Server database engine recovery process
to:
Make it constant time/instant by avoiding having to scan the log from/to the beginning of the oldest active
transaction. With ADR, the transaction log is only processed from the last successful checkpoint (or oldest
dirty page Log Sequence Number (LSN)). As a result, recovery time is not impacted by long running
transactions.
Minimize the required transaction log space since there is no longer a need to process the log for the whole
transaction. As a result, the transaction log can be truncated aggressively as checkpoints and backups occur.
At a high level, ADR achieves fast database recovery by versioning all physical database modifications and only
undoing logical operations, which are limited and can be undone almost instantly. Any transaction that was
active as of the time of a crash are marked as aborted and, therefore, any versions generated by these
transactions can be ignored by concurrent user queries.
The ADR recovery process has the same three phases as the current recovery process. How these phases
operate with ADR is illustrated in the following diagram and explained in more detail following the diagram.

Analysis phase
The process remains the same as before with the addition of reconstructing SLOG and copying log
records for non-versioned operations.
Redo phase
Broken into two phases:
Phase 1
Redo from SLOG (oldest uncommitted transaction up to last checkpoint). Redo is a fast operation
as it only needs to process a few records from the SLOG.
Phase 2
Redo from Transaction Log starts from last checkpoint (instead of oldest uncommitted transaction)
Undo phase
The Undo phase with ADR completes almost instantaneously by using SLOG to undo non-versioned
operations and Persisted Version Store (PVS) with Logical Revert to perform row level version-based
Undo.

ADR recovery components


The four key components of ADR are:
Persisted version store (PVS)
The persisted version store is a new SQL Server database engine mechanism for persisting the row
versions generated in the database itself instead of the traditional tempdb version store. PVS enables
resource isolation as well as improves availability of readable secondaries.
Logical revert
Logical revert is the asynchronous process responsible for performing row-level version-based Undo -
providing instant transaction rollback and undo for all versioned operations. Logical revert is
accomplished by:
Keeping track of all aborted transactions and marking them invisible to other transactions.
Performing rollback by using PVS for all user transactions, rather than physically scanning the
transaction log and undoing changes one at a time.
Releasing all locks immediately after transaction abort. Since abort involves simply marking changes
in memory, the process is very efficient and therefore locks do not have to be held for a long time.
SLOG
SLOG is a secondary in-memory log stream that stores log records for non-versioned operations (such
as metadata cache invalidation, lock acquisitions, and so on). The SLOG is:
Low volume and in-memory
Persisted on disk by being serialized during the checkpoint process
Periodically truncated as transactions commit
Accelerates redo and undo by processing only the non-versioned operations
Enables aggressive transaction log truncation by preserving only the required log records
Cleaner
The cleaner is the asynchronous process that wakes up periodically and cleans page versions that are not
needed.

Accelerated Database Recovery (ADR) patterns


The following types of workloads benefit most from ADR:
ADR is recommended for workloads with long running transactions.
ADR is recommended for workloads that have seen cases where active transactions are causing the
transaction log to grow significantly.
ADR is recommended for workloads that have experienced long periods of database unavailability due to
long running recovery (such as unexpected service restart or manual transaction rollback).

Best practices for Accelerated Database Recovery


Avoid long-running transactions in the database. Though one objective of ADR is to speed up database
recovery due to redo of long active transactions, long-running transactions can delay version cleanup and
increase the size of the PVS.
Avoid large transactions with data definition changes or DDL operations. ADR uses a SLOG (system log
stream) mechanism to track DDL operations used in recovery. The SLOG is only used while the
transaction is active. SLOG is checkpointed, so avoiding large transactions that use SLOG can help overall
performance. These scenarios can cause the SLOG to take up more space:
Many DDLs are executed in one transaction. For example, in one transaction, rapidly creating and
dropping temp tables.
A table has a very large number of partitions/indexes that are modified. For example, a DROP TABLE
operation on such a table would require a large reservation of SLOG memory, which would delay
truncation of the transaction log and delay undo/redo operations. A workaround is to drop
the indexes individually and gradually, then drop the table. For more information on the SLOG, see
ADR recovery components.
Prevent or reduce unnecessary aborted transactions. A high abort rate will put pressure on the PVS cleaner
and lower ADR performance. The aborts may come from a high rate of deadlocks, duplicate keys, or
other constraint violations.
The sys.dm_tran_aborted_transactions DMV shows all aborted transactions on the SQL Server
instance. The nested_abort column indicates that the transaction committed but there are
portions that aborted (savepoints or nested transactions) which can block the PVS cleanup
process. For more information, see sys.dm_tran_aborted_transactions (Transact-SQL).
To activate the PVS cleanup process manually between workloads or during maintenance
windows, use sys.sp_persistent_version_cleanup . For more information, see
sys.sp_persistent_version_cleanup.
If you observe issues with storage usage, a high rate of aborted transactions, or other factors, see
Troubleshooting Accelerated Database Recovery (ADR) on SQL Server. A short PowerShell sketch for
inspecting aborted transactions and triggering PVS cleanup follows this list.
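As referenced above, a rough PowerShell sketch using Invoke-Sqlcmd from the SqlServer module might look like the
following; the server, database, and credential values are placeholders, and you should adapt authentication to
your environment.

# Requires the SqlServer module (Install-Module SqlServer). Connection values below are placeholders.
$connectionParams = @{
    ServerInstance = "myserver.database.windows.net"
    Database       = "mydb"
    Username       = "myadmin"
    Password       = "<password>"
}

# Inspect aborted transactions that may be blocking PVS cleanup.
Invoke-Sqlcmd @connectionParams -Query "SELECT * FROM sys.dm_tran_aborted_transactions;"

# Manually trigger the PVS cleanup process, for example during a maintenance window.
Invoke-Sqlcmd @connectionParams -Query "EXEC sys.sp_persistent_version_cleanup;"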

Next steps
Accelerated database recovery
Troubleshooting Accelerated Database Recovery (ADR) on SQL Server.
Recover using automated database backups - Azure
SQL Database & SQL Managed Instance
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


The following options are available for database recovery by using automated database backups. You can:
Create a new database on the same server, recovered to a specified point in time within the retention period.
Create a database on the same server, recovered to the deletion time for a deleted database.
Create a new database on any server in the same region, recovered to the point of the most recent backups.
Create a new database on any server in any other region, recovered to the point of the most recent replicated
backups. Cross-region and cross-subscription point-in-time restore for SQL Managed Instance isn't currently
supported.
If you configured backup long-term retention, you can also create a new database from any long-term retention
backup on any server.

IMPORTANT
You can't overwrite an existing database during restore.

When you're using the Standard or Premium service tier, your database restore might incur an extra storage
cost. The extra cost is incurred when the maximum size of the restored database is greater than the amount of
storage included with the target database's service tier and performance level. For pricing details of extra
storage, see the SQL Database pricing page. If the actual amount of used space is less than the amount of
storage included, you can avoid this extra cost by setting the maximum database size to the included amount.

Recovery time
The recovery time to restore a database by using automated database backups is affected by several factors:
The size of the database.
The compute size of the database.
The number of transaction logs involved.
The amount of activity that needs to be replayed to recover to the restore point.
The network bandwidth if the restore is to a different region.
The number of concurrent restore requests being processed in the target region.
For a large or very active database, the restore might take several hours. If there is a prolonged outage in a
region, it's possible that a high number of geo-restore requests will be initiated for disaster recovery. When
there are many requests, the recovery time for individual databases can increase. Most database restores finish
in less than 12 hours.
For a single subscription, there are limitations on the number of concurrent restore requests. These limitations
apply to any combination of point-in-time restores, geo-restores, and restores from long-term retention backup.
TIP
For Azure SQL Managed Instance, system updates take precedence over database restores in progress. If a system update
occurs on the managed instance, all pending restores are suspended and then resumed once the update has been applied.
This system behavior might prolong the time of restores and might be especially impactful to long-running restores. To
achieve a predictable time of database restores, consider configuring a maintenance window that allows scheduling of
system updates at a specific day and time, and consider running database restores outside of the scheduled maintenance
window.

| Deployment option | Max # of concurrent requests being processed | Max # of concurrent requests being submitted |
| --- | --- | --- |
| Single database (per subscription) | 30 | 100 |
| Elastic pool (per pool) | 4 | 2000 |

There isn't a built-in method to restore the entire server. For an example of how to accomplish this task, see
Azure SQL Database: Full server recovery.

IMPORTANT
To recover by using automated backups, you must be a member of the SQL Server Contributor role or SQL Managed
Instance Contributor role (depending on the recovery destination) in the subscription, or you must be the subscription
owner. For more information, see Azure RBAC: Built-in roles. You can recover by using the Azure portal, PowerShell, or the
REST API. You can't use Transact-SQL.

Point-in-time restore
You can restore a standalone, pooled, or instance database to an earlier point in time by using the Azure portal,
PowerShell, or the REST API. The request can specify any service tier or compute size for the restored database.
Ensure that you have sufficient resources on the server to which you are restoring the database.
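For example, a minimal PowerShell sketch of a point-in-time restore to a new database on the same server might look
like the following; the resource names, point in time, and service objective are placeholders.

# Restore mydb to a new database mydb-restored at a specific point in time (UTC) on the same server.
$database = Get-AzSqlDatabase -ResourceGroupName "myresourcegroup" -ServerName "myserver" -DatabaseName "mydb"

Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime "2022-07-01T09:00:00Z" `
    -ResourceGroupName "myresourcegroup" `
    -ServerName "myserver" `
    -TargetDatabaseName "mydb-restored" `
    -ResourceId $database.ResourceID `
    -Edition "GeneralPurpose" `
    -ServiceObjectiveName "GP_Gen5_2"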
When complete, the restore creates a new database on the same server as the original database. The restored
database is charged at normal rates, based on its service tier and compute size. You don't incur charges until the
database restore is complete.
You generally restore a database to an earlier point for recovery purposes. You can treat the restored database
as a replacement for the original database or use it as a data source to update the original database.

IMPORTANT
You can only restore on the same server; cross-server restore is not supported by point-in-time restore.

Database replacement
If you intend the restored database to be a replacement for the original database, you should specify the
original database's compute size and service tier. You can then rename the original database and give the
restored database the original name by using the ALTER DATABASE command in T-SQL (see the PowerShell
sketch after this list).
Data recovery
If you plan to retrieve data from the restored database to recover from a user or application error, you
need to write and execute a data recovery script that extracts data from the restored database and applies it
to the original database. Although the restore operation may take a long time to complete, the restoring
database is visible in the database list throughout the restore process. If you delete the database during
the restore, the restore operation will be canceled and you will not be charged for the database that did
not complete the restore.
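As mentioned under Database replacement, one way to perform the rename swap is to run the T-SQL from PowerShell with
Invoke-Sqlcmd (SqlServer module). This is a rough sketch; the database names and credentials are placeholders, and the
commands run against the master database of the logical server.

# Swap the original and restored database names (run in the master database of the logical server).
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "master" `
    -Username "myadmin" -Password "<password>" -Query @"
ALTER DATABASE [mydb] MODIFY NAME = [mydb_old];
ALTER DATABASE [mydb_restored] MODIFY NAME = [mydb];
"@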
Point-in-time restore by using Azure portal
You can recover a single or instance database to a point in time from the overview blade of the database you
want to restore in the Azure portal.
SQL Database
To recover a database to a point in time by using the Azure portal, open the database overview page and select
Restore on the toolbar. Choose the backup source, and select the point-in-time backup point from which a new
database will be created.

SQL Managed Instance


To recover a managed instance database to a point in time by using the Azure portal, open the database
overview page, and select Restore on the toolbar. Choose the point-in-time backup point from which a new
database will be created.
TIP
To programmatically restore a database from a backup, see Programmatic recovery using automated backups.

Deleted database restore


You can restore a deleted database to the deletion time, or an earlier point in time, on the same server or the
same managed instance. You can accomplish this through the Azure portal, PowerShell, or the REST API
(createMode=Restore). You restore a deleted database by creating a new database from the backup.

IMPORTANT
If you delete a server or managed instance, all its databases are also deleted and can't be recovered. You can't restore a
deleted server or managed instance.

Deleted database restore by using the Azure portal


You restore deleted databases from the Azure portal from the server or managed instance resource.

TIP
It may take several minutes for recently deleted databases to appear on the Deleted databases page in Azure portal, or
when displaying deleted databases programmatically.

SQL Database
To recover a deleted database to the deletion time by using the Azure portal, open the server overview page,
and select Deleted databases. Select a deleted database that you want to restore, and type the name for the
new database that will be created with data restored from the backup.

SQL Managed Instance


To recover a managed database by using the Azure portal, open the managed instance overview page, and
select Deleted databases. Select a deleted database that you want to restore, and type the name for the new
database that will be created with data restored from the backup.
Deleted database restore by using PowerShell
Use the following sample scripts to restore a deleted database for either SQL Database or SQL Managed
Instance by using PowerShell.
SQL Database
For a sample PowerShell script showing how to restore a deleted database in Azure SQL Database, see Restore a
database using PowerShell.
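As a rough sketch (resource names are placeholders), the restore combines Get-AzSqlDeletedDatabaseBackup with
Restore-AzSqlDatabase:

# Find the deleted database backup and restore it as a new database on the same server.
$deleted = Get-AzSqlDeletedDatabaseBackup -ResourceGroupName "myresourcegroup" -ServerName "myserver" -DatabaseName "mydeleteddb"

Restore-AzSqlDatabase -FromDeletedDatabaseBackup `
    -DeletionDate $deleted.DeletionDate `
    -ResourceGroupName "myresourcegroup" `
    -ServerName "myserver" `
    -TargetDatabaseName "mydeleteddb-restored" `
    -ResourceId $deleted.ResourceID `
    -Edition "GeneralPurpose" `
    -ServiceObjectiveName "GP_Gen5_2"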
SQL Managed Instance
For a sample PowerShell script showing how to restore a deleted instance database, see Restore deleted
instance database using PowerShell.

TIP
To programmatically restore a deleted database, see Programmatically performing recovery using automated backups.

Geo-restore
IMPORTANT
Geo-restore is available only for SQL databases or managed instances configured with geo-redundant backup storage.
If you are not currently using geo-replicated backups for a database, you can change this by configuring backup
storage redundancy.
Geo-restore can be performed on SQL databases or managed instances residing in the same subscription only.

You can restore a database on any SQL Database server or an instance database on any managed instance in
any Azure region from the most recent geo-replicated backups. Geo-restore uses a geo-replicated backup as its
source. You can request geo-restore even if the database or datacenter is inaccessible due to an outage.
Geo-restore is the default recovery option when your database is unavailable because of an incident in the
hosting region. You can restore the database to a server in any other region. There is a delay between when a
backup is taken and when it is geo-replicated to an Azure blob in a different region. As a result, the restored
database can be up to one hour behind the original database. The following illustration shows a database
restore from the last available backup in another region.
Geo-restore by using the Azure portal
From the Azure portal, you create a new single or managed instance database and select an available geo-
restore backup. The newly created database contains the geo-restored backup data.
SQL Database
To geo-restore a single database from the Azure portal in the region and server of your choice, follow these
steps:
1. From Dashboard, select Add > Create SQL Database. On the Basics tab, enter the required
information.
2. Select Additional settings.
3. For Use existing data, select Backup.
4. For Backup, select a backup from the list of available geo-restore backups.

Complete the process of creating a new database from the backup. When you create a database in Azure SQL
Database, it contains the restored geo-restore backup.
SQL Managed Instance
To geo-restore a managed instance database from the Azure portal to an existing managed instance in a region
of your choice, select a managed instance on which you want a database to be restored. Follow these steps:
1. Select New database.
2. Type a desired database name.
3. Under Use existing data, select Backup.
4. Select a backup from the list of available geo-restore backups.

Complete the process of creating a new database. When you create the instance database, it contains the
restored geo-restore backup.
Geo-restore by using PowerShell
SQL Database
For a PowerShell script that shows how to perform geo-restore for a single database, see Use PowerShell to
restore a single database to an earlier point in time.
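As a rough sketch (resource names are placeholders), a geo-restore in PowerShell uses the most recent geo-replicated
backup of the source database as its source:

# Get the geo-replicated backup of the source database and restore it to a server in another region.
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "myresourcegroup" -ServerName "myserver" -DatabaseName "mydb"

Restore-AzSqlDatabase -FromGeoBackup `
    -ResourceGroupName "targetresourcegroup" `
    -ServerName "targetserver" `
    -TargetDatabaseName "mydb-georestored" `
    -ResourceId $geoBackup.ResourceID `
    -Edition "GeneralPurpose" `
    -ServiceObjectiveName "GP_Gen5_2"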
SQL Managed Instance
For a PowerShell script that shows how to perform geo-restore for a managed instance database, see Use
PowerShell to restore a managed instance database to another geo-region.
Geo-restore considerations
You can't perform a point-in-time restore on a geo-secondary database. You can do so only on a primary
database. For detailed information about using geo-restore to recover from an outage, see Recover from an
outage.

IMPORTANT
Geo-restore is the most basic disaster-recovery solution available in SQL Database and SQL Managed Instance. It relies
on automatically created geo-replicated backups with a recovery point objective (RPO) up to 1 hour and an estimated
recovery time of up to 12 hours. It doesn't guarantee that the target region will have the capacity to restore your
databases after a regional outage, because a sharp increase of demand is likely. If your application uses relatively small
databases and is not critical to the business, geo-restore is an appropriate disaster-recovery solution.
For business-critical applications that require large databases and must ensure business continuity, use Auto-failover
groups. It offers a much lower RPO and recovery time objective, and the capacity is always guaranteed.
For more information about business continuity choices, see Overview of business continuity.

Programmatic recovery using automated backups


You can also use Azure PowerShell or the REST API for recovery. The following tables describe the set of
commands available.
PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by SQL Database and SQL Managed Instance, but all
future development is for the Az.Sql module. For these cmdlets, see AzureRM.Sql. Arguments for the commands in the Az
module and in Azure Resource Manager modules are to a great extent identical.

NOTE
Restore points represent a period between the earliest restore point and the latest log backup point. Information on the
latest restore point is currently unavailable in Azure PowerShell.

SQL Database
To restore a standalone or pooled database, see Restore-AzSqlDatabase.

| Cmdlet | Description |
| --- | --- |
| Get-AzSqlDatabase | Gets one or more databases. |
| Get-AzSqlDeletedDatabaseBackup | Gets a deleted database that you can restore. |
| Get-AzSqlDatabaseGeoBackup | Gets a geo-redundant backup of a database. |
| Restore-AzSqlDatabase | Restores a database. |

TIP
For a sample PowerShell script that shows how to perform a point-in-time restore of a database, see Restore a database
by using PowerShell.

SQL Managed Instance


To restore a managed instance database, see Restore-AzSqlInstanceDatabase.

| Cmdlet | Description |
| --- | --- |
| Get-AzSqlInstance | Gets one or more managed instances. |
| Get-AzSqlInstanceDatabase | Gets an instance database. |
| Restore-AzSqlInstanceDatabase | Restores an instance database. |

REST API
To restore a database by using the REST API:
| API | Description |
| --- | --- |
| REST (createMode=Recovery) | Restores a database. |
| Get Create or Update Database Status | Returns the status during a restore operation. |

Azure CLI
SQL Database
To restore a database by using the Azure CLI, see az sql db restore.
SQL Managed Instance
To restore a managed instance database by using the Azure CLI, see az sql midb restore.

Summary
Automatic backups protect your databases from user and application errors, accidental database deletion, and
prolonged outages. This built-in capability is available for all service tiers and compute sizes.

Next steps
Business continuity overview
SQL Database automated backups
Long-term retention
To learn about faster recovery options, see Active geo-replication or Auto-failover groups.
Long-term retention - Azure SQL Database and
Azure SQL Managed Instance
7/12/2022 • 5 minutes to read

Many applications have regulatory, compliance, or other business purposes that require you to retain database
backups beyond the 7-35 days provided by Azure SQL Database and Azure SQL Managed Instance automatic
backups. By using the long-term retention (LTR) feature, you can store specified SQL Database and SQL
Managed Instance full backups in Azure Blob storage with configured redundancy for up to 10 years. LTR
backups can then be restored as a new database.
Long-term retention can be enabled for Azure SQL Database and for Azure SQL Managed Instance. This article
provides a conceptual overview of long-term retention. To configure long-term retention, see Configure Azure
SQL Database LTR and Configure Azure SQL Managed Instance LTR.

NOTE
You can use SQL Agent jobs to schedule copy-only database backups as an alternative to LTR beyond 35 days.

How long-term retention works


Long-term backup retention (LTR) leverages the full database backups that are automatically created to enable
point in time restore (PITR). If an LTR policy is configured, these backups are copied to different blobs for long-
term storage. The copy is a background job that has no performance impact on the database workload. The LTR
policy for each database in SQL Database can also specify how frequently the LTR backups are created.
To enable LTR, you can define a policy using a combination of four parameters: weekly backup retention (W),
monthly backup retention (M), yearly backup retention (Y), and week of year (WeekOfYear). If you specify W, one
backup every week will be copied to the long-term storage. If you specify M, the first backup of each month will
be copied to the long-term storage. If you specify Y, one backup during the week specified by WeekOfYear will
be copied to the long-term storage. If the specified WeekOfYear is in the past when the policy is configured, the
first LTR backup will be created in the following year. Each backup will be kept in the long-term storage
according to the policy parameters that are configured when the LTR backup is created.

NOTE
Any change to the LTR policy applies only to future backups. For example, if weekly backup retention (W), monthly backup
retention (M), or yearly backup retention (Y) is modified, the new retention setting will only apply to new backups. The
retention of existing backups will not be modified. If your intention is to delete old LTR backups before their retention
period expires, you will need to manually delete the backups.

Examples of the LTR policy:


W=0, M=0, Y=5, WeekOfYear=3
The third full backup of each year will be kept for five years.
W=0, M=3, Y=0
The first full backup of each month will be kept for three months.
W=12, M=0, Y=0
Each weekly full backup will be kept for 12 weeks.
W=6, M=12, Y=10, WeekOfYear=20
Each weekly full backup will be kept for six weeks, except the first full backup of each month, which will be
kept for 12 months, and except the full backup taken during the 20th week of the year, which will be kept for
10 years (a PowerShell sketch of this policy follows the list).
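As a rough PowerShell sketch (resource names are placeholders), the last example policy could be applied to a database
in SQL Database with Set-AzSqlDatabaseBackupLongTermRetentionPolicy, which accepts ISO 8601 durations:

# LTR policy: weekly backups kept 6 weeks, monthly backups kept 12 months, yearly backup from week 20 kept 10 years.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy `
    -ResourceGroupName "myresourcegroup" `
    -ServerName "myserver" `
    -DatabaseName "mydb" `
    -WeeklyRetention P6W `
    -MonthlyRetention P12M `
    -YearlyRetention P10Y `
    -WeekOfYear 20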
The following table illustrates the cadence and expiration of the long-term backups for the following policy:
W=12 weeks (84 days), M=12 months (365 days), Y=10 years (3650 days), WeekOfYear=20 (week after May
13)

If you modify the above policy and set W=0 (no weekly backups), Azure only retains the monthly and yearly
backups. No weekly backups are stored under the LTR policy. The storage amount needed to keep these backups
reduces accordingly.
IMPORTANT
The timing of individual LTR backups is controlled by Azure. You cannot manually create an LTR backup or control the
timing of the backup creation. After configuring an LTR policy, it may take up to 7 days before the first LTR backup
shows up in the list of available backups.
If you delete a server or a managed instance, all databases on that server or managed instance are also deleted and can't
be recovered. You can't restore a deleted server or managed instance. However, if you had configured LTR for a database
or managed instance, LTR backups are not deleted, and they can be used to restore databases on a different server or
managed instance in the same subscription, to a point in time when an LTR backup was taken.

Geo-replication and long-term backup retention


If you're using active geo-replication or failover groups as your business continuity solution, you should prepare
for eventual failovers and configure the same LTR policy on the secondary database or instance. Your LTR
storage cost won't increase as backups aren't generated from the secondaries. The backups are only created
when the secondary becomes primary. This ensures uninterrupted generation of the LTR backups when a
failover is triggered and the primary moves to the secondary region.

NOTE
When the original primary database recovers from an outage that caused the failover, it will become a new secondary.
Therefore, the backup creation will not resume and the existing LTR policy will not take effect until it becomes the primary
again.

Configure long-term backup retention


You can configure long-term backup retention using the Azure portal and PowerShell for Azure SQL Database
and Azure SQL Managed Instance. To restore a database from the LTR storage, you can select a specific backup
based on its timestamp. The database can be restored to any existing server or managed instance under the
same subscription as the original database.
To learn how to configure long-term retention or restore a database from backup for SQL Database using the
Azure portal or PowerShell, see Manage Azure SQL Database long-term backup retention.
To learn how to configure long-term retention or restore a database from backup for SQL Managed Instance
using the Azure portal or PowerShell, see Manage Azure SQL Managed Instance long-term backup retention.

Next steps
Because database backups protect data from accidental corruption or deletion, they're an essential part of any
business continuity and disaster recovery strategy.
To learn about the other SQL Database business-continuity solutions, see Business continuity overview.
To learn about service-generated automatic backups, see automatic backups
Monitoring and performance tuning in Azure SQL
Database and Azure SQL Managed Instance
7/12/2022 • 8 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


To monitor the performance of a database in Azure SQL Database and Azure SQL Managed Instance, start by
monitoring the CPU and IO resources used by your workload relative to the level of database performance you
chose in selecting a particular service tier and performance level. To accomplish this, Azure SQL Database and
Azure SQL Managed Instance emit resource metrics that can be viewed in the Azure portal or by using one of
these SQL Server management tools:
Azure Data Studio, based on Visual Studio Code.
SQL Server Management Studio (SSMS), based on Microsoft Visual Studio.

| Monitoring solution | SQL Database | SQL Managed Instance | Requires agent on a customer-owned VM |
| --- | --- | --- | --- |
| Query Performance Insight | Yes | No | No |
| Monitor using DMVs | Yes | Yes | No |
| Monitor using query store | Yes | Yes | No |
| SQL Insights (preview) in Azure Monitor | Yes | Yes | Yes |
| Azure SQL Analytics (preview) using Azure Monitor Logs * | Yes | Yes | No |

* For solutions requiring low latency monitoring, Azure SQL Analytics (preview) is not recommended.

Database advisors in the Azure portal


Azure SQL Database provides a number of Database Advisors to provide intelligent performance tuning
recommendations and automatic tuning options to improve performance.
Additionally, the Query Performance Insight page shows you details about the queries responsible for the most
CPU and IO usage for single and pooled databases.
Query Performance Insight is available in the Azure portal in the Overview pane of your Azure SQL Database
under "Intelligent Performance". Use the automatically collected information to identify queries and begin
optimizing your workload performance.
You can also configure automatic tuning to implement these recommendations automatically, such as forcing
a query execution plan to prevent regression, or creating and dropping nonclustered indexes based on
workload patterns. Automatic tuning also is available in the Azure portal in the Overview pane of your Azure
SQL Database under "Intelligent Performance".
Azure SQL Database and Azure SQL Managed Instance provide advanced monitoring and tuning capabilities
backed by artificial intelligence to assist you in troubleshooting and maximizing the performance of your
databases and solutions. You can choose to configure the streaming export of these Intelligent Insights and
other database resource logs and metrics to one of several destinations for consumption and analysis.
Outside of the Azure portal, the database engine has its own monitoring and diagnostic capabilities that Azure
SQL Database and SQL Managed Instance leverage, such as query store and dynamic management views
(DMVs). See Monitoring using DMVs for scripts to monitor for a variety of performance issues in Azure SQL
Database and Azure SQL Managed Instance.
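For example, a minimal sketch of checking recent resource consumption with the sys.dm_db_resource_stats DMV from
PowerShell might look like the following; connection values are placeholders, and the SqlServer module provides
Invoke-Sqlcmd.

# Query recent CPU, data IO, and log write utilization for the database (about an hour of 15-second snapshots).
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydb" `
    -Username "myadmin" -Password "<password>" `
    -Query "SELECT TOP 10 end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
            FROM sys.dm_db_resource_stats ORDER BY end_time DESC;"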
Azure SQL Insights (preview) and Azure SQL Analytics (preview)
Both offerings use different pipelines to present Azure SQL Database metrics to a variety of endpoints.
Azure SQL Insights (preview) is a project inside Azure Monitor that can provide advanced insights into
Azure SQL database activity. It is deployed via a customer-managed VM using Telegraf as a collection
agent that connects to SQL sources, collects data, and moves data into Log Analytics.
Azure SQL Analytics (preview) also requires Log Analytics to provide advanced insights into Azure SQL
database activity.
Azure diagnostic telemetry is a separate, streaming source of data for Azure SQL Database and Azure
SQL Managed Instance. Not to be confused with the Azure SQL Insights (preview) product, SQLInsights is
a log inside Intelligent Insights, and is one of several packages of telemetry emitted by Azure diagnostic
settings. Diagnostic settings are a feature that contains Resource Log categories (formerly known as
Diagnostic Logs). For more information, see Diagnostic telemetry for export.
Azure SQL Analytics (preview) consumes the resource logs coming from the diagnostic telemetry
(configurable under Diagnostic Settings in the Azure portal), while Azure SQL Insights (preview)
uses a different pipeline to collect Azure SQL telemetry.
Monitoring and diagnostic telemetry
The following diagram details all the database engine, platform metrics, resource logs, and Azure activity logs
generated by Azure SQL products, how they are processed, and how they can be surfaced for analysis.
Monitor and tune Azure SQL in the Azure portal
In the Azure portal, Azure SQL Database and Azure SQL Managed Instance provide monitoring of resource
metrics. Azure SQL Database provides database advisors, and Query Performance Insight provides query tuning
recommendations and query performance analysis. In the Azure portal, you can enable automatic tuning for
logical SQL servers and their single and pooled databases.

NOTE
Databases with extremely low usage may show in the portal with less than actual usage. Due to the way telemetry is
emitted, when converting a double value to the nearest integer, certain usage amounts less than 0.5 will be rounded to 0,
which causes a loss in granularity of the emitted telemetry. For details, see Low database and elastic pool metrics
rounding to zero.

Azure SQL Database and Azure SQL Managed Instance resource monitoring
You can quickly monitor a variety of resource metrics in the Azure portal in the Metrics view. These metrics
enable you to see if a database is reaching 100% of processor, memory, or IO resources. High DTU or processor
percentage, as well as high IO percentage, indicates that your workload might need more CPU or IO resources. It
might also indicate queries that need to be optimized.
Database advisors in Azure SQL Database
Azure SQL Database includes database advisors that provide performance tuning recommendations for single
and pooled databases. These recommendations are available in the Azure portal as well as by using PowerShell.
You can also enable automatic tuning so that Azure SQL Database can automatically implement these tuning
recommendations.
Query Performance Insight in Azure SQL Database
Query Performance Insight shows the performance in the Azure portal of top consuming and longest running
queries for single and pooled databases.
Low database and elastic pool metrics rounding to zero
Starting in September 2020, databases with extremely low usage may show in the portal with less than actual
usage. Due to the way telemetry is emitted, when converting a double value to the nearest integer, certain usage
amounts less than 0.5 will be rounded to 0, which causes a loss in granularity of the emitted telemetry.
For example: Consider a 1-minute window with the following four data points: 0.1, 0.1, 0.1, 0.1, these low values
are rounded down to 0, 0, 0, 0 and present an average of 0. If any of the data points are greater than 0.5, for
example: 0.1, 0.1, 0.9, 0.1, they are rounded to 0, 0, 1, 0 and show an avg of 0.25.
Affected database metrics:
cpu_percent
log_write_percent
workers_percent
sessions_percent
physical_data_read_percent
dtu_consumption_percent
xtp_storage_percent
Affected elastic pool metrics:
cpu_percent
physical_data_read_percent
log_write_percent
memory_usage_percent
data_storage_percent
peak_worker_percent
peak_session_percent
xtp_storage_percent
allocated_data_storage_percent

Generate intelligent assessments of performance issues


Intelligent Insights for Azure SQL Database and Azure SQL Managed Instance uses built-in intelligence to
continuously monitor database usage through artificial intelligence and detect disruptive events that cause poor
performance. Intelligent Insights automatically detects performance issues with databases based on query
execution wait times, errors, or time-outs. Once detected, a detailed analysis is performed by Intelligent Insights
that generates a resource log called SQLInsights (unrelated to the Azure Monitor SQL Insights (preview)).
SQLInsights is an intelligent assessment of the issues. This assessment consists of a root cause analysis of the
database performance issue and, where possible, recommendations for performance improvements.
Intelligent Insights is a unique capability of Azure built-in intelligence that provides the following value:
Proactive monitoring
Tailored performance insights
Early detection of database performance degradation
Root cause analysis of issues detected
Performance improvement recommendations
Scale out capability on hundreds of thousands of databases
Positive impact to DevOps resources and the total cost of ownership

Enable the streaming export of metrics and resource logs


You can enable and configure the streaming export of diagnostic telemetry to one of several destinations,
including the Intelligent Insights resource log.
You configure diagnostic settings to stream categories of metrics and resource logs for single databases, pooled
databases, elastic pools, managed instances, and instance databases to one of the following Azure resources.
Log Analytics workspace in Azure Monitor
You can stream metrics and resource logs to a Log Analytics workspace in Azure Monitor. Data streamed here
can be consumed by SQL Analytics (preview), which is a cloud only monitoring solution that provides intelligent
monitoring of your databases that includes performance reports, alerts, and mitigation recommendations. Data
streamed to a Log Analytics workspace can be analyzed with other monitoring data collected and also enables
you to leverage other Azure Monitor features such as alerts and visualizations.
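As a rough sketch using the older Set-AzDiagnosticSetting cmdlet from Az.Monitor (parameter and category names may
differ by module version, and the resource IDs are placeholders), you could enable streaming of selected categories to
a Log Analytics workspace like this:

# Hypothetical resource IDs; replace with your database and Log Analytics workspace.
$databaseId  = "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.Sql/servers/myserver/databases/mydb"
$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace"

# Stream selected resource log categories and basic metrics to the Log Analytics workspace.
Set-AzDiagnosticSetting -ResourceId $databaseId `
    -WorkspaceId $workspaceId `
    -Enabled $true `
    -Category SQLInsights, Errors, Timeouts `
    -MetricCategory Basic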

NOTE
Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in
active development. Monitor your SQL deployments with SQL Insights (preview).

Azure Event Hubs


You can stream metrics and resource logs to Azure Event Hubs. Streaming diagnostic telemetry to event hubs
provides the following functionality:
Stream logs to third-party logging and telemetry systems
Stream all of your metrics and resource logs to a single event hub to pipe log data to a third-party SIEM
or log analytics tool.
Build a custom telemetry and logging platform
The highly scalable publish-subscribe nature of event hubs allows you to flexibly ingest metrics and
resource logs into a custom telemetry platform. See Designing and Sizing a Global Scale Telemetry
Platform on Azure Event Hubs for details.
View service health by streaming data to Power BI
Use Event Hubs, Stream Analytics, and Power BI to transform your diagnostics data into near real-time
insights on your Azure services. See Stream Analytics and Power BI: A real-time analytics dashboard for
streaming data for details on this solution.
Azure Storage
Stream metrics and resource logs to Azure Storage. Use Azure storage to archive vast amounts of diagnostic
telemetry for a fraction of the cost of the previous two streaming options.

Use Extended Events


Additionally, you can use Extended Events for advanced monitoring and troubleshooting in SQL Server, Azure
SQL Database, and Azure SQL Managed Instance. Extended Events is a "tracing" tool and event architecture,
superior to SQL Trace, that enables users to collect as much or as little data as is necessary to troubleshoot or
identify a performance problem, while mitigating impact to ongoing application performance. Extended Events
replace deprecated SQL Trace and SQL Server Profiler features. For information about using extended events in
Azure SQL Database, see Extended events in Azure SQL Database. In Azure SQL Database and SQL Managed
Instance, use an Event File target hosted in Azure Blob Storage.

Next steps
For more information about intelligent performance recommendations for single and pooled databases, see
Database advisor performance recommendations.
For more information about automatically monitoring database performance with automated diagnostics
and root cause analysis of performance issues, see Azure SQL Intelligent Insights.
Monitor your SQL deployments with SQL Insights (preview)
Intelligent Insights using AI to monitor and
troubleshoot database performance (preview)
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Intelligent Insights in Azure SQL Database and Azure SQL Managed Instance lets you know what is happening
with your database performance.
Intelligent Insights uses built-in intelligence to continuously monitor database usage through artificial
intelligence and detect disruptive events that cause poor performance. Once detected, a detailed analysis is
performed that generates an Intelligent Insights resource log called SQLInsights (unrelated to Azure Monitor
SQL Insights (preview)) with an intelligent assessment of the issues. This assessment consists of a root cause
analysis of the database performance issue and, where possible, recommendations for performance
improvements.

What can Intelligent Insights do for you?


Intelligent Insights is a unique capability of Azure built-in intelligence that provides the following value:
Proactive monitoring
Tailored performance insights
Early detection of database performance degradation
Root cause analysis of issues detected
Performance improvement recommendations
Scale out capability on hundreds of thousands of databases
Positive impact to DevOps resources and the total cost of ownership

How does Intelligent Insights work


Intelligent Insights analyzes database performance by comparing the database workload from the last hour with
the past seven-day baseline workload. Database workload is composed of queries determined to be the most
significant to the database performance, such as the most repeated and largest queries. Because each database
is unique based on its structure, data, usage, and application, each workload baseline that is generated is specific
and unique to that workload. Intelligent Insights, independent of the workload baseline, also monitors absolute
operational thresholds and detects issues with excessive wait times, critical exceptions, and issues with query
parameterizations that might affect performance.
After a performance degradation issue is detected from multiple observed metrics by using artificial
intelligence, analysis is performed. A diagnostics log is generated with an intelligent insight on what is
happening with your database. Intelligent Insights makes it easy to track the database performance issue from
its first appearance until resolution. Each detected issue is tracked through its lifecycle from initial issue
detection and verification of performance improvement to its completion.
The metrics used to measure and detect database performance issues are based on query duration, timeout
requests, excessive wait times, and errored requests. For more information on metrics, see Detection metrics.
Identified database performance degradations are recorded in the Intelligent Insights SQLInsights log with
intelligent entries that consist of the following properties:

Property | Details
Database information | Metadata about a database on which an insight was detected, such as a resource URI.
Observed time range | Start and end time for the period of the detected insight.
Impacted metrics | Metrics that caused an insight to be generated: query duration increase [seconds], excessive waiting [seconds], timed-out requests [percentage], errored-out requests [percentage].
Impact value | Value of a metric measured.
Impacted queries and error codes | Query hash or error code. These can be used to easily correlate to affected queries. Metrics that consist of either query duration increase, waiting time, timeout counts, or error codes are provided.
Detections | Detection identified at the database during the time of an event. There are 15 detection patterns. For more information, see Troubleshoot database performance issues with Intelligent Insights.
Root cause analysis | Root cause analysis of the issue identified in a human-readable format. Some insights might contain a performance improvement recommendation where possible.

Intelligent Insights shines in discovering and troubleshooting database performance issues. In order to use
Intelligent Insights to troubleshoot database performance issues, see Troubleshoot performance issues with
Intelligent Insights.

Intelligent Insights options


Intelligent Insights options available are:

Intelligent Insights option | Azure SQL Database support | Azure SQL Managed Instance support
Configure Intelligent Insights - Configure Intelligent Insights analysis for your databases. | Yes | Yes
Stream insights to Azure SQL Analytics - Stream insights to Azure SQL Analytics. | Yes | Yes
Stream insights to Azure Event Hubs - Stream insights to Event Hubs for further custom integrations. | Yes | Yes
Stream insights to Azure Storage - Stream insights to Azure Storage for further analysis and long-term archival. | Yes | Yes

NOTE
Intelligent Insights is a preview feature that is not available in the following regions: West Europe, North Europe, West US 1, and East US 1.

Configure the export of the Intelligent Insights log


Output of Intelligent Insights can be streamed to one of several destinations for analysis:
Output streamed to a Log Analytics workspace can be used with Azure SQL Analytics to view insights
through the user interface of the Azure portal. This is the integrated Azure solution, and the most typical way
to view insights.
Output streamed to Azure Event Hubs can be used for development of custom monitoring and alerting
scenarios.
Output streamed to Azure Storage can be used for custom application development, such as custom
reporting or long-term data archival.
Integration of Azure SQL Analytics, Azure Event Hubs, Azure Storage, or third-party products for consumption is
performed through first enabling Intelligent Insights logging (the "SQLInsights" log) in the Diagnostic settings
blade of a database, and then configuring Intelligent Insights log data to be streamed into one of these
destinations.
For more information on how to enable Intelligent Insights logging and to configure metric and resource log
data to be streamed to a consuming product, see Metrics and diagnostics logging.
Set up with Azure SQL Analytics
The Azure SQL Analytics solution provides a graphical user interface, reporting, and alerting capabilities for database
performance, using the Intelligent Insights resource log data.
To add Azure SQL Analytics to your Azure portal dashboard from the marketplace and to create a workspace, see
Configure Azure SQL Analytics.
To use Intelligent Insights with Azure SQL Analytics, configure Intelligent Insights log data to be streamed to the
Azure SQL Analytics workspace you created in the previous step. For details, see Metrics and diagnostics logging.
The following example shows an Intelligent Insight viewed through Azure SQL Analytics:

Set up with Event Hubs


To use Intelligent Insights with Event Hubs, configure Intelligent Insights log data to be streamed to Event Hubs.
For details, see Metrics and diagnostics logging and Stream Azure diagnostics logs to Event Hubs.
To use Event Hubs to set up custom monitoring and alerting, see What to do with metrics and diagnostics logs
in Event Hubs.
Set up with Azure Storage
To use Intelligent Insights with Storage, configure Intelligent Insights log data to be streamed to Storage. For
details, see Metrics and diagnostics logging and Stream into Azure Storage.
Custom integrations of Intelligent Insights log
To use Intelligent Insights with third-party tools, or for custom alerting and monitoring development, see Use
the Intelligent Insights database performance diagnostics log.

Detection metrics
Metrics used for detection models that generate Intelligent Insights are based on monitoring:
Query duration
Timeout requests
Excessive wait time
Errored out requests
Query duration and timeout requests are used as primary models in detecting issues with database workload
performance. They're used because they directly measure what is happening with the workload. To detect all
possible cases of workload performance degradation, excessive wait time and errored-out requests are used as
additional models to indicate issues that affect the workload performance.
The system automatically considers changes to the workload and changes in the number of query requests
made to the database to dynamically determine normal and out-of-the-ordinary database performance
thresholds.
All of the metrics are considered together in various relationships through a scientifically derived data model
that categorizes each performance issue detected. Information provided through an intelligent insight includes:
Details of the performance issue detected.
A root cause analysis of the issue detected.
Recommendations on how to improve the performance of the monitored database, where possible.

Query duration
The query duration degradation model analyzes individual queries and detects the increase in the time it takes
to compile and execute a query compared to the performance baseline.
If built-in intelligence detects a significant increase in query compile or query execution time that affects
workload performance, these queries are flagged as query duration performance degradation issues.
The Intelligent Insights diagnostics log outputs the query hash of the query whose performance degraded. The log
also indicates whether the degradation was related to an increase in query compile time or query execution time,
which increased overall query duration.
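As a rough illustration of how query duration can be inspected manually, the following query is a minimal sketch against the Query Store catalog views (assuming Query Store is enabled, as it is by default in Azure SQL Database); it lists the queries with the highest average duration together with their query hash:

-- Query Store records durations in microseconds; divide by 1000 for milliseconds.
SELECT TOP (10)
       q.query_hash,
       qt.query_sql_text,
       rs.avg_duration / 1000.0 AS avg_duration_ms,
       rs.avg_cpu_time / 1000.0 AS avg_cpu_time_ms,
       rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;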

Timeout requests
The timeout requests degradation model analyzes individual queries and detects any increase in timeouts at the
query execution level and the overall request timeouts at the database level compared to the performance
baseline period.
Some queries might time out even before they reach the execution stage. By comparing aborted workers with
requests made, built-in intelligence measures and analyzes all queries that reached the database, whether or not
they got to the execution stage.
After the number of timeouts for executed queries or the number of aborted request workers crosses the
system-managed threshold, a diagnostics log is populated with intelligent insights.
The insights generated contain the number of timed-out requests and the number of timed-out queries, along
with an indication of whether the performance degradation was related to a timeout increase at the execution
stage or at the overall database level. When the increase in timeouts is deemed significant to database
performance, these queries are flagged as timeout performance degradation issues.

Excessive wait times


The excessive wait time model monitors individual database queries. It detects unusually high query wait stats
that crossed the system-managed absolute thresholds. The following excessive query wait-time metrics are
observed by using Query Store Wait Stats (sys.query_store_wait_stats):
Reaching resource limits
Reaching elastic pool resource limits
Excessive number of worker or session threads
Excessive database locking
Memory pressure
Other wait stats
Reaching resource limits or elastic pool resource limits denote that consumption of available resources on a
subscription or in the elastic pool crossed absolute thresholds. These stats indicate workload performance
degradation. An excessive number of worker or session threads denotes a condition in which the number of
worker threads or sessions initiated crossed absolute thresholds. These stats indicate workload performance
degradation.
Excessive database locking denotes a condition in which the count of locks on a database has crossed absolute
thresholds. This stat indicates a workload performance degradation. Memory pressure is a condition in which
the number of threads requesting memory grants crossed an absolute threshold. This stat indicates a workload
performance degradation.
Other wait stats detection indicates a condition in which miscellaneous metrics measured through the Query
Store Wait Stats crossed an absolute threshold. These stats indicate workload performance degradation.
After excessive wait times are detected, depending on the data available, the Intelligent Insights diagnostics log
outputs hashes of the affecting and affected queries degraded in performance, details of the metrics that cause
queries to wait in execution, and measured wait time.
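To inspect the same wait information yourself, the following T-SQL is a minimal sketch that aggregates Query Store wait statistics (sys.query_store_wait_stats) by wait category; the one-hour window mirrors the comparison period described earlier and is only illustrative:

-- Aggregate Query Store wait statistics by wait category for intervals that started in the last hour.
SELECT TOP (10)
       ws.wait_category_desc,
       SUM(ws.total_query_wait_time_ms) AS total_wait_time_ms,
       COUNT(DISTINCT ws.plan_id) AS plans_affected
FROM sys.query_store_wait_stats AS ws
JOIN sys.query_store_runtime_stats_interval AS rsi
    ON ws.runtime_stats_interval_id = rsi.runtime_stats_interval_id
WHERE rsi.start_time > DATEADD(hour, -1, SYSUTCDATETIME())
GROUP BY ws.wait_category_desc
ORDER BY total_wait_time_ms DESC;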

Errored requests
The errored requests degradation model monitors individual queries and detects an increase in the number of
queries that errored out compared to the baseline period. This model also monitors critical exceptions that
crossed absolute thresholds managed by built-in intelligence. The system automatically considers the number of
query requests made to the database and accounts for any workload changes in the monitored period.
When the measured increase in errored requests relative to the overall number of requests made is deemed
significant to workload performance, affected queries are flagged as errored requests performance degradation
issues.
The Intelligent Insights log outputs the count of errored requests. It indicates whether the performance
degradation was related to an increase in errored requests or to crossing a monitored critical exception
threshold and measured time of the performance degradation.
If any of the monitored critical exceptions cross the absolute thresholds managed by the system, an intelligent
insight is generated with critical exception details.

Next steps
Learn how to Monitor databases by using SQL Analytics.
Learn how to Troubleshoot performance issues with Intelligent Insights.
Monitor your SQL deployments with SQL Insights
(preview)
7/12/2022 • 6 minutes to read • Edit Online

SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL
Insights uses dynamic management views to expose the data that you need to monitor health, diagnose
problems, and tune performance.
SQL Insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to
your SQL resources and remotely gather data. The gathered data is stored in Azure Monitor Logs to enable easy
aggregation, filtering, and trend analysis. You can view the collected data from the SQL Insights workbook
template, or you can delve directly into the data by using log queries. The following diagram details the steps
taken by information from the database engine and Azure resource logs, and how they can be surfaced. For a
more detailed diagram of Azure SQL logging, see Monitoring and diagnostic telemetry.

Pricing
There is no direct cost for SQL Insights (preview). All costs are incurred by the virtual machines that gather the
data, the Log Analytics workspaces that store the data, and any alert rules configured on the data.
Virtual machines
For virtual machines, you're charged based on the pricing published on the virtual machines pricing page. The
number of virtual machines that you need will vary based on the number of connection strings you want to
monitor. We recommend allocating one virtual machine of size Standard_B2s for every 100 connection strings.
See Azure virtual machine requirements for more details.
Log Analytics workspaces
For the Log Analytics workspaces, you're charged based on the pricing published on the Azure Monitor pricing
page. The Log Analytics workspaces that SQL Insights uses will incur costs for data ingestion, data retention, and
(optionally) data export.
Exact charges will vary based on the amount of data ingested, retained, and exported. The amount of this data
will vary based on your database activity and the collection settings defined in your monitoring profiles.
Alert rules
For alert rules in Azure Monitor, you're charged based on the pricing published on the Azure Monitor pricing
page. If you choose to create alerts with SQL Insights (preview), you're charged for any alert rules created and
any notifications sent.

Supported versions
SQL Insights (preview) supports the following versions of SQL Server:
SQL Server 2012 and newer
SQL Insights (preview) supports SQL Server running in the following environments:
Azure SQL Database
Azure SQL Managed Instance
SQL Server on Azure Virtual Machines (SQL Server running on virtual machines registered with the SQL
virtual machine provider)
Azure VMs (SQL Server running on virtual machines not registered with the SQL virtual machine provider)
SQL Insights (preview) has no support or has limited support for the following:
Non-Azure instances : SQL Server running on virtual machines outside Azure is not supported.
Azure SQL Database elastic pools : Metrics can't be gathered for elastic pools or for databases within
elastic pools.
Azure SQL Database low service tiers : Metrics can't be gathered for databases on Basic, S0, S1, and S2
service tiers.
Azure SQL Database serverless tier : Metrics can be gathered for databases through the serverless
compute tier. However, the process of gathering metrics will reset the auto-pause delay timer, preventing the
database from entering an auto-paused state.
Secondary replicas : Metrics can be gathered for only a single secondary replica per database. If a database
has more than one secondary replica, only one can be monitored.
Authentication with Azure Active Directory : The only supported method of authentication for
monitoring is SQL authentication. For SQL Server on Azure Virtual Machines, authentication through Active
Directory on a custom domain controller is not supported.

Regional availability
SQL Insights (preview) is available in all Azure regions where Azure Monitor is available, with the exception of
Azure government and national clouds.

Open SQL Insights


To open SQL Insights:
1. In the Azure portal, go to the Azure Monitor menu.
2. In the Insights section, select SQL (preview) .
3. Select a tile to load the experience for the SQL resource that you're monitoring.
For more instructions, see Enable SQL Insights (preview) and Troubleshoot SQL Insights (preview).

Collected data
SQL Insights performs all monitoring remotely. No agents are installed on the virtual machines running SQL
Server.
SQL Insights uses dedicated monitoring virtual machines to remotely collect data from your SQL resources.
Each monitoring virtual machine has the Azure Monitor agent and the Workload Insights (WLI) extension
installed.
The WLI extension includes the open-source Telegraf agent. SQL Insights uses data collection rules to specify the
data collection settings for Telegraf's SQL Server plug-in.
Different sets of data are available for Azure SQL Database, Azure SQL Managed Instance, and SQL Server. The
following tables describe the available data. You can customize which datasets to collect and the frequency of
collection when you create a monitoring profile.
The tables have the following columns:
Friendly name : Name of the query as shown in the Azure portal when you're creating a monitoring profile.
Configuration name : Name of the query as shown in the Azure portal when you're editing a monitoring
profile.
Namespace : Name of the query as found in a Log Analytics workspace. This identifier appears in the
InsightsMetrics table on the Namespace property in the Tags column.
DMVs : Dynamic management views that are used to produce the dataset.
Enabled by default : Whether the data is collected by default.
Default collection frequency : How often the data is collected by default.
Data for Azure SQL Database
Friendly name | Configuration name | Namespace | DMVs | Enabled by default | Default collection frequency
DB wait stats | AzureSQLDBWaitStats | sqlserver_azuredb_waitstats | sys.dm_db_wait_stats | No | Not applicable
DBO wait stats | AzureSQLDBOsWaitstats | sqlserver_waitstats | sys.dm_os_wait_stats | Yes | 60 seconds
Memory clerks | AzureSQLDBMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds
Database I/O | AzureSQLDBDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats, sys.database_files, tempdb.sys.database_files | Yes | 60 seconds
Server properties | AzureSQLDBServerProperties | sqlserver_server_properties | sys.dm_os_job_object, sys.database_files, sys.[databases], sys.[database_service_objectives] | Yes | 60 seconds
Performance counters | AzureSQLDBPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters, sys.databases | Yes | 60 seconds
Resource stats | AzureSQLDBResourceStats | sqlserver_azure_db_resource_stats | sys.dm_db_resource_stats | Yes | 60 seconds
Resource governance | AzureSQLDBResourceGovernance | sqlserver_db_resource_governance | sys.dm_user_db_resource_governance | Yes | 60 seconds
Requests | AzureSQLDBRequests | sqlserver_requests | sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_sql_text | No | Not applicable
Schedulers | AzureSQLDBSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable
Data for Azure SQL Managed Instance


Friendly name | Configuration name | Namespace | DMVs | Enabled by default | Default collection frequency
Wait stats | AzureSQLMIOsWaitstats | sqlserver_waitstats | sys.dm_os_wait_stats | Yes | 60 seconds
Memory clerks | AzureSQLMIMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds
Database I/O | AzureSQLMIDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats, sys.master_files | Yes | 60 seconds
Server properties | AzureSQLMIServerProperties | sqlserver_server_properties | sys.server_resource_stats | Yes | 60 seconds
Performance counters | AzureSQLMIPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters, sys.databases | Yes | 60 seconds
Resource stats | AzureSQLMIResourceStats | sqlserver_azure_db_resource_stats | sys.server_resource_stats | Yes | 60 seconds
Resource governance | AzureSQLMIResourceGovernance | sqlserver_instance_resource_governance | sys.dm_instance_resource_governance | Yes | 60 seconds
Requests | AzureSQLMIRequests | sqlserver_requests | sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_sql_text | No | Not applicable
Schedulers | AzureSQLMISchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable

Data for SQL Server


Friendly name | Configuration name | Namespace | DMVs | Enabled by default | Default collection frequency
Wait stats | SQLServerWaitStatsCategorized | sqlserver_waitstats | sys.dm_os_wait_stats | Yes | 60 seconds
Memory clerks | SQLServerMemoryClerks | sqlserver_memory_clerks | sys.dm_os_memory_clerks | Yes | 60 seconds
Database I/O | SQLServerDatabaseIO | sqlserver_database_io | sys.dm_io_virtual_file_stats, sys.master_files | Yes | 60 seconds
Server properties | SQLServerProperties | sqlserver_server_properties | sys.dm_os_sys_info | Yes | 60 seconds
Performance counters | SQLServerPerformanceCounters | sqlserver_performance | sys.dm_os_performance_counters | Yes | 60 seconds
Volume space | SQLServerVolumeSpace | sqlserver_volume_space | sys.master_files | Yes | 60 seconds
SQL Server CPU | SQLServerCpu | sqlserver_cpu | sys.dm_os_ring_buffers | Yes | 60 seconds
Schedulers | SQLServerSchedulers | sqlserver_schedulers | sys.dm_os_schedulers | No | Not applicable
Requests | SQLServerRequests | sqlserver_requests | sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_sql_text | No | Not applicable
Availability replica states | SQLServerAvailabilityReplicaStates | sqlserver_hadr_replica_states | sys.dm_hadr_availability_replica_states, sys.availability_replicas, sys.availability_groups, sys.dm_hadr_availability_group_states | No | 60 seconds
Availability database replicas | SQLServerDatabaseReplicaStates | sqlserver_hadr_dbreplica_states | sys.dm_hadr_database_replica_states, sys.availability_replicas | No | 60 seconds

Next steps
For frequently asked questions about SQL Insights (preview), see Frequently asked questions.
Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance
Enable SQL Insights (preview)
7/12/2022 • 10 minutes to read • Edit Online

This article describes how to enable SQL Insights (preview) to monitor your SQL deployments. Monitoring is
performed from an Azure virtual machine that makes a connection to your SQL deployments and uses Dynamic
Management Views (DMVs) to gather monitoring data. You can control what datasets are collected and the
frequency of collection using a monitoring profile.

NOTE
To enable SQL Insights (preview) by creating the monitoring profile and virtual machine using a resource manager
template, see Resource Manager template samples for SQL Insights (preview).

To learn how to enable SQL Insights (preview), you can also refer to this Data Exposed episode.

Create Log Analytics workspace


SQL Insights stores its data in one or more Log Analytics workspaces. Before you can enable SQL Insights, you
need to either create a workspace or select an existing one. A single workspace can be used with multiple
monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and
access the features in SQL Insights, you must have the Log Analytics contributor role in the workspace.

Create monitoring user


You need a user (login) on the SQL deployments that you want to monitor. Follow the procedures below for
different types of SQL deployments.
The instructions below cover the process per type of SQL that you can monitor. To accomplish this with a script
on several SQL resources at once, please refer to the following README file and example script.
Azure SQL Database

NOTE
SQL Insights (preview) does not support the following Azure SQL Database scenarios:
Elastic pools : Metrics cannot be gathered for elastic pools. Metrics cannot be gathered for databases within elastic
pools.
Low service tiers : Metrics cannot be gathered for databases on Basic, S0, S1, and S2 service tiers.
SQL Insights (preview) has limited support for the following Azure SQL Database scenarios:
Serverless tier : Metrics can be gathered for databases using the serverless compute tier. However, the process of
gathering metrics will reset the auto-pause delay timer, preventing the database from entering an auto-paused state.

Connect to an Azure SQL database with SQL Server Management Studio, Query Editor (preview) in the Azure
portal, or any other SQL client tool.
Run the following script to create a user with the required permissions. Replace user with a username and
mystrongpassword with a strong password.

CREATE USER [user] WITH PASSWORD = N'mystrongpassword';
GO
GRANT VIEW DATABASE STATE TO [user];
GO

Verify the user was created.

select name as username,
       create_date,
       modify_date,
       type_desc as type,
       authentication_type_desc as authentication_type
from sys.database_principals
where type not in ('A', 'G', 'R', 'X')
      and sid is not null
order by username

Azure SQL Managed Instance


Connect to your Azure SQL Managed Instance using SQL Server Management Studio or a similar tool, and
execute the following script to create the monitoring user with the permissions needed. Replace user with a
username and mystrongpassword with a strong password.

USE master;
GO
CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
GO
GRANT VIEW SERVER STATE TO [user];
GO
GRANT VIEW ANY DEFINITION TO [user];
GO

SQL Server
Connect to SQL Server on your Azure virtual machine and use SQL Server Management Studio or a similar tool
to run the following script to create the monitoring user with the permissions needed. Replace user with a
username and mystrongpassword with a strong password.

USE master;
GO
CREATE LOGIN [user] WITH PASSWORD = N'mystrongpassword';
GO
GRANT VIEW SERVER STATE TO [user];
GO
GRANT VIEW ANY DEFINITION TO [user];
GO

Verify the user was created.

select name as username,
       create_date,
       modify_date,
       type_desc as type
from sys.server_principals
where type not in ('A', 'G', 'R', 'X')
      and sid is not null
order by username

Create Azure Virtual Machine


You will need to create one or more Azure virtual machines that will be used to collect data to monitor SQL.

NOTE
The monitoring profile specifies what data you will collect from the different types of SQL you want to monitor. Each
monitoring virtual machine can have only one monitoring profile associated with it. If you need multiple
monitoring profiles, you need to create a virtual machine for each.

Azure virtual machine requirements


The Azure virtual machine has the following requirements:
Operating system: Ubuntu 18.04 using Azure Marketplace image. Custom images are not supported.
Recommended minimum Azure virtual machine sizes: Standard_B2s (2 CPUs, 4-GiB memory)
Deployed in any Azure region supported by the Azure Monitor agent, and meeting all Azure Monitor agent
prerequisites.
NOTE
The Standard_B2s (2 CPUs, 4 GiB memory) virtual machine size will support up to 100 connection strings. You shouldn't
allocate more than 100 connections to a single virtual machine.

Depending upon the network settings of your SQL resources, the virtual machines may need to be placed in the
same virtual network as your SQL resources so they can make network connections to collect monitoring data.

Configure network settings


Each type of SQL offers methods for your monitoring virtual machine to securely access SQL. The sections
below cover the options based upon the SQL deployment type.
Azure SQL Database
SQL Insights supports accessing your Azure SQL Database via its public endpoint as well as from its virtual
network.
For access via the public endpoint, you would add a rule under the Firewall settings page and the IP firewall
settings section. For specifying access from a virtual network, you can set virtual network firewall rules and set
the service tags required by the Azure Monitor agent. This article describes the differences between these two
types of firewall rules.

Azure SQL Managed Instance


If your monitoring virtual machine will be in the same VNet as your SQL MI resources, then see Connect inside
the same VNet. If your monitoring virtual machine will be in a different VNet from your SQL MI resources,
then see Connect inside a different VNet.
SQL Server
If your monitoring virtual machine is in the same VNet as your SQL virtual machine resources, then see Connect
to SQL Server within a virtual network. If your monitoring virtual machine will be in a different VNet from
your SQL virtual machine resources, then see Connect to SQL Server over the internet.

Store monitoring password in Azure Key Vault


As a security best practice, we strongly recommend that you store your SQL user (login) passwords in a Key
Vault, rather than entering them directly into your monitoring profile connection strings.
When setting up your profile for SQL monitoring, you will need one of the following permissions on the Key
Vault resource you intend to use:
Microsoft.Authorization/roleAssignments/write
Microsoft.Authorization/roleAssignments/delete
If you have these permissions, a new Key Vault access policy will be automatically created as part of creating
your SQL Monitoring profile that uses the Key Vault you specified.

IMPORTANT
You need to ensure that network and security configuration allows the monitoring VM to access Key Vault. For more
information, see Access Azure Key Vault behind a firewall and Configure Azure Key Vault networking settings.

Create SQL monitoring profile


Open SQL Insights (preview) by selecting SQL (preview) from the Insights section of the Azure Monitor
menu in the Azure portal. Select Create new profile .

The profile will store the information that you want to collect from your SQL systems. It has specific settings for:
Azure SQL Database
Azure SQL Managed Instance
SQL Server running on virtual machines
For example, you might create one profile named SQL Production and another named SQL Staging with
different settings for frequency of data collection, what data to collect, and which workspace to send the data to.
The profile is stored as a data collection rule resource in the subscription and resource group you select. Each
profile needs the following:
Name. Cannot be edited once created.
Location. This is an Azure region.
Log Analytics workspace to store the monitoring data.
Collection settings for the frequency and type of SQL monitoring data to collect.

NOTE
The location of the profile should be in the same location as the Log Analytics workspace you plan to send the monitoring
data to.

Select Create monitoring profile once you've entered the details for your monitoring profile. It can take up to
a minute for the profile to be deployed. If you don't see the new profile listed in Monitoring profile combo
box, select the refresh button and it should appear once the deployment is completed. Once you've selected the
new profile, select the Manage profile tab to add a monitoring machine that will be associated with the profile.
Add monitoring machine
Select Add monitoring machine to open a context panel to choose the virtual machine from which to monitor
your SQL instances and provide the connection strings.
Select the subscription and name of your monitoring virtual machine. If you're using Key Vault to store your
password for the monitoring user, select the Key Vault resources with these secrets and enter the URI and secret
name for the password to be used in the connection strings. See the next section for details on identifying the
connection string for different SQL deployments.

Add connection strings


The connection string specifies the login name that SQL Insights (preview) should use when logging into SQL to
collect monitoring data. If you're using a Key Vault to store the password for your monitoring user, provide the
Key Vault URI and name of the secret that contains the password.
The connections string will vary for each type of SQL resource:
Azure SQL Database
TCP connections from the monitoring machine to the IP address and port used by the database must be allowed
by any firewalls or network security groups (NSGs) that may exist on the network path. For details on IP
addresses and ports, see Azure SQL Database connectivity architecture.
Enter the connection string in the form:

"sqlAzureConnections": [
    "Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User Id=$username;Password=$password;"
]

Get the details from the Connection strings page and the appropriate ADO.NET endpoint for the database.
To monitor a readable secondary, append ;ApplicationIntent=ReadOnly to the connection string. SQL Insights
supports monitoring a single secondary. The collected data will be tagged to reflect primary or secondary.
Azure SQL Managed Instance
TCP connections from the monitoring machine to the IP address and port used by the managed instance must
be allowed by any firewalls or network security groups (NSGs) that may exist on the network path. For details
on IP addresses and ports, see Azure SQL Managed Instance connection types.
Enter the connection string in the form:
"sqlManagedInstanceConnections": [
"Server= mysqlserver.<dns_zone>.database.windows.net;Port=1433;User Id=$username;Password=$password;"
]

Get the details from the Connection strings page and the appropriate ADO.NET endpoint for the managed
instance. If using managed instance public endpoint, replace port 1433 with 3342.
To monitor a readable secondary, append ;ApplicationIntent=ReadOnly to the connection string. SQL Insights
supports monitoring of a single secondary. Collected data will be tagged to reflect Primary or Secondary.
SQL Server
The TCP/IP protocol must be enabled for the SQL Server instance you want to monitor. TCP connections from
the monitoring machine to the IP address and port used by the SQL Server instance must be allowed by any
firewalls or network security groups (NSGs) that may exist on the network path.
If you want to monitor SQL Server configured for high availability (using either availability groups or failover
cluster instances), we recommend monitoring each SQL Server instance in the cluster individually rather than
connecting via an availability group listener or a failover cluster name. This ensures that monitoring data is
collected regardless of the current instance role (primary or secondary).
Enter the connection string in the form:

"sqlVmConnections": [
"Server=SQLServerInstanceIPAddress;Port=1433;User Id=$username;Password=$password;"
]

Use the IP address that the SQL Server instance listens on.
If your SQL Server instance is configured to listen on a non-default port, replace 1433 with that port number in
the connection string. If you're using Azure SQL virtual machine, you can see which port to use on the Security
page for the resource.

For any SQL Server instance, you can determine all IP addresses and ports it is listening on by connecting to the
instance and executing the following T-SQL query, as long as there is at least one TCP connection to the instance:

SELECT DISTINCT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP'
      AND protocol_type = 'TSQL';

Monitoring profile created


Select Add monitoring virtual machine to configure the virtual machine to collect data from your SQL
resources. Do not return to the Overview tab. In a few minutes, the Status column should change to read
"Collecting", and you should see data for the SQL resources you have chosen to monitor.
If you do not see data, see Troubleshooting SQL Insights (preview) to identify the issue.

NOTE
If you need to update your monitoring profile or the connection strings on your monitoring VMs, you may do so via the
SQL Insights (preview) Manage profile tab. Once your updates have been saved the changes will be applied in
approximately 5 minutes.

Next steps
See Troubleshooting SQL Insights (preview) if SQL Insights isn't working properly after being enabled.
Create alerts with SQL Insights (preview)
7/12/2022 • 2 minutes to read • Edit Online

SQL Insights (preview) includes a set of alert rule templates you can use to create alert rules in Azure Monitor
for common SQL issues. The alert rules in SQL Insights (preview) are log alert rules based on performance data
stored in the InsightsMetrics table in Azure Monitor Logs.

NOTE
To create an alert for SQL Insights (preview) using a resource manager template, see Resource Manager template samples
for SQL Insights (preview).

NOTE
If you have requests for more SQL Insights (preview) alert rule templates, please send feedback using the link at the
bottom of this page or using the SQL Insights (preview) feedback link in the Azure portal.

Enable alert rules


Use the following steps to enable the alerts in Azure Monitor from the Azure portal. The alert rules that are
created will be scoped to all of the SQL resources monitored under the selected monitoring profile. When an
alert rule is triggered, it will trigger on the specific SQL instance or database.

NOTE
You can also create custom log alert rules by running queries on the data sets in the InsightsMetrics table and then
saving those queries as an alert rule.

Select SQL (preview) from the Insights section of the Azure Monitor menu in the Azure portal. Click Alerts .

The Alerts pane opens on the right side of the page. By default, it will display fired alerts for SQL resources in
the selected monitoring profile based on the alert rules you've already created. Select Alert templates , which
will display the list of available templates you can use to create an alert rule.
On the Create Alert rule page, review the default settings for the rule and edit them as needed. You can also
select an action group to create notifications and actions when the alert rule is triggered. Click Enable alert
rule to create the alert rule once you've verified all of its properties.

To deploy the alert rule immediately, click Deploy alert rule . Click View Template if you want to view the rule
template before actually deploying it.

If you choose to view the templates, select Deploy from the template page to create the alert rule.
Next steps
Learn more about alerts in Azure Monitor.
Troubleshoot SQL Insights (preview)
7/12/2022 • 6 minutes to read • Edit Online

To troubleshoot data collection issues in SQL Insights (preview), check the status of the monitoring machine on
the Manage profile tab. The statuses are:
Collecting
Not collecting
Collecting with errors
Select the status to see logs and more details that might help you resolve the problem.

Status: Not collecting


The monitoring machine has a status of Not collecting if there's no data in InsightsMetrics for SQL in the last
10 minutes.

NOTE
Make sure that you're trying to collect data from a supported version of SQL. For example, trying to collect data with a
valid profile and connection string but from an unsupported version of Azure SQL Database will result in a Not
collecting status.

SQL Insights (preview) uses the following query to retrieve this information:

InsightsMetrics
| extend Tags = todynamic(Tags)
| extend SqlInstance = tostring(Tags.sql_instance)
| where TimeGenerated > ago(10m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'

Check if any logs from Telegraf help identify the root cause of the problem. If there are log entries, you can select
Not collecting and check the logs and troubleshooting info for common problems.
If there are no log entries, check the logs on the monitoring virtual machine for the following services installed
by two virtual machine extensions:
Microsoft.Azure.Monitor.AzureMonitorLinuxAgent
Service: mdsd
Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension
Service: wli
Service: ms-telegraf
Service: td-agent-bit-wli
Extension log to check installation failures:
/var/log/azure/Microsoft.Azure.Monitor.Workloads.Workload.WLILinuxExtension/wlilogs.log
wli service logs
Service logs: /var/log/wli.log
To see recent logs: tail -n 100 -f /var/log/wli.log

If you see the following error log, there's a problem with the mdsd service.

2021-01-27T06:09:28Z [Error] Failed to get config data. Error message: dial unix
/var/run/mdsd/default_fluent.socket: connect: no such file or directory

Telegraf service logs


Service logs: /var/log/ms-telegraf/telegraf.log
To see recent logs: tail -n 100 -f /var/log/ms-telegraf/telegraf.log

To see recent error and warning logs: tail -n 1000 /var/log/ms-telegraf/telegraf.log | grep "E\!\|W!"

The configuration that telegraf uses is generated by the wli service and placed in:
/etc/ms-telegraf/telegraf.d/wli

If a bad configuration is generated, the ms-telegraf service might fail to start. Check if the ms-telegraf service is
running by using this command: service ms-telegraf status
To see error messages from the telegraf service, run it manually by using the following command:

/usr/bin/ms-telegraf --config /etc/ms-telegraf/telegraf.conf --config-directory /etc/ms-telegraf/telegraf.d/wli --test

mdsd service logs


Check prerequisites for the Azure Monitor agent.
Prior to Azure Monitoring Agent v1.12, mdsd service logs were located in:
/var/log/mdsd.err
/var/log/mdsd.warn
/var/log/mdsd.info

From v1.12 onward, service logs are located in:


/var/opt/microsoft/azuremonitoragent/log/
/etc/opt/microsoft/azuremonitoragent/

To see recent errors: tail -n 100 -f /var/log/mdsd.err


If you need to contact support, collect the following information:
Logs in /var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/
Log in /var/log/waagent.log
Logs in /var/log/mdsd* , or logs in /var/opt/microsoft/azuremonitoragent/log/ and
/etc/opt/microsoft/azuremonitoragent/ .
Files in /etc/mdsd.d/
File /etc/default/mdsd
Invalid monitoring virtual machine configuration
One cause of the Not collecting status is an invalid configuration for the monitoring virtual machine. Here's
the simplest form of configuration:

{
"version": 1,
"secrets": {
"telegrafPassword": {
"keyvault": "https://mykeyvault.vault.azure.net/",
"name": "sqlPassword"
}
},
"parameters": {
"sqlAzureConnections": [
"Server=mysqlserver.database.windows.net;Port=1433;Database=mydatabase;User
Id=telegraf;Password=$telegrafPassword;"
],
"sqlVmConnections": [
],
"sqlManagedInstanceConnections": [
]
}
}

This configuration specifies the replacement tokens to be used in the profile configuration on your monitoring
virtual machine. It also allows you to reference secrets from Azure Key Vault, so you don't have to keep secret
values in any configuration (which we strongly recommend).
In this configuration, the database connection string includes a $telegrafPassword replacement token. SQL
Insights replaces this token with the SQL authentication password retrieved from Key Vault. The Key Vault URI is
specified in the telegrafPassword configuration section under secrets .
Secrets
Secrets are tokens whose values are retrieved at runtime from an Azure key vault. A secret is defined by a value
pair that includes key vault URI and a secret name. This definition allows SQL Insights to get the value of the
secret at runtime and use it in downstream configuration.
You can define as many secrets as needed, including secrets stored in multiple key vaults.

"secrets": {
"<secret-token-name-1>": {
"keyvault": "<key-vault-uri>",
"name": "<key-vault-secret-name>"
},
"<secret-token-name-2>": {
"keyvault": "<key-vault-uri-2>",
"name": "<key-vault-secret-name-2>"
}
}
The permission to access the key vault is provided to a managed identity on the monitoring virtual machine.
This managed identity must be granted the Get permission on all Key Vault secrets referenced in the monitoring
profile configuration. This can be done from the Azure portal, PowerShell, the Azure CLI, or an Azure Resource
Manager template.
Parameters
Parameters are tokens that can be referenced in the profile configuration via JSON templates. Parameters have a
name and a value. Values can be any JSON type, including objects and arrays. A parameter is referenced in the
profile configuration by its name, using this convention: .Parameters.<name> .
Parameters can reference secrets in Key Vault by using the same convention. For example, sqlAzureConnections
references the secret telegrafPassword by using the convention $telegrafPassword .
At runtime, all parameters and secrets will be resolved and merged with the profile configuration to construct
the actual configuration to be used on the machine.

NOTE
The parameter names of sqlAzureConnections , sqlVmConnections , and sqlManagedInstanceConnections are all
required in configuration, even if you don't provide connection strings for some of them.

Status: Collecting with errors


The monitoring machine will have the status Collecting with errors if there's at least one recent
InsightsMetrics log but there are also errors in the Operation table.
SQL Insights uses the following queries to retrieve this information:

InsightsMetrics
| extend Tags = todynamic(Tags)
| extend SqlInstance = tostring(Tags.sql_instance)
| where TimeGenerated > ago(240m) and isnotempty(SqlInstance) and Namespace == 'sqlserver_server_properties' and Name == 'uptime'

WorkloadDiagnosticLogs
| summarize Errors = countif(Status == 'Error')

NOTE
If you don't see any data in WorkloadDiagnosticLogs , you might need to update your monitoring profile. From within
SQL Insights in Azure portal, select Manage profile > Edit profile > Update monitoring profile .

For common cases, we provide troubleshooting tips in our logs view:


Known issues
During preview of SQL Insights, you may encounter the following known issues.
'Login failed' error connecting to ser ver or database . Using certain special characters in SQL
authentication passwords saved in the monitoring VM configuration or in Key Vault may prevent the
monitoring VM from connecting to a SQL server or database. This set of characters includes parentheses,
square and curly brackets, the dollar sign, forward and back slashes, and dot ( [ { ( ) } ] $ \ / . ).
Spaces in the database connection string attributes may be replaced with special characters, leading to
database connection failures. For example, if the space in the User Id attribute is replaced with a special
character, connections will fail with the Login failed for user '' error. To resolve, edit the monitoring profile
configuration, and delete every special character appearing in place of a space. Some special characters may
look indistinguishable from a space, thus you may want to delete every space character, type it again, and
save the configuration.
Data collection and visualization may not work if the OS computer name of the monitoring VM is different
from the monitoring VM name.

Best practices
Ensure access to Key Vault from the monitoring VM . If you use Key Vault to store SQL
authentication passwords (strongly recommended), you need to ensure that network and security
configuration allows the monitoring VM to access Key Vault. For more information, see Access Azure Key
Vault behind a firewall and Configure Azure Key Vault networking settings. To verify that the monitoring
VM can access Key Vault, you can execute the following commands from an SSH session connected to the
VM. You should be able to successfully retrieve the access token and the secret. Replace
[YOUR-KEY-VAULT-URL] , [YOUR-KEY-VAULT-SECRET] , and [YOUR-KEY-VAULT-ACCESS-TOKEN] with actual values.

# Get an access token for accessing Key Vault secrets from the Azure Instance Metadata Service
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true

# Get Key Vault secret
curl 'https://[YOUR-KEY-VAULT-URL]/secrets/[YOUR-KEY-VAULT-SECRET]?api-version=2016-10-01' -H "Authorization: Bearer [YOUR-KEY-VAULT-ACCESS-TOKEN]"

Update software on the monitoring VM . We strongly recommend periodically updating the operating
system and extensions on the monitoring VM. If an extension supports automatic upgrade, enable that
option.
Save previous configurations . If you want to make changes to either monitoring profile or monitoring
VM configuration, we recommend saving a working copy of your configuration data first. From the SQL
Insights page in Azure portal, select Manage profile > Edit profile , and copy the text from Current
Monitoring Profile Config to a file. Similarly, select Manage profile > Configure for the monitoring
VM, and copy the text from Current monitoring configuration to a file. If data collection errors occur
after configuration changes, you can compare the new configuration to the known working configuration
using a text diff tool to help you find any changes that might have impacted collection.

Next steps
Get details on enabling SQL Insights (preview).
Automatic tuning in Azure SQL Database and
Azure SQL Managed Instance
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and Azure SQL Managed Instance automatic tuning provides peak performance and stable
workloads through continuous performance tuning based on AI and machine learning.
Automatic tuning is a fully managed intelligent performance service that uses built-in intelligence to
continuously monitor queries executed on a database and automatically improve their performance. This is
achieved through dynamically adapting a database to changing workloads and applying tuning
recommendations. Automatic tuning learns horizontally from all databases on Azure through AI, and
dynamically improves its tuning actions. The longer a database runs with automatic tuning on, the better it
performs.
Azure SQL Database and Azure SQL Managed Instance automatic tuning might be one of the most impactful
features that you can enable to provide stable and peak performing database workloads.

What can automatic tuning do for you


Automated performance tuning of databases
Automated verification of performance gains
Automated rollback and self-correction
Tuning history
Tuning action Transact-SQL (T-SQL) scripts for manual deployments
Scale out capability on hundreds of thousands of databases
Positive impact to DevOps resources and the total cost of ownership

Safe, reliable, and proven


Tuning operations applied to databases are fully safe for performance of your most intense workloads. The
system has been designed with care not to interfere with user workloads. Automated tuning recommendations
are applied only at the times of a low utilization of CPU, Data IO, and Log IO. The system can also temporarily
disable automatic tuning operations to protect workload performance. In such cases, a "Disabled by the system"
message is shown in the Azure portal and in the sys.database_automatic_tuning_options DMV. Automatic tuning is
designed to give user workloads the highest resource priority.
Automatic tuning mechanisms are mature and have been perfected on several million databases running on
Azure. Automated tuning operations applied are verified automatically to ensure there is a notable positive
improvement to workload performance. If there is no improvement, or in the unlikely case performance
regresses, changes made by automatic tuning are promptly reverted. Through the tuning history recorded,
there exists a clear trace of tuning improvements made to each database in Azure SQL Database.
Azure SQL automatic tuning shares its core logic with the SQL Server automatic tuning feature in the database
engine. For additional technical information on the built-in intelligence mechanism, see SQL Server automatic
tuning.
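To check the current state of the automatic tuning options on a database, including whether an option was disabled by the system, you can query the sys.database_automatic_tuning_options DMV mentioned above; the following is a minimal sketch:

-- Show desired and actual state of each automatic tuning option, and the reason for any mismatch.
SELECT name,
       desired_state_desc,
       actual_state_desc,
       reason_desc
FROM sys.database_automatic_tuning_options;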

Enable automatic tuning


You enable automatic tuning for Azure SQL Database in the Azure portal or by using the ALTER DATABASE T-
SQL statement.
You enable automatic tuning for Azure SQL Managed Instance by using the ALTER DATABASE T-SQL
statement.
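As a minimal sketch of the T-SQL approach, the following statements enable selected automatic tuning options on the current database; the specific option values are illustrative, and only FORCE_LAST_GOOD_PLAN applies to SQL Managed Instance:

-- Enable selected automatic tuning options on the current database (option values are illustrative).
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = OFF);

-- Alternatively, on Azure SQL Database, inherit the automatic tuning configuration from the server.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING = INHERIT;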

Automatic tuning options


The automatic tuning options available in Azure SQL Database and Azure SQL Managed Instance are:

Automatic tuning option | Description | Single database and pooled database support | Instance database support
CREATE INDEX | Identifies indexes that may improve performance of your workload, creates indexes, and automatically verifies that performance of queries has improved. When recommending a new index, the system considers space available in the database. If index addition is estimated to increase space utilization to over 90% toward maximum data size, the index recommendation is not generated. Once the system identifies a period of low utilization and starts to create an index, it will not pause or cancel this operation even if resource utilization unexpectedly increases. If index creation fails, it will be retried during a future period of low utilization. | Yes | No
DROP INDEX | Drops unused (over the last 90 days) and duplicate indexes. Unique indexes, including indexes supporting primary key and unique constraints, are never dropped. This option may be automatically disabled when queries with index hints are present in the workload, or when the workload performs partition switching. On Premium and Business Critical service tiers, this option will never drop unused indexes, but will drop duplicate indexes, if any. | Yes | No
FORCE LAST GOOD PLAN (automatic plan correction) | Identifies Azure SQL queries using an execution plan that is slower than the previous good plan, and forces queries to use the last known good plan instead of the regressed plan. | Yes | Yes

Automatic tuning for SQL Database


Automatic tuning for Azure SQL Database uses the CREATE INDEX , DROP INDEX , and FORCE LAST GOOD
PLAN database advisor recommendations to optimize your database performance. For more information, see
Database advisor recommendations in the Azure portal, in PowerShell, and in the REST API.
You can either manually apply tuning recommendations using the Azure portal, or you can let automatic tuning
autonomously apply tuning recommendations for you. The benefits of letting the system autonomously apply
tuning recommendations for you is that it automatically validates there exists a positive gain to workload
performance, and if there is no significant performance improvement detected or if performance regresses, the
system automatically reverts the changes that were made. Depending on query execution frequency, the
validation process can take from 30 minutes to 72 hours, taking longer for less frequently executing queries. If
at any point during validation a regression is detected, changes are reverted immediately.

IMPORTANT
In case you are applying tuning recommendations through T-SQL, the automatic performance validation and reversal
mechanisms are not available. Recommendations applied in such way will remain active and shown in the list of tuning
recommendations for 24-48 hours before the system automatically withdraws them. If you would like to remove a
recommendation sooner, you can discard it from Azure portal.

Automatic tuning options can be independently enabled or disabled for each database, or they can be
configured at the server-level and applied on every database that inherits settings from the server. By default,
new servers inherit Azure defaults for automatic tuning settings. Azure defaults are set to
FORCE_LAST_GOOD_PLAN enabled, CREATE_INDEX disabled, and DROP_INDEX disabled.
Configuring automatic tuning options on a server and inheriting settings for databases belonging to the parent
server is the recommended method for configuring automatic tuning. It simplifies management of automatic
tuning options for a large number of databases.
To learn about building email notifications for automatic tuning recommendations, see Email notifications for
automatic tuning.
Automatic tuning for Azure SQL Managed Instance
Automatic tuning for SQL Managed Instance only supports FORCE LAST GOOD PLAN . For more information
about configuring automatic tuning options through T-SQL, see Automatic tuning introduces automatic plan
correction and Automatic plan correction.
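For reference, the following query is a minimal sketch that lists current automatic plan correction recommendations from the sys.dm_db_tuning_recommendations DMV, including the T-SQL script that would force the last known good plan; the JSON path used for the script follows the Automatic plan correction documentation:

-- List plan-correction recommendations and the script that would force the last known good plan.
SELECT name,
       reason,
       score,
       JSON_VALUE(details, '$.implementationDetails.script') AS force_plan_script
FROM sys.dm_db_tuning_recommendations;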

Automatic tuning history


For Azure SQL Database, the history of changes made by automatic tuning is retained for 21 days. It can be
viewed in Azure portal on the Performance recommendations page for a database, or using PowerShell with the
Get-AzSqlDatabaseRecommendedAction cmdlet. For longer retention, history data can also be streamed to
several types of destinations by enabling the AutomaticTuning diagnostic setting.

Next steps
Read the blog post Artificial Intelligence tunes Azure SQL Database.
Learn how automatic tuning works under the hood in Automatically indexing millions of databases in
Microsoft Azure SQL Database.
Learn how automatic tuning can proactively help you Diagnose and troubleshoot high CPU on Azure SQL
Database
Optimize performance by using in-memory
technologies in Azure SQL Database and Azure
SQL Managed Instance
7/12/2022 • 11 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In-memory technologies enable you to improve the performance of your application, and can potentially reduce the
cost of your database.

When to use in-memory technologies


By using in-memory technologies, you can achieve performance improvements with various workloads:
Transactional (online transaction processing (OLTP)) where most of the requests read or update a smaller
set of data (for example, CRUD operations).
Analytic (online analytical processing (OLAP)) where most of the queries have complex calculations for
reporting purposes, with a certain number of queries that load and append data to existing tables (so-called
bulk load), or delete data from the tables.
Mixed (hybrid transaction/analytical processing (HTAP)) where both OLTP and OLAP queries are executed on
the same set of data.
In-memory technologies can improve the performance of these workloads by keeping the data that needs to be
processed in memory, by using native compilation of queries, and by using advanced processing such as batch
processing and SIMD instructions that are available on the underlying hardware.

Overview
Azure SQL Database and Azure SQL Managed Instance have the following in-memory technologies:
In-Memory OLTP increases the number of transactions per second and reduces latency for transaction
processing. Scenarios that benefit from In-Memory OLTP are: high-throughput transaction processing such
as trading and gaming, data ingestion from events or IoT devices, caching, data load, and temporary table
and table variable scenarios.
Clustered columnstore indexes reduce your storage footprint (up to 10 times) and improve performance for
reporting and analytics queries. You can use them with fact tables in your data marts to fit more data in your
database and improve performance. Also, you can use them with historical data in your operational database to
archive and be able to query up to 10 times more data.
Nonclustered columnstore indexes for HTAP help you to gain real-time insights into your business by
querying the operational database directly, without the need to run an expensive extract, transform, and load
(ETL) process and wait for the data warehouse to be populated. Nonclustered columnstore indexes allow fast
execution of analytics queries on the OLTP database, while reducing the impact on the operational workload.
Memory-optimized clustered columnstore indexes for HTAP enable you to perform fast transaction
processing, and to concurrently run analytics queries very quickly on the same data.
Both columnstore indexes and In-Memory OLTP have been part of the SQL Server product since 2012 and
2014, respectively. Azure SQL Database, Azure SQL Managed Instance, and SQL Server share the same
implementation of in-memory technologies.
Benefits of in-memory technology
Because of the more efficient query and transaction processing, in-memory technologies also help you to
reduce cost. You typically don't need to upgrade the pricing tier of the database to achieve performance gains. In
some cases, you might even be able to reduce the pricing tier, while still seeing performance improvements with
in-memory technologies.
By using In-Memory OLTP, Quorum Business Solutions was able to double their workload while lowering DTU
utilization by 70%. For more information, see the blog post: In-Memory OLTP.

NOTE
In-memory technologies are available in the Premium and Business Critical tiers.

This article describes aspects of In-Memory OLTP and columnstore indexes that are specific to Azure SQL
Database and Azure SQL Managed Instance, and also includes samples:
You'll see the impact of these technologies on storage and data size limits.
You'll see how to manage the movement of databases that use these technologies between the different
pricing tiers.
You'll see two samples that illustrate the use of In-Memory OLTP, as well as columnstore indexes.
For more information about in-memory in SQL Server, see:
In-Memory OLTP Overview and Usage Scenarios (includes references to customer case studies and
information to get started)
Documentation for In-Memory OLTP
Columnstore Indexes Guide
Hybrid transactional/analytical processing (HTAP), also known as real-time operational analytics

In-Memory OLTP
In-Memory OLTP technology provides extremely fast data access operations by keeping all data in memory. It
also uses specialized indexes, native compilation of queries, and latch-free data-access to improve performance
of the OLTP workload. There are two ways to organize your In-Memory OLTP data:
Memory-optimized rowstore format where every row is a separate memory object. This is a classic
In-Memory OLTP format optimized for high-performance OLTP workloads. There are two types of
memory-optimized tables that can be used in the memory-optimized rowstore format:
Durable tables (SCHEMA_AND_DATA) where the rows placed in memory are preserved after server
restart. This type of table behaves like a traditional rowstore table with the additional benefits of in-
memory optimizations.
Non-durable tables (SCHEMA_ONLY) where the rows are not preserved after restart. This type of
table is designed for temporary data (for example, as a replacement for temp tables), or for tables where you
need to quickly load data before moving it to a persisted table (so-called staging tables).
Memory-optimized columnstore format where data is organized in a columnar format. This structure
is designed for HTAP scenarios where you need to run analytic queries on the same data structure where
your OLTP workload is running.
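For illustration, the following minimal T-SQL sketch creates one memory-optimized table of each durability type; the table and column names are hypothetical:

-- Durable memory-optimized table: rows are preserved after a restart.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(1000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Non-durable (schema-only) memory-optimized table: rows are lost after a restart,
-- which is acceptable for staging or temporary data.
CREATE TABLE dbo.StagingRows
(
    RowId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(1000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);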
NOTE
In-Memory OLTP technology is designed for data structures that can fully reside in memory. Because in-memory
data cannot be offloaded to disk, make sure that you are using a database that has enough memory. See Data size and
storage cap for In-Memory OLTP for more details.

A quick primer on In-Memory OLTP: Quickstart 1: In-Memory OLTP Technologies for Faster T-SQL
Performance.
There is a programmatic way to understand whether a given database supports In-Memory OLTP. You can
execute the following Transact-SQL query:

SELECT DatabasePropertyEx(DB_NAME(), 'IsXTPSupported');

If the query returns 1 , In-Memory OLTP is supported in this database. The following queries identify all objects
that need to be removed before a database can be downgraded to General Purpose, Standard, or Basic:

SELECT * FROM sys.tables WHERE is_memory_optimized=1


SELECT * FROM sys.table_types WHERE is_memory_optimized=1
SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1

Data size and storage cap for In-Memory OLTP


In-Memory OLTP includes memory-optimized tables, which are used for storing user data. These tables are
required to fit in memory. Because you manage memory directly in SQL Database, we have the concept of a
quota for user data. This idea is referred to as In-Memory OLTP storage.
Each supported single database pricing tier and each elastic pool pricing tier includes a certain amount of In-
Memory OLTP storage.
DTU-based resource limits - single database
DTU-based resource limits - elastic pools
vCore-based resource limits - single databases
vCore-based resource limits - elastic pools
vCore-based resource limits - managed instance
The following items count toward your In-Memory OLTP storage cap:
Active user data rows in memory-optimized tables and table variables. Note that old row versions don't
count toward the cap.
Indexes on memory-optimized tables.
Operational overhead of ALTER TABLE operations.
If you hit the cap, you receive an out-of-quota error, and you are no longer able to insert or update data. To
mitigate this error, delete data or increase the pricing tier of the database or pool.
For details about monitoring In-Memory OLTP storage utilization and configuring alerts when you almost hit
the cap, see Monitor in-memory storage.
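For a quick check from T-SQL, one option is to look at the xtp_storage_percent column of sys.dm_db_resource_stats, which reports recent In-Memory OLTP storage utilization as a percentage of the cap. A minimal sketch:

-- Recent In-Memory OLTP storage utilization as a percentage of the cap.
SELECT end_time, xtp_storage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;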
About elastic pools
With elastic pools, the In-Memory OLTP storage is shared across all databases in the pool. Therefore, the usage
in one database can potentially affect other databases. Two mitigations for this are:
Configure a Max-eDTU or MaxvCore for databases that is lower than the eDTU or vCore count for the pool as
a whole. This maximum caps the In-Memory OLTP storage utilization, in any database in the pool, to the size
that corresponds to the eDTU count.
Configure a Min-eDTU or MinvCore that is greater than 0. This minimum guarantees that each database in
the pool has the amount of available In-Memory OLTP storage that corresponds to the configured Min-eDTU
or MinvCore.
Changing service tiers of databases that use In-Memory OLTP technologies
You can always upgrade your database or instance to a higher tier, such as from General Purpose to Business
Critical (or Standard to Premium). The available functionality and resources only increase.
But downgrading the tier can negatively impact your database. The impact is especially apparent when you
downgrade from Business Critical to General Purpose (or Premium to Standard or Basic) when your database
contains In-Memory OLTP objects. Memory-optimized tables are unavailable after the downgrade (even if they
remain visible). The same considerations apply when you're lowering the pricing tier of an elastic pool, or
moving a database with in-memory technologies, into a General Purpose, Standard, or Basic elastic pool.

IMPORTANT
In-Memory OLTP isn't supported in the General Purpose, Standard or Basic tier. Therefore, it isn't possible to move a
database that has any In-Memory OLTP objects to one of these tiers.

Before you downgrade the database to General Purpose, Standard, or Basic, remove all memory-optimized
tables and table types, as well as all natively compiled T-SQL modules.
Scaling down resources in the Business Critical tier: Data in memory-optimized tables must fit within the In-
Memory OLTP storage that is associated with the tier of the database or the managed instance, or that is available
in the elastic pool. If you try to scale down the tier or move the database into a pool that doesn't have enough
available In-Memory OLTP storage, the operation fails.

In-memory columnstore
In-memory columnstore technology enables you to store and query a large amount of data in your tables.
Columnstore technology uses a column-based data storage format and batch query processing to achieve gains of
up to 10 times the query performance in OLAP workloads over traditional row-oriented storage. You can also
achieve gains of up to 10 times the data compression over the uncompressed data size. There are two types of
columnstore models that you can use to organize your data:
Clustered columnstore where all data in the table is organized in the columnar format. In this model, all
rows in the table are placed in columnar format that highly compresses the data and enables you to execute
fast analytical queries and reports on the table. Depending on the nature of your data, the size of your data
might be decreased 10x-100x. The clustered columnstore model also enables fast ingestion of large amounts of
data (bulk load), since large batches of data greater than 100K rows are compressed before they are stored
on disk. This model is a good choice for classic data warehouse scenarios.
Non-clustered columnstore where the data is stored in a traditional rowstore table and there is an index in
the columnstore format that is used for the analytical queries. This model enables hybrid transactional-
analytical processing (HTAP): the ability to run performant real-time analytics on a transactional workload.
OLTP queries are executed on the rowstore table that is optimized for accessing a small set of rows, while OLAP
queries are executed on the columnstore index, which is a better choice for scans and analytics. The query optimizer
dynamically chooses the rowstore or columnstore format based on the query. Non-clustered columnstore
indexes don't decrease the size of the data, since the original data set is kept in the original rowstore table
without any change. However, the size of the additional columnstore index should be an order of magnitude
smaller than the equivalent B-tree index.
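As an illustration only (the table and column names below are hypothetical), the two models are created with standard columnstore index DDL:

-- Clustered columnstore: the whole table is stored in columnar format (typical for fact tables).
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- Nonclustered columnstore: the rowstore table stays as-is and an additional
-- columnstore index serves analytical (HTAP) queries.
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders
    ON dbo.Orders (OrderDate, CustomerId, Quantity, Price);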
NOTE
In-memory columnstore technology keeps only the data that is needed for processing in memory, while the data that
cannot fit in memory is stored on disk. Therefore, the amount of data in in-memory columnstore structures can
exceed the amount of available memory.

For an in-depth video about the technology, see Columnstore Index: In-memory Analytics Videos from Ignite 2016.
Data size and storage for columnstore indexes
Columnstore indexes aren't required to fit in memory. Therefore, the only cap on the size of the indexes is the
maximum overall database size, which is documented in the DTU-based purchasing model and vCore-based
purchasing model articles.
When you use clustered columnstore indexes, columnar compression is used for the base table storage. This
compression can significantly reduce the storage footprint of your user data, which means that you can fit more
data in the database. And the compression can be further increased with columnar archival compression. The
amount of compression that you can achieve depends on the nature of the data, but 10 times the compression
is not uncommon.
For example, if you have a database with a maximum size of 1 terabyte (TB) and you achieve 10 times the
compression by using columnstore indexes, you can fit a total of 10 TB of user data in the database.
When you use nonclustered columnstore indexes, the base table is still stored in the traditional rowstore format.
Therefore, the storage savings aren't as significant as with clustered columnstore indexes. However, if you're
replacing a number of traditional nonclustered indexes with a single columnstore index, you can still see an
overall savings in the storage footprint for the table.
Changing service tiers of databases containing Columnstore indexes
Downgrading a single database to Basic or Standard might not be possible if your target tier is below S3.
Columnstore indexes are supported only on the Business Critical/Premium pricing tier and on the Standard tier,
S3 and above, and not on the Basic tier. When you downgrade your database to an unsupported tier or level,
your columnstore index becomes unavailable. The system maintains your columnstore index, but it never
leverages the index. If you later upgrade back to a supported tier or level, your columnstore index is
immediately ready to be leveraged again.
If you have a clustered columnstore index, the whole table becomes unavailable after the downgrade.
Therefore, we recommend that you drop all clustered columnstore indexes before you downgrade your database
to an unsupported tier or level.
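One way to find the columnstore indexes that exist in a database before a downgrade is to query sys.indexes, for example:

-- List all columnstore indexes (clustered and nonclustered) in the current database.
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS table_name,
       name AS index_name,
       type_desc
FROM sys.indexes
WHERE type_desc LIKE '%COLUMNSTORE%';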

NOTE
SQL Managed Instance supports Columnstore indexes in all tiers.

Next steps
Quickstart 1: In-Memory OLTP Technologies for faster T-SQL Performance
Use In-Memory OLTP in an existing Azure SQL application
Monitor In-Memory OLTP storage for In-Memory OLTP
Try in-memory features
Additional resources
Deeper information
Learn how Quorum doubles key database's workload while lowering DTU by 70% with In-Memory OLTP in
SQL Database
In-Memory OLTP Blog Post
Learn about In-Memory OLTP
Learn about columnstore indexes
Learn about real-time operational analytics
See Common Workload Patterns and Migration Considerations (which describes workload patterns where
In-Memory OLTP commonly provides significant performance gains)
Application design
In-Memory OLTP (in-memory optimization)
Use In-Memory OLTP in an existing Azure SQL application
Tools
Azure portal
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
Extended events in Azure SQL Database
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The feature set of extended events in Azure SQL Database is a robust subset of the features on SQL Server and
Azure SQL Managed Instance.
XEvents is an informal nickname that is sometimes used for 'extended events' in blogs and other informal
locations.
Additional information about extended events is available at:
Quick Start: Extended events in SQL Server
Extended Events

Prerequisites
This article assumes you already have some knowledge of:
Azure SQL Database
Extended events
The bulk of our documentation about extended events applies to SQL Server, Azure SQL Database, and
Azure SQL Managed Instance.
Prior exposure to the following items is helpful when choosing the Event File as the target:
Azure Storage service
Azure PowerShell with Azure Storage

Code samples
Related articles provide two code samples:
Ring Buffer target code for extended events in Azure SQL Database
Short simple Transact-SQL script.
We emphasize in the code sample article that, when you are done with a Ring Buffer target, you
should release its resources by executing an alter-drop
ALTER EVENT SESSION ... ON DATABASE DROP TARGET ...; statement. Later you can add another instance
of Ring Buffer by ALTER EVENT SESSION ... ON DATABASE ADD TARGET ... .
Event File target code for extended events in Azure SQL Database
Phase 1 is PowerShell to create an Azure Storage container.
Phase 2 is Transact-SQL that uses the Azure Storage container.

Transact-SQL differences
When you execute the CREATE EVENT SESSION command on SQL Server, you use the ON SERVER
clause. But on Azure SQL Database you use the ON DATABASE clause instead.
The ON DATABASE clause also applies to the ALTER EVENT SESSION and DROP EVENT SESSION
Transact-SQL commands.
A best practice is to include the event session option of STARTUP_STATE = ON in your CREATE EVENT
SESSION or ALTER EVENT SESSION statements.
The = ON value supports an automatic restart after a reconfiguration of the logical database due to a
failover.
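For illustration, a minimal database-scoped session that follows these conventions might look like the following (the session name is hypothetical):

-- On Azure SQL Database, event sessions are scoped to the database (ON DATABASE, not ON SERVER).
CREATE EVENT SESSION demo_session
    ON DATABASE
    ADD EVENT sqlserver.sql_statement_starting
    ADD TARGET package0.ring_buffer
    WITH (STARTUP_STATE = ON);  -- Supports an automatic restart after a reconfiguration or failover.
GO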

New catalog views


The extended events feature is supported by several catalog views. Catalog views tell you about metadata or
definitions of user-created event sessions in the current database. The views do not return information about
instances of active event sessions.

| Name of catalog view | Description |
| --- | --- |
| sys.database_event_session_actions | Returns a row for each action on each event of an event session. |
| sys.database_event_session_events | Returns a row for each event in an event session. |
| sys.database_event_session_fields | Returns a row for each customizable column that was explicitly set on events and targets. |
| sys.database_event_session_targets | Returns a row for each event target for an event session. |
| sys.database_event_sessions | Returns a row for each event session in the database. |

In Microsoft SQL Server, similar catalog views have names that include .server_ instead of .database_. The name
pattern is like sys.server_event_% .
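For example, the following query joins two of these views to list each user-defined event session together with its targets:

-- List defined (not necessarily active) event sessions and their targets in the current database.
SELECT s.name AS session_name,
       t.name AS target_name
FROM sys.database_event_sessions AS s
JOIN sys.database_event_session_targets AS t
    ON t.event_session_id = s.event_session_id;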

New dynamic management views (DMVs)


Azure SQL Database has dynamic management views (DMVs) that support extended events. DMVs tell you
about active event sessions.

| Name of DMV | Description |
| --- | --- |
| sys.dm_xe_database_session_event_actions | Returns information about event session actions. |
| sys.dm_xe_database_session_events | Returns information about session events. |
| sys.dm_xe_database_session_object_columns | Shows the configuration values for objects that are bound to a session. |
| sys.dm_xe_database_session_targets | Returns information about session targets. |
| sys.dm_xe_database_sessions | Returns a row for each event session that is scoped to the current database. |

In Microsoft SQL Server, the corresponding DMVs are named without the _database portion of the name, such as
sys.dm_xe_sessions instead of sys.dm_xe_database_sessions .
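As a quick check, the following query lists the event sessions that are currently active in the database:

-- Active (started) event sessions in the current database.
SELECT name
FROM sys.dm_xe_database_sessions;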
DMVs common to both
For extended events there are additional DMVs that are common to Azure SQL Database, Azure SQL Managed
Instance, and Microsoft SQL Server:
sys.dm_xe_map_values
sys.dm_xe_object_columns
sys.dm_xe_objects
sys.dm_xe_packages

Find the available extended events, actions, and targets


To obtain a list of the available events, actions, and targets, use the following sample query:

SELECT
o.object_type,
p.name AS [package_name],
o.name AS [db_object_name],
o.description AS [db_obj_description]
FROM
sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p ON p.guid = o.package_guid
WHERE
o.object_type in
(
'action', 'event', 'target'
)
ORDER BY
o.object_type,
p.name,
o.name;

Targets for your Azure SQL Database event sessions


Here are targets that can capture results from your event sessions on Azure SQL Database:
Ring Buffer target - Briefly holds event data in memory.
Event Counter target - Counts all events that occur during an extended events session.
Event File target - Writes complete buffers to an Azure Storage container.
The Event Tracing for Windows (ETW) API is not available for extended events on Azure SQL Database.

Restrictions
There are a couple of security-related differences befitting the cloud environment of Azure SQL Database:
Extended events are founded on the single-tenant isolation model. An event session in one database cannot
access data or events from another database.
You cannot issue a CREATE EVENT SESSION statement in the context of the master database.

Permission model
You must have Control permission on the database to issue a CREATE EVENT SESSION statement. The database
owner (dbo) has Control permission.
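If another user (not dbo) needs to manage event sessions, the database-level CONTROL permission can be granted to that user. A minimal sketch, using a hypothetical database and user name:

-- Run in the user database; the database name must match the current database.
GRANT CONTROL ON DATABASE::[your_database] TO [xevents_user];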
Storage container authorizations
The SAS token you generate for your Azure Storage container must specify rwl for the permissions. The rwl
value provides the following permissions:
Read
Write
List

Performance considerations
There are scenarios where intensive use of extended events can accumulate more active memory than is healthy
for the overall system. Therefore Azure SQL Database dynamically sets and adjusts limits on the amount of
active memory that can be accumulated by an event session. Many factors go into the dynamic calculation.
There is a cap on memory available to XEvent sessions in Azure SQL Database:
In single Azure SQL Database in the DTU purchasing model, each database can use up to 128 MB. This is
raised to 256 MB only in the Premium tier.
In single Azure SQL Database in the vCore purchasing model, each database can use up to 128 MB.
In an elastic pool, individual databases are limited by the single database limits, and in total they cannot
exceed 512 MB.
If you receive an error message that says a memory maximum was enforced, some corrective actions you can
take are:
Run fewer concurrent event sessions.
Through your CREATE and ALTER statements for event sessions, reduce the amount of memory you specify
on the MAX_MEMORY clause.
Network latency
The Event File target might experience network latency or failures while persisting data to Azure Storage blobs.
Other events in Azure SQL Database might be delayed while they wait for the network communication to
complete. This delay can slow your workload.
To mitigate this performance risk, avoid setting the EVENT_RETENTION_MODE option to
NO_EVENT_LOSS in your event session definitions.
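For example, when you define the session you can leave the retention mode at its default rather than requesting lossless delivery. A minimal sketch (session and storage names are hypothetical, and the storage URL assumes the matching database scoped credential already exists, as shown in the Event File target article later in this documentation):

-- ALLOW_SINGLE_EVENT_LOSS (the default) avoids blocking the workload on slow target writes.
CREATE EVENT SESSION latency_tolerant_session
    ON DATABASE
    ADD EVENT sqlserver.sql_statement_starting
    ADD TARGET package0.event_file
        (SET filename = 'https://yourstorageaccount.blob.core.windows.net/yourcontainer/yourtrace.xel')
    WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 3 SECONDS);
GO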

Related links
Azure Storage Cmdlets
Using Azure PowerShell with Azure Storage
How to use Blob storage from .NET
CREATE CREDENTIAL (Transact-SQL)
CREATE EVENT SESSION (Transact-SQL)
The Azure Service Updates webpage, narrowed by parameter to Azure SQL Database:
https://azure.microsoft.com/updates/?service=sql-database
Event File target code for extended events in Azure
SQL Database and SQL Managed Instance
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


You want a complete code sample for a robust way to capture and report information for an extended event.
In Microsoft SQL Server, the Event File target is used to store event outputs into a local hard drive file. But local
storage is not available to Azure SQL Database or SQL Managed Instance. Instead, use Azure Blob Storage to
support the Event File target.
This article presents a two-phase code sample:
PowerShell, to create an Azure Storage container in the cloud.
Transact-SQL:
To assign the Azure Storage container to an Event File target.
To create and start the event session, and so on.

Prerequisites
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.

An Azure account and subscription. You can sign up for a free trial.
Any database you can create a table in.
Optionally you can create an AdventureWorksLT demonstration database in minutes.
SQL Server Management Studio (ssms.exe), ideally its latest monthly update version: Download SQL
Server Management Studio
You must have the Azure PowerShell modules installed.
The modules provide commands, such as - New-AzStorageAccount .

Phase 1: PowerShell code for Azure Storage container


This PowerShell is phase 1 of the two-phase code sample.
The script starts with commands to clean up after a possible previous run, and is rerunnable.
1. Paste the PowerShell script into a simple text editor such as Notepad.exe, and save the script as a file with
the extension .ps1 .
2. Start PowerShell ISE as an Administrator.
3. At the prompt, type
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser
and then press Enter.
4. In PowerShell ISE, open your .ps1 file. Run the script.
5. The script first starts a new window in which you sign in to Azure.
If you rerun the script without disrupting your session, you have the convenient option of
commenting out the Connect-AzAccount command.

PowerShell code
This PowerShell script assumes you've already installed the Az module. For information, see Install the Azure
PowerShell module.

## TODO: Before running, find all 'TODO' and make each edit!!

cls;

#--------------- 1 -----------------------

'Script assumes you have already logged your PowerShell session into Azure.
But if not, run Connect-AzAccount, just one time.';
#Connect-AzAccount;

#-------------- 2 ------------------------

'
TODO: Edit the values assigned to these variables, especially the first few!
';

# Ensure the current date is between


# the Expiry and Start time values that you edit here.

$subscriptionName = 'YOUR_SUBSCRIPTION_NAME';
$resourceGroupName = 'YOUR_RESOURCE-GROUP-NAME';
$policySasExpiryTime = '2018-08-28T23:44:56Z';
$policySasStartTime = '2017-10-01';

$storageAccountLocation = 'YOUR_STORAGE_ACCOUNT_LOCATION';
$storageAccountName = 'YOUR_STORAGE_ACCOUNT_NAME';
$containerName = 'YOUR_CONTAINER_NAME';
$policySasToken = ' ? ';

$policySasPermission = 'rwl'; # Leave this value alone, as 'rwl'.

#--------------- 3 -----------------------

# The ending display lists your Azure subscriptions.


# One should match the $subscriptionName value you assigned
# earlier in this PowerShell script.

'Choose an existing subscription for the current PowerShell environment.';

Select-AzSubscription -Subscription $subscriptionName;

#-------------- 4 ------------------------

'
Clean up the old Azure Storage Account after any previous run,
before continuing this new run.';

if ($storageAccountName) {
Remove-AzStorageAccount `
-Name $storageAccountName `
-ResourceGroupName $resourceGroupName;
}

#--------------- 5 -----------------------

[System.DateTime]::Now.ToString();

'
Create a storage account.
This might take several minutes, will beep when ready.
...PLEASE WAIT...';

New-AzStorageAccount `
-Name $storageAccountName `
-Location $storageAccountLocation `
-ResourceGroupName $resourceGroupName `
-SkuName 'Standard_LRS';

[System.DateTime]::Now.ToString();
[System.Media.SystemSounds]::Beep.Play();

'
Get the access key for your storage account.
';

$accessKey_ForStorageAccount = `
(Get-AzStorageAccountKey `
-Name $storageAccountName `
-ResourceGroupName $resourceGroupName
).Value[0];

"`$accessKey_ForStorageAccount = $accessKey_ForStorageAccount";

'Azure Storage Account cmdlet completed.


Remainder of PowerShell .ps1 script continues.
';

#--------------- 6 -----------------------

# The context will be needed to create a container within the storage account.
'Create a context object from the storage account and its primary access key.
';

$context = New-AzStorageContext `
-StorageAccountName $storageAccountName `
-StorageAccountKey $accessKey_ForStorageAccount;

'Create a container within the storage account.


';

$containerObjectInStorageAccount = New-AzStorageContainer `
-Name $containerName `
-Context $context;

'Create a security policy to be applied to the SAS token.


';

New-AzStorageContainerStoredAccessPolicy `
-Container $containerName `
-Context $context `
-Policy $policySasToken `
-Permission $policySasPermission `
-ExpiryTime $policySasExpiryTime `
-StartTime $policySasStartTime;

'
Generate a SAS token for the container.
';
try {
$sasTokenWithPolicy = New-AzStorageContainerSASToken `
-Name $containerName `
-Context $context `
-Policy $policySasToken;
}
catch {
$Error[0].Exception.ToString();
}

#-------------- 7 ------------------------

'Display the values that YOU must edit into the Transact-SQL script next!:
';

"storageAccountName: $storageAccountName";
"containerName: $containerName";
"sasTokenWithPolicy: $sasTokenWithPolicy";

'
REMINDER: sasTokenWithPolicy here might start with "?" character, which you must exclude from Transact-SQL.
';

'
(Later, return here to delete your Azure Storage account. See the preceding Remove-AzStorageAccount -Name
$storageAccountName)';

'
Now shift to the Transact-SQL portion of the two-part code sample!';

# EOFile

Take note of the few named values that the PowerShell script prints when it ends. You must edit those values
into the Transact-SQL script that follows as phase 2.
NOTE
SQL extended events are not compatible with ADLS Gen2 storage accounts; don't use an ADLS Gen2 account for the
storage account created in the preceding PowerShell code example.

Phase 2: Transact-SQL code that uses Azure Storage container


In phase 1 of this code sample, you ran a PowerShell script to create an Azure Storage container.
Next in phase 2, the following Transact-SQL script must use the container.
The script starts with commands to clean up after a possible previous run, and is rerunnable.
The PowerShell script printed a few named values when it ended. You must edit the Transact-SQL script to use
those values. Find TODO in the Transact-SQL script to locate the edit points.
1. Open SQL Server Management Studio (ssms.exe).
2. Connect to your database in Azure SQL Database or SQL Managed Instance.
3. Select to open a new query pane.
4. Paste the following Transact-SQL script into the query pane.
5. Find every TODO in the script and make the appropriate edits.
6. Save, and then run the script.

WARNING
The SAS key value generated by the preceding PowerShell script might begin with a '?' (question mark). When you use the
SAS key in the following T-SQL script, you must remove the leading '?'. Otherwise your efforts might be blocked by
security.

Transact-SQL code

---- TODO: First, run the earlier PowerShell portion of this two-part code sample.
---- TODO: Second, find every 'TODO' in this Transact-SQL file, and edit each.

---- Transact-SQL code for Event File target on Azure SQL Database or SQL Managed Instance.

SET NOCOUNT ON;


GO

---- Step 1. Establish one little table, and ---------


---- insert one row of data.

IF EXISTS
(SELECT * FROM sys.objects
WHERE type = 'U' and name = 'gmTabEmployee')
BEGIN
DROP TABLE gmTabEmployee;
END
GO

CREATE TABLE gmTabEmployee


(
EmployeeGuid uniqueIdentifier not null default newid() primary key,
EmployeeId int not null identity(1,1),
EmployeeKudosCount int not null default 0,
EmployeeDescr nvarchar(256) null
);
GO

INSERT INTO gmTabEmployee ( EmployeeDescr )


VALUES ( 'Jane Doe' );
GO

------ Step 2. Create key, and ------------


------ Create credential (your Azure Storage container must already exist).

IF NOT EXISTS
(SELECT * FROM sys.symmetric_keys
WHERE symmetric_key_id = 101)
BEGIN
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '0C34C960-6621-4682-A123-C7EA08E3FC46' -- Or any newid().
END
GO

IF EXISTS
(SELECT * FROM sys.database_scoped_credentials
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
WHERE name = 'https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent')
BEGIN
DROP DATABASE SCOPED CREDENTIAL
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
[https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent] ;
END
GO

CREATE
DATABASE SCOPED
CREDENTIAL
-- use '.blob.', and not '.queue.' or '.table.' etc.
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
[https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent]
WITH
IDENTITY = 'SHARED ACCESS SIGNATURE', -- "SAS" token.
-- TODO: Paste in the long SasToken string here for Secret, but exclude any leading '?'.
SECRET = 'sv=2014-02-14&sr=c&si=gmpolicysastoken&sig=EjAqjo6Nu5xMLEZEkMkLbeF7TD9v1J8DNB2t8gOKTts%3D'
;
GO

------ Step 3. Create (define) an event session. --------


------ The event session has an event with an action,
------ and has a target.

IF EXISTS
(SELECT * from sys.database_event_sessions
WHERE name = 'gmeventsessionname240b')
BEGIN
DROP
EVENT SESSION
gmeventsessionname240b
ON DATABASE;
END
GO

CREATE
EVENT SESSION
gmeventsessionname240b
ON DATABASE

ADD EVENT
sqlserver.sql_statement_starting
(
ACTION (sqlserver.sql_text)
WHERE statement LIKE 'UPDATE gmTabEmployee%'
)
ADD TARGET
package0.event_file
(
-- TODO: Assign AzureStorageAccount name, and the associated Container name.
-- Also, tweak the .xel file name at end, if you like.
SET filename =

'https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent/anyfilenamexel242b.xel'
)
WITH
(MAX_MEMORY = 10 MB,
MAX_DISPATCH_LATENCY = 3 SECONDS)
;
GO

------ Step 4. Start the event session. ----------------


------ Issue the SQL Update statements that will be traced.
------ Then stop the session.

------ Note: If the target fails to attach,


------ the session must be stopped and restarted.

ALTER
EVENT SESSION
gmeventsessionname240b
ON DATABASE
STATE = START;
GO

SELECT 'BEFORE_Updates', EmployeeKudosCount, * FROM gmTabEmployee;

UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 2
WHERE EmployeeDescr = 'Jane Doe';

UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 13
WHERE EmployeeDescr = 'Jane Doe';

SELECT 'AFTER__Updates', EmployeeKudosCount, * FROM gmTabEmployee;


GO

ALTER
EVENT SESSION
gmeventsessionname240b
ON DATABASE
STATE = STOP;
GO

-------------- Step 5. Select the results. ----------

SELECT
*, 'CLICK_NEXT_CELL_TO_BROWSE_ITS_RESULTS!' as [CLICK_NEXT_CELL_TO_BROWSE_ITS_RESULTS],
CAST(event_data AS XML) AS [event_data_XML] -- TODO: In ssms.exe results grid, double-click this
cell!
FROM
sys.fn_xe_file_target_read_file
(
-- TODO: Fill in Storage Account name, and the associated Container name.
-- TODO: The name of the .xel file needs to be an exact match to the files in the storage
account Container (You can use Storage Account explorer from the portal to find out the exact file names or
you can retrieve the name using the following DMV-query: select target_data from
sys.dm_xe_database_session_targets. The 3rd xml-node, "File name", contains the name of the file currently
written to.)
'https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent/anyfilenamexel242b',
null, null, null
);
GO

-------------- Step 6. Clean up. ----------

DROP
EVENT SESSION
gmeventsessionname240b
ON DATABASE;
GO

DROP DATABASE SCOPED CREDENTIAL


-- TODO: Assign AzureStorageAccount name, and the associated Container name.
[https://gmstorageaccountxevent.blob.core.windows.net/gmcontainerxevent]
;
GO

DROP TABLE gmTabEmployee;


GO

PRINT 'Use PowerShell Remove-AzStorageAccount to delete your Azure Storage account!';


GO

If the target fails to attach when you run, you must stop and restart the event session:

ALTER EVENT SESSION gmeventsessionname240b


ON DATABASE STATE = STOP;
GO
ALTER EVENT SESSION gmeventsessionname240b
ON DATABASE STATE = START;
GO

Output
When the Transact-SQL script completes, select a cell under the event_data_XML column header. One
<event> element is displayed which shows one UPDATE statement.
Here is one <event> element that was generated during testing:
<event name="sql_statement_starting" package="sqlserver" timestamp="2015-09-22T19:18:45.420Z">
<data name="state">
<value>0</value>
<text>Normal</text>
</data>
<data name="line_number">
<value>5</value>
</data>
<data name="offset">
<value>148</value>
</data>
<data name="offset_end">
<value>368</value>
</data>
<data name="statement">
<value>UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 2
WHERE EmployeeDescr = 'Jane Doe'</value>
</data>
<action name="sql_text" package="sqlserver">
<value>

SELECT 'BEFORE_Updates', EmployeeKudosCount, * FROM gmTabEmployee;

UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 2
WHERE EmployeeDescr = 'Jane Doe';

UPDATE gmTabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 13
WHERE EmployeeDescr = 'Jane Doe';

SELECT 'AFTER__Updates', EmployeeKudosCount, * FROM gmTabEmployee;


</value>
</action>
</event>

The preceding Transact-SQL script used the following system function to read the event_file:
sys.fn_xe_file_target_read_file (Transact-SQL)
An explanation of advanced options for the viewing of data from extended events is available at:
Advanced Viewing of Target Data from Extended Events

Converting the code sample to run on SQL Server


Suppose you wanted to run the preceding Transact-SQL sample on Microsoft SQL Server.
For simplicity, you would want to completely replace use of the Azure Storage container with a simple file
such as C:\myeventdata.xel . The file would be written to the local hard drive of the computer that hosts
SQL Server.
You would not need any kind of Transact-SQL statements for CREATE MASTER KEY and CREATE
CREDENTIAL .
In the CREATE EVENT SESSION statement, in its ADD TARGET clause, you would replace the HTTPS
URL assigned to filename= with a full path string like C:\myfile.xel , as sketched after this list.
An Azure Storage account is not needed.
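A minimal sketch of the converted session for SQL Server might look like the following. It reuses the session, event, and filter from the sample above, switches to the server scope that SQL Server requires (ON SERVER instead of ON DATABASE), and writes to a local file path, which is only an example:

-- The same event session converted for SQL Server: server scoped, local Event File target.
CREATE EVENT SESSION gmeventsessionname240b
    ON SERVER
    ADD EVENT sqlserver.sql_statement_starting
        (ACTION (sqlserver.sql_text)
         WHERE statement LIKE 'UPDATE gmTabEmployee%')
    ADD TARGET package0.event_file
        (SET filename = N'C:\myfile.xel')
    WITH (MAX_MEMORY = 10 MB, MAX_DISPATCH_LATENCY = 3 SECONDS);
GO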

Next steps
For more info about accounts and containers in the Azure Storage service, see:
How to use Blob storage from .NET
Naming and Referencing Containers, Blobs, and Metadata
Working with the Root Container
Lesson 1: Create a stored access policy and a shared access signature on an Azure container
Lesson 2: Create a SQL Server credential using a shared access signature
Extended Events for Microsoft SQL Server
Ring Buffer target code for extended events in
Azure SQL Database
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


You want a complete code sample for the easiest quick way to capture and report information for an extended
event during a test. The easiest target for extended event data is the Ring Buffer target.
This topic presents a Transact-SQL code sample that:
1. Creates a table with data to demonstrate with.
2. Creates a session for an existing extended event, namely sqlser ver.sql_statement_star ting .
The event is limited to SQL statements that contain a particular Update string: statement LIKE
'%UPDATE tabEmployee%' .
Chooses to send the output of the event to a target of type Ring Buffer, namely
package0.ring_buffer .
3. Starts the event session.
4. Issues a couple of simple SQL UPDATE statements.
5. Issues a SQL SELECT statement to retrieve event output from the Ring Buffer.
sys.dm_xe_database_session_targets and other dynamic management views (DMVs) are joined.
6. Stops the event session.
7. Drops the Ring Buffer target, to release its resources.
8. Drops the event session and the demo table.

Prerequisites
An Azure account and subscription. You can sign up for a free trial.
Any database you can create a table in.
Optionally you can create an AdventureWorksLT demonstration database in minutes.
SQL Server Management Studio (ssms.exe), ideally its latest monthly update version. You can download
the latest ssms.exe from:
Topic titled Download SQL Server Management Studio.
A direct link to the download.

Code sample
With very minor modification, the following Ring Buffer code sample can be run on either Azure SQL Database
or Microsoft SQL Server. The difference is the presence of the node '_database' in the name of some dynamic
management views (DMVs), used in the FROM clause in Step 5. For example:
sys.dm_xe_database_session_targets (Azure SQL Database)
sys.dm_xe_session_targets (SQL Server)
GO
---- Transact-SQL.
---- Step set 1.

SET NOCOUNT ON;


GO

IF EXISTS
(SELECT * FROM sys.objects
WHERE type = 'U' and name = 'tabEmployee')
BEGIN
DROP TABLE tabEmployee;
END
GO

CREATE TABLE tabEmployee


(
EmployeeGuid uniqueIdentifier not null default newid() primary key,
EmployeeId int not null identity(1,1),
EmployeeKudosCount int not null default 0,
EmployeeDescr nvarchar(256) null
);
GO

INSERT INTO tabEmployee ( EmployeeDescr )


VALUES ( 'Jane Doe' );
GO

---- Step set 2.

IF EXISTS
(SELECT * from sys.database_event_sessions
WHERE name = 'eventsession_gm_azuresqldb51')
BEGIN
DROP EVENT SESSION eventsession_gm_azuresqldb51
ON DATABASE;
END
GO

CREATE
EVENT SESSION eventsession_gm_azuresqldb51
ON DATABASE
ADD EVENT
sqlserver.sql_statement_starting
(
ACTION (sqlserver.sql_text)
WHERE statement LIKE '%UPDATE tabEmployee%'
)
ADD TARGET
package0.ring_buffer
(SET
max_memory = 500 -- Units of KB.
);
GO

---- Step set 3.

ALTER EVENT SESSION eventsession_gm_azuresqldb51


ON DATABASE
STATE = START;
GO

---- Step set 4.

SELECT 'BEFORE_Updates', EmployeeKudosCount, * FROM tabEmployee;

UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 102;

UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 1015;

SELECT 'AFTER__Updates', EmployeeKudosCount, * FROM tabEmployee;


GO

---- Step set 5.

SELECT
se.name AS [session-name],
ev.event_name,
ac.action_name,
st.target_name,
se.session_source,
st.target_data,
CAST(st.target_data AS XML) AS [target_data_XML]
FROM
sys.dm_xe_database_session_event_actions AS ac

INNER JOIN sys.dm_xe_database_session_events AS ev ON ev.event_name = ac.event_name


AND CAST(ev.event_session_address AS BINARY(8)) = CAST(ac.event_session_address AS BINARY(8))

INNER JOIN sys.dm_xe_database_session_object_columns AS oc


ON CAST(oc.event_session_address AS BINARY(8)) = CAST(ac.event_session_address AS BINARY(8))

INNER JOIN sys.dm_xe_database_session_targets AS st


ON CAST(st.event_session_address AS BINARY(8)) = CAST(ac.event_session_address AS BINARY(8))

INNER JOIN sys.dm_xe_database_sessions AS se


ON CAST(ac.event_session_address AS BINARY(8)) = CAST(se.address AS BINARY(8))
WHERE
oc.column_name = 'occurrence_number'
AND
se.name = 'eventsession_gm_azuresqldb51'
AND
ac.action_name = 'sql_text'
ORDER BY
se.name,
ev.event_name,
ac.action_name,
st.target_name,
se.session_source
;
GO

---- Step set 6.

ALTER EVENT SESSION eventsession_gm_azuresqldb51


ON DATABASE
STATE = STOP;
GO

---- Step set 7.

ALTER EVENT SESSION eventsession_gm_azuresqldb51


ON DATABASE
DROP TARGET package0.ring_buffer;
GO

---- Step set 8.

DROP EVENT SESSION eventsession_gm_azuresqldb51


ON DATABASE;
GO

DROP TABLE tabEmployee;


GO

Ring Buffer contents


We used ssms.exe to run the code sample.
To view the results, we clicked the cell under the column header target_data_XML . This click created
another file tab in ssms.exe in which the content of the result cell was displayed, as XML.
The output is shown in the following block. It looks long, but it is just two <event> elements.

<RingBufferTarget truncated="0" processingTime="0" totalEventsProcessed="2" eventCount="2" droppedCount="0"


memoryUsed="1728">
<event name="sql_statement_starting" package="sqlserver" timestamp="2015-09-22T15:29:31.317Z">
<data name="state">
<type name="statement_starting_state" package="sqlserver" />
<value>0</value>
<text>Normal</text>
</data>
<data name="line_number">
<type name="int32" package="package0" />
<value>7</value>
</data>
<data name="offset">
<type name="int32" package="package0" />
<value>184</value>
</data>
<data name="offset_end">
<type name="int32" package="package0" />
<value>328</value>
</data>
<data name="statement">
<type name="unicode_string" package="package0" />
<value>UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 102</value>
</data>
<action name="sql_text" package="sqlserver">
<type name="unicode_string" package="package0" />
<value>
---- Step set 4.

SELECT 'BEFORE_Updates', EmployeeKudosCount, * FROM tabEmployee;

UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 102;

UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 1015;

SELECT 'AFTER__Updates', EmployeeKudosCount, * FROM tabEmployee;


</value>
</action>
</event>
<event name="sql_statement_starting" package="sqlserver" timestamp="2015-09-22T15:29:31.327Z">
<data name="state">
<type name="statement_starting_state" package="sqlserver" />
<value>0</value>
<text>Normal</text>
</data>
<data name="line_number">
<type name="int32" package="package0" />
<value>10</value>
</data>
<data name="offset">
<type name="int32" package="package0" />
<value>340</value>
</data>
<data name="offset_end">
<type name="int32" package="package0" />
<value>486</value>
</data>
<data name="statement">
<type name="unicode_string" package="package0" />
<value>UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 1015</value>
</data>
<action name="sql_text" package="sqlserver">
<type name="unicode_string" package="package0" />
<value>
---- Step set 4.

SELECT 'BEFORE_Updates', EmployeeKudosCount, * FROM tabEmployee;

UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 102;

UPDATE tabEmployee
SET EmployeeKudosCount = EmployeeKudosCount + 1015;

SELECT 'AFTER__Updates', EmployeeKudosCount, * FROM tabEmployee;


</value>
</action>
</event>
</RingBufferTarget>

Release resources held by your Ring Buffer


When you are done with your Ring Buffer, you can remove it and release its resources issuing an ALTER like the
following:

ALTER EVENT SESSION eventsession_gm_azuresqldb51


ON DATABASE
DROP TARGET package0.ring_buffer;
GO

The definition of your event session is updated, but not dropped. Later you can add another instance of the Ring
Buffer to your event session:
ALTER EVENT SESSION eventsession_gm_azuresqldb51
ON DATABASE
ADD TARGET
package0.ring_buffer
(SET
max_memory = 500 -- Units of KB.
);

More information
The primary topic for extended events on Azure SQL Database is:
Extended event considerations in Azure SQL Database, which contrasts some aspects of extended events that
differ between Azure SQL Database versus Microsoft SQL Server.
Other code sample topics for extended events are available at the following links. However, you must routinely
check any sample to see whether the sample targets Microsoft SQL Server versus Azure SQL Database. Then
you can decide whether minor changes are needed to run the sample.
Code sample for Azure SQL Database: Event File target code for extended events in Azure SQL Database
Quickstart: Use .NET and C# in Visual Studio to
connect to and query a database
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This quickstart shows how to use the .NET Framework and C# code in Visual Studio to query a database in
Azure SQL or Synapse SQL with Transact-SQL statements.

Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
Visual Studio 2019 Community, Professional, or Enterprise edition.
A database where you can run a query.
You can use one of these quickstarts to create and then configure a database:

| Action | SQL Database | SQL Managed Instance | SQL Server on Azure VM | Azure Synapse Analytics |
| --- | --- | --- | --- | --- |
| Create | Portal | Portal | Portal | Portal |
| | CLI | CLI | | |
| | PowerShell | PowerShell | PowerShell | PowerShell |
| | Deployment template | Deployment template | | |
| Configure | Server-level IP firewall rule | Connectivity from a VM | | |
| | | Connectivity from on-premises | Connect to a SQL Server instance | |
| Get connection information | Azure SQL | Azure SQL | SQL VM | Synapse SQL |

Create code to query the database in Azure SQL Database


1. In Visual Studio, create a new project.
2. In the New Project dialog, select the Visual C# , Console App (.NET Framework) .
3. Enter sqltest for the project name, and then select OK . The new project is created.
4. Select Project > Manage NuGet Packages .
5. In NuGet Package Manager , select the Browse tab, then search for and select
Microsoft.Data.SqlClient .
6. On the Microsoft.Data.SqlClient page, select Install .
If prompted, select OK to continue with the installation.
If a License Acceptance window appears, select I Accept .
7. When the install completes, you can close NuGet Package Manager .
8. In the code editor, replace the Program.cs contents with the following code. Replace your values for
<your_server> , <your_username> , <your_password> , and <your_database> .

using System;
using Microsoft.Data.SqlClient;
using System.Text;

namespace sqltest
{
class Program
{
static void Main(string[] args)
{
try
{
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = "<your_server>.database.windows.net";
builder.UserID = "<your_username>";
builder.Password = "<your_password>";
builder.InitialCatalog = "<your_database>";

using (SqlConnection connection = new SqlConnection(builder.ConnectionString))


{
Console.WriteLine("\nQuery data example:");
Console.WriteLine("=========================================\n");

String sql = "SELECT name, collation_name FROM sys.databases";

using (SqlCommand command = new SqlCommand(sql, connection))


{
connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{
Console.WriteLine("{0} {1}", reader.GetString(0),
reader.GetString(1));
}
}
}
}
}
catch (SqlException e)
{
Console.WriteLine(e.ToString());
}
Console.ReadLine();
}
}
}

Run the code


1. To run the app, select Debug > Start Debugging , or select Start on the toolbar, or press F5 .
2. Verify that the database names and collations are returned, and then close the app window.
Next steps
Learn how to connect and query a database in Azure SQL Database by using .NET Core on
Windows/Linux/macOS.
Learn about Getting started with .NET Core on Windows/Linux/macOS using the command line.
Learn how to Design your first database in Azure SQL Database by using SSMS or Design your first database
in Azure SQL Database by using .NET.
For more information about .NET, see .NET documentation.
Retry logic example: Connect resiliently to Azure SQL with ADO.NET.
Quickstart: Use .NET Core (C#) to query a database
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
In this quickstart, you'll use .NET Core and C# code to connect to a database. You'll then run a Transact-SQL
statement to query data.

TIP
The following Microsoft Learn module helps you learn for free how to Develop and configure an ASP.NET application that
queries a database in Azure SQL Database

Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
.NET Core SDK for your operating system installed.
A database where you can run your query.
You can use one of these quickstarts to create and then configure a database:

| Action | SQL Database | SQL Managed Instance | SQL Server on Azure VM | Azure Synapse Analytics |
| --- | --- | --- | --- | --- |
| Create | Portal | Portal | Portal | Portal |
| | CLI | CLI | | |
| | PowerShell | PowerShell | PowerShell | PowerShell |
| | Deployment template | Deployment template | | |
| Configure | Server-level IP firewall rule | Connectivity from a VM | | |
| | | Connectivity from on-premises | Connect to a SQL Server instance | |
| Get connection information | Azure SQL | Azure SQL | SQL VM | Synapse SQL |

Create a new .NET Core project


1. Open a command prompt and create a folder named sqltest . Navigate to this folder and run this
command.
dotnet new console

This command creates new app project files, including an initial C# code file (Program.cs ), an XML
configuration file (sqltest.csproj ), and needed binaries.
2. In a text editor, open sqltest.csproj and paste the following XML between the <Project> tags. This XML
adds System.Data.SqlClient as a dependency.

<ItemGroup>
<PackageReference Include="System.Data.SqlClient" Version="4.6.0" />
</ItemGroup>

Insert code to query the database in Azure SQL Database


1. In a text editor, open Program.cs .
2. Replace the contents with the following code and add the appropriate values for your server, database,
username, and password.

NOTE
To use an ADO.NET connection string, replace the 4 lines in the code setting the server, database, username, and
password with the line below. In the string, set your username and password.
builder.ConnectionString="<your_ado_net_connection_string>";
using System;
using System.Data.SqlClient;
using System.Text;

namespace sqltest
{
class Program
{
static void Main(string[] args)
{
try
{
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();

builder.DataSource = "<your_server.database.windows.net>";
builder.UserID = "<your_username>";
builder.Password = "<your_password>";
builder.InitialCatalog = "<your_database>";

using (SqlConnection connection = new SqlConnection(builder.ConnectionString))


{
Console.WriteLine("\nQuery data example:");
Console.WriteLine("=========================================\n");

connection.Open();

String sql = "SELECT name, collation_name FROM sys.databases";

using (SqlCommand command = new SqlCommand(sql, connection))


{
using (SqlDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{
Console.WriteLine("{0} {1}", reader.GetString(0), reader.GetString(1));
}
}
}
}
}
catch (SqlException e)
{
Console.WriteLine(e.ToString());
}
Console.WriteLine("\nDone. Press enter.");
Console.ReadLine();
}
}
}

Run the code


1. At the prompt, run the following commands.

dotnet restore
dotnet run

2. Verify that the rows are returned.


Query data example:
=========================================

master SQL_Latin1_General_CP1_CI_AS
tempdb SQL_Latin1_General_CP1_CI_AS
WideWorldImporters Latin1_General_100_CI_AS

Done. Press enter.

3. Choose Enter to close the application window.

Next steps
Getting started with .NET Core on Windows/Linux/macOS using the command line.
Learn how to connect and query Azure SQL Database or Azure SQL Managed Instance, by using the .NET
Framework and Visual Studio.
Learn how to Design your first database with SSMS or Design a database and connect with C# and ADO.NET.
For more information about .NET, see .NET documentation.
Quickstart: Use Golang to query a database in
Azure SQL Database or Azure SQL Managed
Instance
7/12/2022 • 6 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In this quickstart, you'll use the Golang programming language to connect to a database in Azure SQL Database
or Azure SQL Managed Instance with the go-mssqldb driver (https://github.com/microsoft/go-mssqldb). The sample
queries and modifies data with explicit Transact-SQL statements. Golang is an open-source programming
language that makes it easy to build simple, reliable, and efficient software.

Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
A database in Azure SQL Database or Azure SQL Managed Instance. You can use one of these quickstarts
to create a database:

| Action | SQL Database | SQL Managed Instance | SQL Server on Azure VM |
| --- | --- | --- | --- |
| Create | Portal | Portal | Portal |
| Create | CLI | CLI | |
| Create | PowerShell | PowerShell | PowerShell |
| Configure | Server-level IP firewall rule | Connectivity from a VM | |
| Configure | | Connectivity from on-premises | Connect to a SQL Server instance |
| Load data | Adventure Works loaded per quickstart | Restore Wide World Importers | Restore Wide World Importers |
| Load data | | Restore or import Adventure Works from a BACPAC file from GitHub | Restore or import Adventure Works from a BACPAC file from GitHub |

IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you
must either import the Adventure Works database into an instance database or modify the scripts in this article
to use the Wide World Importers database.

Golang and related software for your operating system installed:


macOS : Install Homebrew and Golang. See Step 1.2.
Ubuntu : Install Golang. See Step 1.2.
Windows : Install Golang. See Step 1.2.

Get server connection information


Get the connection information you need to connect to the database. You'll need the fully qualified server name
or host name, database name, and login information for the upcoming procedures.
1. Sign in to the Azure portal.
2. Navigate to the SQL Databases or SQL Managed Instances page.
3. On the Overview page, review the fully qualified server name next to Server name for a database in
Azure SQL Database, or the fully qualified server name (or IP address) next to Host for an Azure SQL
Managed Instance or SQL Server on Azure VM. To copy the server name or host name, hover over it and
select the Copy icon.

NOTE
For connection information for SQL Server on Azure VM, see Connect to a SQL Server instance.
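
If you prefer to script this lookup rather than use the portal, the fully qualified server name can also be read with Az PowerShell. The sketch below is illustrative only; the resource group and server names are placeholders you would replace with your own values.

# Sign in and look up the logical server (names below are placeholders)
Connect-AzAccount
$server = Get-AzSqlServer -ResourceGroupName "<your_resource_group>" -ServerName "<your_server>"
$server.FullyQualifiedDomainName    # for example, <your_server>.database.windows.net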

Create Golang project and dependencies


1. From the terminal, create a new project folder called SqlServerSample .

mkdir SqlServerSample

2. Navigate to SqlServerSample and install the SQL Server driver for Go.

cd SqlServerSample
go get github.com/microsoft/go-mssqldb

Create sample data


1. In a text editor, create a file called CreateTestData.sql in the SqlServerSample folder. In the file, paste
this T-SQL code, which creates a schema and a table, and then inserts a few rows.
CREATE SCHEMA TestSchema;
GO

CREATE TABLE TestSchema.Employees (
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name NVARCHAR(50),
    Location NVARCHAR(50)
);
GO

INSERT INTO TestSchema.Employees (Name, Location) VALUES
    (N'Jared', N'Australia'),
    (N'Nikita', N'India'),
    (N'Tom', N'Germany');
GO

SELECT * FROM TestSchema.Employees;
GO

2. Use sqlcmd to connect to the database and run your newly created Azure SQL script. Replace the
appropriate values for your server, database, username, and password.

sqlcmd -S <your_server>.database.windows.net -U <your_username> -P <your_password> -d <your_database> -i ./CreateTestData.sql
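
If you work mostly in PowerShell, a hedged alternative is Invoke-Sqlcmd from the SqlServer PowerShell module. The sketch below assumes that module is installed and reuses the same placeholder values as the sqlcmd command above.

# Requires: Install-Module SqlServer
Invoke-Sqlcmd -ServerInstance "<your_server>.database.windows.net" -Database "<your_database>" `
    -Username "<your_username>" -Password "<your_password>" -InputFile ./CreateTestData.sql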

Insert code to query the database


1. Create a file named sample.go in the SqlServerSample folder.
2. In the file, paste this code. Add the values for your server, database, username, and password. This
example uses the Golang context methods to make sure there's an active connection.

package main

import (
_ "github.com/microsoft/go-mssqldb"
"database/sql"
"context"
"log"
"fmt"
"errors"
)

var db *sql.DB

var server = "<your_server.database.windows.net>"


var port = 1433
var user = "<your_username>"
var password = "<your_password>"
var database = "<your_database>"

func main() {
// Build connection string
connString := fmt.Sprintf("server=%s;user id=%s;password=%s;port=%d;database=%s;",
server, user, password, port, database)

var err error

// Create connection pool


db, err = sql.Open("sqlserver", connString)
if err != nil {
log.Fatal("Error creating connection pool: ", err.Error())
}
ctx := context.Background()
err = db.PingContext(ctx)
if err != nil {
log.Fatal(err.Error())
}
fmt.Printf("Connected!\n")

// Create employee
createID, err := CreateEmployee("Jake", "United States")
if err != nil {
log.Fatal("Error creating Employee: ", err.Error())
}
fmt.Printf("Inserted ID: %d successfully.\n", createID)

// Read employees
count, err := ReadEmployees()
if err != nil {
log.Fatal("Error reading Employees: ", err.Error())
}
fmt.Printf("Read %d row(s) successfully.\n", count)

// Update from database


updatedRows, err := UpdateEmployee("Jake", "Poland")
if err != nil {
log.Fatal("Error updating Employee: ", err.Error())
}
fmt.Printf("Updated %d row(s) successfully.\n", updatedRows)

// Delete from database


deletedRows, err := DeleteEmployee("Jake")
if err != nil {
log.Fatal("Error deleting Employee: ", err.Error())
}
fmt.Printf("Deleted %d row(s) successfully.\n", deletedRows)
}

// CreateEmployee inserts an employee record


func CreateEmployee(name string, location string) (int64, error) {
ctx := context.Background()
var err error

if db == nil {
err = errors.New("CreateEmployee: db is null")
return -1, err
}

// Check if database is alive.


err = db.PingContext(ctx)
if err != nil {
return -1, err
}

tsql := `
INSERT INTO TestSchema.Employees (Name, Location) VALUES (@Name, @Location);
select isNull(SCOPE_IDENTITY(), -1);
`

stmt, err := db.Prepare(tsql)


if err != nil {
return -1, err
}
defer stmt.Close()

row := stmt.QueryRowContext(
ctx,
sql.Named("Name", name),
sql.Named("Location", location))
var newID int64
err = row.Scan(&newID)
if err != nil {
return -1, err
}

return newID, nil


}

// ReadEmployees reads all employee records


func ReadEmployees() (int, error) {
ctx := context.Background()

// Check if database is alive.


err := db.PingContext(ctx)
if err != nil {
return -1, err
}

tsql := fmt.Sprintf("SELECT Id, Name, Location FROM TestSchema.Employees;")

// Execute query
rows, err := db.QueryContext(ctx, tsql)
if err != nil {
return -1, err
}

defer rows.Close()

var count int

// Iterate through the result set.


for rows.Next() {
var name, location string
var id int

// Get values from row.


err := rows.Scan(&id, &name, &location)
if err != nil {
return -1, err
}

fmt.Printf("ID: %d, Name: %s, Location: %s\n", id, name, location)


count++
}

return count, nil


}

// UpdateEmployee updates an employee's information


func UpdateEmployee(name string, location string) (int64, error) {
ctx := context.Background()

// Check if database is alive.


err := db.PingContext(ctx)
if err != nil {
return -1, err
}

tsql := fmt.Sprintf("UPDATE TestSchema.Employees SET Location = @Location WHERE Name = @Name")

// Execute non-query with named parameters


result, err := db.ExecContext(
ctx,
tsql,
sql.Named("Location", location),
sql.Named("Name", name))
if err != nil {
return -1, err
}
return result.RowsAffected()
}

// DeleteEmployee deletes an employee from the database


func DeleteEmployee(name string) (int64, error) {
ctx := context.Background()

// Check if database is alive.


err := db.PingContext(ctx)
if err != nil {
return -1, err
}

tsql := fmt.Sprintf("DELETE FROM TestSchema.Employees WHERE Name = @Name;")

// Execute non-query with named parameters


result, err := db.ExecContext(ctx, tsql, sql.Named("Name", name))
if err != nil {
return -1, err
}

return result.RowsAffected()
}

Run the code


1. At the command prompt, run the following command.

go run sample.go

2. Verify the output.

Connected!
Inserted ID: 4 successfully.
ID: 1, Name: Jared, Location: Australia
ID: 2, Name: Nikita, Location: India
ID: 3, Name: Tom, Location: Germany
ID: 4, Name: Jake, Location: United States
Read 4 row(s) successfully.
Updated 1 row(s) successfully.
Deleted 1 row(s) successfully.

Next steps
Design your first database in Azure SQL Database
Golang driver for SQL Server
Report issues or ask questions
Quickstart: Use Node.js to query a database in
Azure SQL Database or Azure SQL Managed
Instance
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In this quickstart, you use Node.js to connect to a database and query data.

Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.

| Action | SQL Database | SQL Managed Instance | SQL Server on Azure VM |
| --- | --- | --- | --- |
| Create | Portal | Portal | Portal |
| | CLI | CLI | |
| | PowerShell | PowerShell | PowerShell |
| Configure | Server-level IP firewall rule | Connectivity from a VM | |
| | | Connectivity from on-premises | Connect to a SQL Server instance |
| Load data | Adventure Works loaded per quickstart | Restore Wide World Importers | Restore Wide World Importers |
| | | Restore or import Adventure Works from a BACPAC file from GitHub | Restore or import Adventure Works from a BACPAC file from GitHub |
Node.js-related software
macOS
Ubuntu
Windows

Install Homebrew and Node.js, and then install the ODBC driver and SQLCMD using steps 1.2 and 1.3 in
Create Node.js apps using SQL Server on macOS.

IMPORTANT
The scripts in this article are written to use the Adventure Works database.
NOTE
You can optionally choose to use an Azure SQL Managed Instance.
To create and configure, use the Azure portal, PowerShell, or CLI, and then set up on-premises or VM connectivity.
To load data, see restore with BACPAC with the Adventure Works file, or see restore the Wide World Importers database.

Get server connection information


Get the connection information you need to connect to the database in Azure SQL Database. You'll need the fully
qualified server name or host name, database name, and login information for the upcoming procedures.
1. Sign in to the Azure portal.
2. Go to the SQL Databases or SQL Managed Instances page.
3. On the Overview page, review the fully qualified server name next to Server name for a database in
Azure SQL Database, or the fully qualified server name (or IP address) next to Host for an Azure SQL
Managed Instance or SQL Server on Azure VM. To copy the server name or host name, hover over it and
select the Copy icon.

NOTE
For connection information for SQL Server on Azure VM, see Connect to SQL Server.

Create the project


Open a command prompt and create a folder named sqltest. Open the folder you created and run the following
commands:

npm init -y
npm install tedious

Add code to query the database


1. In your favorite text editor, create a new file, sqltest.js.
2. Replace its contents with the following code. Then add the appropriate values for your server, database,
user, and password.

const { Connection, Request } = require("tedious");

// Create connection to database


const config = {
authentication: {
options: {
userName: "username", // update me
password: "password" // update me
},
type: "default"
},
server: "your_server.database.windows.net", // update me
options: {
database: "your_database", //update me
encrypt: true
}
};

/*
//Use Azure VM Managed Identity to connect to the SQL database
const config = {
server: process.env["db_server"],
authentication: {
type: 'azure-active-directory-msi-vm',
},
options: {
database: process.env["db_database"],
encrypt: true,
port: 1433
}
};

//Use Azure App Service Managed Identity to connect to the SQL database
const config = {
server: process.env["db_server"],
authentication: {
type: 'azure-active-directory-msi-app-service',
},
options: {
database: process.env["db_database"],
encrypt: true,
port: 1433
}
};

*/

const connection = new Connection(config);

// Attempt to connect and execute queries if connection goes through


connection.on("connect", err => {
if (err) {
console.error(err.message);
} else {
queryDatabase();
}
});

connection.connect();

function queryDatabase() {
console.log("Reading rows from the Table...");

// Read all rows from table


const request = new Request(
`SELECT TOP 20 pc.Name as CategoryName,
p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p ON pc.productcategoryid = p.productcategoryid`,
(err, rowCount) => {
if (err) {
console.error(err.message);
} else {
console.log(`${rowCount} row(s) returned`);
}
}
);

request.on("row", columns => {


columns.forEach(column => {
console.log("%s\t%s", column.metadata.colName, column.value);
});
});

connection.execSql(request);
}

NOTE
For more information about using managed identity for authentication, complete the tutorial to access data via managed
identity.

NOTE
The code example uses the AdventureWorksLT sample database in Azure SQL Database.

Run the code


1. At the command prompt, run the program.

node sqltest.js

2. Verify the top 20 rows are returned and close the application window.

Next steps
Microsoft Node.js Driver for SQL Server
Connect and query on Windows/Linux/macOS with .NET core, Visual Studio Code, or SSMS (Windows
only)
Get started with .NET Core on Windows/Linux/macOS using the command line
Design your first database in Azure SQL Database using .NET or SSMS
Quickstart: Use PHP to query a database in Azure
SQL Database
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This article demonstrates how to use PHP to connect to a database in Azure SQL Database or Azure SQL
Managed Instance. You can then use T-SQL statements to query data.

Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
A database in Azure SQL Database or Azure SQL Managed Instance. You can use one of these quickstarts
to create and then configure a database:

| Action | SQL Database | SQL Managed Instance | SQL Server on Azure VM |
| --- | --- | --- | --- |
| Create | Portal | Portal | Portal |
| | CLI | CLI | |
| | PowerShell | PowerShell | PowerShell |
| Configure | Server-level IP firewall rule | Connectivity from a VM | |
| | | Connectivity from on-premises | Connect to a SQL Server instance |
| Load data | Adventure Works loaded per quickstart | Restore Wide World Importers | Restore Wide World Importers |
| | | Restore or import Adventure Works from a BACPAC file from GitHub | Restore or import Adventure Works from a BACPAC file from GitHub |

IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you
must either import the Adventure Works database into an instance database or modify the scripts in this article
to use the Wide World Importers database.

PHP-related software installed for your operating system:


macOS : Install PHP, the ODBC driver, and then the PHP Driver for SQL Server. See Steps 1, 2, and 3.
Linux : Install PHP, the ODBC driver, and then the PHP Driver for SQL Server. See Steps 1, 2, and 3.
Windows : Install PHP and the PHP Drivers, and then the ODBC driver and SQLCMD. See Steps 1.2
and 1.3.

Get server connection information


Get the connection information you need to connect to the database in Azure SQL Database. You'll need the fully
qualified server name or host name, database name, and login information for the upcoming procedures.
1. Sign in to the Azure portal.
2. Navigate to the SQL Databases or SQL Managed Instances page.
3. On the Overview page, review the fully qualified server name next to Server name for a database in
Azure SQL Database, or the fully qualified server name (or IP address) next to Host for an Azure SQL
Managed Instance or SQL Server in an Azure VM. To copy the server name or host name, hover over it
and select the Copy icon.

NOTE
For connection information for SQL Server on Azure VM, see Connect to a SQL Server instance.

Add code to query the database


1. In your favorite text editor, create a new file, sqltest.php.
2. Replace its contents with the following code. Then add the appropriate values for your server, database,
user, and password.

<?php
$serverName = "your_server.database.windows.net"; // update me
$connectionOptions = array(
"Database" => "your_database", // update me
"Uid" => "your_username", // update me
"PWD" => "your_password" // update me
);
//Establishes the connection
$conn = sqlsrv_connect($serverName, $connectionOptions);
$tsql= "SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p
ON pc.productcategoryid = p.productcategoryid";
$getResults= sqlsrv_query($conn, $tsql);
echo ("Reading data from table" . PHP_EOL);
if ($getResults == FALSE)
echo (sqlsrv_errors());
while ($row = sqlsrv_fetch_array($getResults, SQLSRV_FETCH_ASSOC)) {
echo ($row['CategoryName'] . " " . $row['ProductName'] . PHP_EOL);
}
sqlsrv_free_stmt($getResults);
?>

Run the code


1. At the command prompt, run the app.

php sqltest.php
2. Verify the top 20 rows are returned and close the app window.

Next steps
Design your first database in Azure SQL Database
Microsoft PHP Drivers for SQL Server
Report issues or ask questions
Retry logic example: Connect resiliently to Azure SQL with PHP
Quickstart: Use Python to query a database
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
In this quickstart, you use Python to connect to Azure SQL Database, Azure SQL Managed Instance, or Synapse
SQL database and use T-SQL statements to query data.

Prerequisites
To complete this quickstart, you need:
An Azure account with an active subscription. Create an account for free.
A database where you will run a query.
You can use one of these quickstarts to create and then configure a database:

| Action | SQL Database | SQL Managed Instance | SQL Server on Azure VM | Azure Synapse Analytics |
| --- | --- | --- | --- | --- |
| Create | Portal | Portal | Portal | Portal |
| | CLI | CLI | | |
| | PowerShell | PowerShell | PowerShell | PowerShell |
| | Deployment template | Deployment template | | |
| Configure | Server-level IP firewall rule | Connectivity from a VM | | |
| | | Connectivity from on-premises | Connect to a SQL Server instance | |
| Get connection information | Azure SQL | Azure SQL | SQL VM | Synapse SQL |

Python 3 and related software

| Action | macOS | Ubuntu | Windows |
| --- | --- | --- | --- |
| Install the ODBC driver, SQLCMD, and the Python driver for SQL Server | Use steps 1.2, 1.3, and 2.1 in create Python apps using SQL Server on macOS. This will also install Homebrew and Python. Although the linked article references SQL Server, these steps are also applicable to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics. | Configure an environment for pyodbc Python development. | Configure an environment for pyodbc Python development. |
| Install Python and other required packages | | Use sudo apt-get install python python-pip gcc g++ build-essential. | |
| Further information | Microsoft ODBC driver on macOS | Microsoft ODBC driver on Linux | Microsoft ODBC driver on Linux |

To further explore Python and the database in Azure SQL Database, see Azure SQL Database libraries for Python,
the pyodbc repository, and a pyodbc sample.

Create code to query your database


1. In a text editor, create a new file named sqltest.py.
2. Add the following code. Get the connection information from the prerequisites section and substitute
your own values for <server>, <database>, <username>, and <password>.

import pyodbc
server = '<server>.database.windows.net'
database = '<database>'
username = '<username>'
password = '{<password>}'
driver= '{ODBC Driver 17 for SQL Server}'

with pyodbc.connect('DRIVER='+driver+';SERVER=tcp:'+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
with conn.cursor() as cursor:
cursor.execute("SELECT TOP 3 name, collation_name FROM sys.databases")
row = cursor.fetchone()
while row:
print (str(row[0]) + " " + str(row[1]))
row = cursor.fetchone()

Run the code


1. At a command prompt, run the following command:

python sqltest.py
2. Verify that the databases and their collations are returned, and then close the command window.

Next steps
Design your first database in Azure SQL Database
Microsoft Python drivers for SQL Server
Python developer center
Quickstart: Use Ruby to query a database in Azure
SQL Database or Azure SQL Managed Instance
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This quickstart demonstrates how to use Ruby to connect to a database and query data with Transact-SQL
statements.

Prerequisites
To complete this quickstart, you need the following prerequisites:
A database. You can use one of these quickstarts to create and then configure the database:

| Action | SQL Database | SQL Managed Instance | SQL Server on Azure Virtual Machines |
| --- | --- | --- | --- |
| Create | Portal | Portal | Portal |
| | CLI | CLI | |
| | PowerShell | PowerShell | PowerShell |
| Configure | Server-level IP firewall rule | Connectivity from a VM | |
| | | Connectivity from on-premises | Connect to a SQL Server instance |
| Load data | Adventure Works loaded per quickstart | Restore Wide World Importers | Restore Wide World Importers |
| | | Restore or import Adventure Works from a BACPAC file from GitHub | Restore or import Adventure Works from a BACPAC file from GitHub |

IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, either
import the Adventure Works database into an instance database or modify the scripts in this article to use the
Wide World Importers database.

Ruby and related software for your operating system:


macOS : Install Homebrew, rbenv and ruby-build, Ruby, FreeTDS, and TinyTDS. See Steps 1.2, 1.3,
1.4, 1.5, and 2.1 in Create Ruby apps using SQL Server on macOS.
Ubuntu : Install prerequisites for Ruby, rbenv and ruby-build, Ruby, FreeTDS, and TinyTDS. See
Steps 1.2, 1.3, 1.4, 1.5, and 2.1 in Create Ruby apps using SQL Server on Ubuntu.
Windows : Install Ruby, Ruby Devkit, and TinyTDS. See Configure development environment for
Ruby development.

Get server connection information


Get the information you need to connect to a database in Azure SQL Database. You'll need the fully qualified
server name or host name, database name, and sign-in information for the upcoming procedures.
1. Sign in to the Azure portal.
2. Navigate to the SQL databases or SQL Managed Instances page.
3. On the Overview page, review the fully qualified server name next to Server name for a database in
Azure SQL Database, or the fully qualified server name (or IP address) next to Host for an Azure SQL
Managed Instance or SQL Server on Virtual Machines. To copy the server name or host name, hover over
it and select the Copy icon.

NOTE
For connection information for SQL Server on Azure Virtual Machines, see Connect to a SQL Server instance.

Create code to query your database in Azure SQL Database


1. In a text or code editor, create a new file named sqltest.rb.
2. Add the following code. Substitute the values from your database in Azure SQL Database for <server> ,
<database> , <username> , and <password> .

require 'tiny_tds'
server = '<server>.database.windows.net'
database = '<database>'
username = '<username>'
password = '<password>'
client = TinyTds::Client.new username: username, password: password,
host: server, port: 1433, database: database, azure: true

puts "Reading data from table"


tsql = "SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p
ON pc.productcategoryid = p.productcategoryid"
result = client.execute(tsql)
result.each do |row|
puts row
end

IMPORTANT
This example uses the sample AdventureWorksLT data, which you can choose as source when creating your
database. If your database has different data, use tables from your own database in the SELECT query.

Run the code


1. At a command prompt, run the following command:

ruby sqltest.rb
2. Verify that the top 20 Category/Product rows from your database are returned.

Next steps
Design your first database in Azure SQL Database
GitHub repository for TinyTDS
Report issues or ask questions about TinyTDS
Ruby driver for SQL Server
Manage historical data in Temporal tables with
retention policy
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Temporal tables may increase database size more than regular tables do, especially if you retain historical data for a
longer period of time. Hence, a retention policy for historical data is an important aspect of planning and
managing the lifecycle of every temporal table. Temporal tables in Azure SQL Database and Azure SQL Managed
Instance come with an easy-to-use retention mechanism that helps you accomplish this task.
Temporal history retention can be configured at the individual table level, which allows users to create flexible
aging policies. Applying temporal retention is simple: it requires only one parameter to be set during table
creation or schema change.
After you define a retention policy, Azure SQL Database and Azure SQL Managed Instance start checking
regularly for historical rows that are eligible for automatic data cleanup. Identification of matching rows
and their removal from the history table occur transparently, in a background task that is scheduled and run
by the system. The age condition for history table rows is checked based on the column representing the end of
the SYSTEM_TIME period. If the retention period, for example, is set to six months, table rows eligible for cleanup satisfy
the following condition:

ValidTo < DATEADD (MONTH, -6, SYSUTCDATETIME())

In the preceding example, we assumed that ValidTo column corresponds to the end of SYSTEM_TIME period.

How to configure retention policy


Before you configure retention policy for a temporal table, check first whether temporal historical retention is
enabled at the database level.

SELECT is_temporal_history_retention_enabled, name
FROM sys.databases

The database flag is_temporal_history_retention_enabled is set to ON by default, but users can change it with
the ALTER DATABASE statement. It is also automatically set to OFF after a point-in-time restore operation. To enable
temporal history retention cleanup for your database, execute the following statement:

ALTER DATABASE [<myDB>]
SET TEMPORAL_HISTORY_RETENTION ON

IMPORTANT
You can configure retention for temporal tables even if is_temporal_history_retention_enabled is OFF, but automatic
cleanup for aged rows is not triggered in that case.

Retention policy is configured during table creation by specifying value for the HISTORY_RETENTION_PERIOD
parameter:
CREATE TABLE dbo.WebsiteUserInfo
(
[UserID] int NOT NULL PRIMARY KEY CLUSTERED
, [UserName] nvarchar(100) NOT NULL
, [PagesVisited] int NOT NULL
, [ValidFrom] datetime2 (0) GENERATED ALWAYS AS ROW START
, [ValidTo] datetime2 (0) GENERATED ALWAYS AS ROW END
, PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH
(
SYSTEM_VERSIONING = ON
(
HISTORY_TABLE = dbo.WebsiteUserInfoHistory,
HISTORY_RETENTION_PERIOD = 6 MONTHS
)
);

Azure SQL Database and Azure SQL Managed Instance allow you to specify the retention period by using different
time units: DAYS, WEEKS, MONTHS, and YEARS. If HISTORY_RETENTION_PERIOD is omitted, INFINITE retention
is assumed. You can also use the INFINITE keyword explicitly.
In some scenarios, you may want to configure retention after table creation, or to change a previously configured
value. In that case, use the ALTER TABLE statement:

ALTER TABLE dbo.WebsiteUserInfo
SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 9 MONTHS));

IMPORTANT
Setting SYSTEM_VERSIONING to OFF does not preserve retention period value. Setting SYSTEM_VERSIONING to ON
without HISTORY_RETENTION_PERIOD specified explicitly results in the INFINITE retention period.

To review current state of the retention policy, use the following query that joins temporal retention enablement
flag at the database level with retention periods for individual tables:

SELECT DB.is_temporal_history_retention_enabled,
SCHEMA_NAME(T1.schema_id) AS TemporalTableSchema,
T1.name as TemporalTableName, SCHEMA_NAME(T2.schema_id) AS HistoryTableSchema,
T2.name as HistoryTableName,T1.history_retention_period,
T1.history_retention_period_unit_desc
FROM sys.tables T1
OUTER APPLY (select is_temporal_history_retention_enabled from sys.databases
where name = DB_NAME()) AS DB
LEFT JOIN sys.tables T2
ON T1.history_table_id = T2.object_id WHERE T1.temporal_type = 2

How aged rows are deleted


The cleanup process depends on the index layout of the history table. It is important to notice that only history
tables with a clustered index (B-tree or columnstore) can have a finite retention policy configured. A background
task is created to perform aged data cleanup for all temporal tables with a finite retention period. Cleanup logic
for the rowstore (B-tree) clustered index deletes aged rows in smaller chunks (up to 10K), minimizing pressure on
the database log and IO subsystem. Although the cleanup logic utilizes the required B-tree index, the order of
deletions for rows older than the retention period cannot be firmly guaranteed. Hence, do not take any
dependency on the cleanup order in your applications.
The cleanup task for the clustered columnstore removes entire row groups at once (typically containing about 1 million
rows each), which is very efficient, especially when historical data is generated at a high pace.

Excellent data compression and efficient retention cleanup make the clustered columnstore index a perfect choice
for scenarios where your workload rapidly generates a high amount of historical data. That pattern is typical for
intensive transactional processing workloads that use temporal tables for change tracking and auditing, trend
analysis, or IoT data ingestion.

Index considerations
The cleanup task for tables with a rowstore clustered index requires the index to start with the column corresponding
to the end of the SYSTEM_TIME period. If such an index doesn't exist, you cannot configure a finite retention period:
Msg 13765, Level 16, State 1

Setting finite retention period failed on system-versioned temporal table
'temporalstagetestdb.dbo.WebsiteUserInfo' because the history table
'temporalstagetestdb.dbo.WebsiteUserInfoHistory' does not contain required clustered index. Consider creating
a clustered columnstore or B-tree index starting with the column that matches end of SYSTEM_TIME period, on
the history table.
It is important to notice that the default history table created by Azure SQL Database and Azure SQL Managed
Instance already has a clustered index that is compliant with the retention policy. If you try to remove that index on a
table with a finite retention period, the operation fails with the following error:
Msg 13766, Level 16, State 1

Cannot drop the clustered index 'WebsiteUserInfoHistory.IX_WebsiteUserInfoHistory' because it is being used
for automatic cleanup of aged data. Consider setting HISTORY_RETENTION_PERIOD to INFINITE on the
corresponding system-versioned temporal table if you need to drop this index.
Cleanup on the clustered columnstore index works optimally if historical rows are inserted in ascending
order (ordered by the end-of-period column), which is always the case when the history table is populated
exclusively by the SYSTEM_VERSIONING mechanism. If rows in the history table are not ordered by the end-of-period
column (which may be the case if you migrated existing historical data), you should re-create the clustered
columnstore index on top of a B-tree rowstore index that is properly ordered, to achieve optimal performance.
Avoid rebuilding the clustered columnstore index on a history table with a finite retention period, because doing so
may change the ordering in the row groups naturally imposed by the system-versioning operation. If you need to
rebuild the clustered columnstore index on the history table, do so by re-creating it on top of a compliant B-tree
index, preserving the ordering in the rowgroups necessary for regular data cleanup. The same approach should be
taken if you create a temporal table with an existing history table that has a clustered columnstore index without
guaranteed data order:
/*Create B-tree ordered by the end of period column*/
CREATE CLUSTERED INDEX IX_WebsiteUserInfoHistory ON WebsiteUserInfoHistory (ValidTo)
WITH (DROP_EXISTING = ON);
GO
/*Re-create clustered columnstore index*/
CREATE CLUSTERED COLUMNSTORE INDEX IX_WebsiteUserInfoHistory ON WebsiteUserInfoHistory
WITH (DROP_EXISTING = ON);

When finite retention period is configured for the history table with the clustered columnstore index, you cannot
create additional non-clustered B-tree indexes on that table:

CREATE NONCLUSTERED INDEX IX_WebHistNCI ON WebsiteUserInfoHistory ([UserName])

An attempt to execute the above statement fails with the following error:


Msg 13772, Level 16, State 1

Cannot create non-clustered index on a temporal history table 'WebsiteUserInfoHistory' since it has finite
retention period and clustered columnstore index defined.

Querying tables with retention policy


All queries on the temporal table automatically filter out historical rows matching the finite retention policy, to avoid
unpredictable and inconsistent results, since aged rows can be deleted by the cleanup task at any point in time
and in arbitrary order.
The following picture shows the query plan for a simple query:

SELECT * FROM dbo.WebsiteUserInfo FOR SYSTEM_TIME ALL;

The query plan includes additional filter applied to end of period column (ValidTo) in the Clustered Index Scan
operator on the history table (highlighted). This example assumes that one MONTH retention period was set on
WebsiteUserInfo table.
However, if you query the history table directly, you may see rows that are older than the specified retention period, but
without any guarantee of repeatable query results. The following picture shows the query execution plan for a
query on the history table without additional filters applied:
Do not rely on reading the history table beyond the retention period in your business logic, as you may get inconsistent or
unexpected results. We recommend that you use temporal queries with the FOR SYSTEM_TIME clause for analyzing
data in temporal tables.

Point in time restore considerations


When you create a new database by restoring an existing database to a specific point in time, it has temporal
retention disabled at the database level (the is_temporal_history_retention_enabled flag is set to OFF). This
functionality allows you to examine all historical rows upon restore, without worrying that aged rows are
removed before you get to query them. You can use it to inspect historical data beyond the configured retention
period.
Say that a temporal table has a one-MONTH retention period specified. If your database was created in the Premium
service tier, you would be able to create a database copy with the database state up to 35 days back in the past.
That effectively would allow you to analyze historical rows that are up to 65 days old by querying the history
table directly.
If you want to activate temporal retention cleanup, run the following Transact-SQL statement after point in time
restore:
ALTER DATABASE [<myDB>]
SET TEMPORAL_HISTORY_RETENTION ON

Next steps
To learn how to use temporal tables in your applications, check out Getting Started with Temporal Tables.
For detailed information about temporal tables, review Temporal tables.
Manage Azure SQL Database long-term backup
retention
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


With Azure SQL Database, you can set a long-term backup retention (LTR) policy to automatically retain backups
in separate Azure Blob storage containers for up to 10 years. You can then recover a database from these
backups by using the Azure portal, Azure CLI, or PowerShell. Long-term retention policies are also supported for
Azure SQL Managed Instance.

Prerequisites
Portal
Azure CLI
PowerShell

An active Azure subscription.

Create long-term retention policies


Portal
Azure CLI
PowerShell

You can configure SQL Database to retain automated backups for a period longer than the retention period for
your service tier.
1. In the Azure portal, navigate to your server and then select Backups . Select the Retention policies tab
to modify your backup retention settings.

2. On the Retention policies tab, select the database(s) on which you want to set or modify long-term
backup retention policies. Unselected databases will not be affected.
3. In the Configure policies pane, specify your desired retention period for weekly, monthly, or yearly
backups. Choose a retention period of '0' to indicate that no long-term backup retention should be set.

4. Select Apply to apply the chosen retention settings to all selected databases.

IMPORTANT
When you enable a long-term backup retention policy, it may take up to 7 days for the first backup to become visible and
available to restore. For details of the LTR backup cadance, see long-term backup retention.
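
The Azure CLI and PowerShell tabs of this article cover scripted equivalents. As one hedged illustration, the following Az PowerShell sketch sets a policy of 12 weekly, 12 monthly, and 5 yearly backups; the resource group, server, and database names are placeholders.

# ISO 8601 durations: keep weekly backups 12 weeks, monthly 12 months, yearly 5 years,
# and take the yearly backup from week 16 of the year.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy -ResourceGroupName "<resource_group>" `
    -ServerName "<server_name>" -DatabaseName "<database_name>" `
    -WeeklyRetention P12W -MonthlyRetention P12M -YearlyRetention P5Y -WeekOfYear 16
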
View backups and restore from a backup
View the backups that are retained for a specific database with an LTR policy, and restore from those backups.
Portal
Azure CLI
PowerShell

1. In the Azure portal, navigate to your server and then select Backups . To view the available LTR backups
for a specific database, select Manage under the Available LTR backups column. A pane will appear with
a list of the available LTR backups for the selected database.

2. In the Available LTR backups pane that appears, review the available backups. You may select a backup
to restore from or to delete.

3. To restore from an available LTR backup, select the backup from which you want to restore, and then
select Restore .
4. Choose a name for your new database, then select Review + Create to review the details of your
Restore. Select Create to restore your database from the chosen backup.

5. On the toolbar, select the notification icon to view the status of the restore job.

6. When the restore job is completed, open the SQL databases page to view the newly restored database.

NOTE
From here, you can connect to the restored database using SQL Server Management Studio to perform needed tasks,
such as to extract a bit of data from the restored database to copy into the existing database or to delete the existing
database and rename the restored database to the existing database name.
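
The same view-and-restore flow can be scripted. The following Az PowerShell sketch is illustrative only (location, server, database, and target names are placeholders) and restores the most recent LTR backup into a new database:

# List LTR backups for the database and pick the newest one
$backups = Get-AzSqlDatabaseLongTermRetentionBackup -Location "<region>" `
    -ServerName "<server_name>" -DatabaseName "<database_name>"
$latest = $backups | Sort-Object BackupTime -Descending | Select-Object -First 1

# Restore it into a new database on the same logical server
Restore-AzSqlDatabase -FromLongTermRetentionBackup -ResourceId $latest.ResourceId `
    -ResourceGroupName "<resource_group>" -ServerName "<server_name>" `
    -TargetDatabaseName "<database_name>_restored" -ServiceObjectiveName "S0"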

Limitations
When restoring from an LTR backup, the read scale property is disabled. To enable read scale on the restored
database, update the database after it has been created.
If the LTR backup was created while the database was in an elastic pool, you need to specify the target service
level objective when restoring from it.

Next steps
To learn about service-generated automatic backups, see automatic backups
To learn about long-term backup retention, see long-term backup retention
Create Azure AD guest users and set as an Azure
AD admin
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Guest users in Azure Active Directory (Azure AD) are users that have been imported into the current Azure AD
from other Azure Active Directories, or outside of it. For example, guest users can include users from other
Azure Active Directories, or from accounts like @outlook.com, @hotmail.com, @live.com, or @gmail.com.
This article demonstrates how to create an Azure AD guest user and set that user as an Azure AD admin for
Azure SQL Managed Instance or the logical server in Azure used by Azure SQL Database and Azure Synapse
Analytics, without having to add the guest user to a group inside Azure AD.

Feature description
This feature lifts the current limitation that only allows guest users to connect to Azure SQL Database, SQL
Managed Instance, or Azure Synapse Analytics when they're members of a group created in Azure AD. The
group needed to be mapped to a user manually using the CREATE USER (Transact-SQL) statement in a given
database. Once a database user has been created for the Azure AD group containing the guest user, the guest
user can sign into the database using Azure Active Directory with MFA authentication. Guest users can be
created and connect directly to SQL Database, SQL Managed Instance, or Azure Synapse without the
requirement of adding them to an Azure AD group first, and then creating a database user for that Azure AD
group.
As part of this feature, you also have the ability to set the Azure AD guest user directly as an AD admin for the
logical server or for a managed instance. The existing functionality (which allows the guest user to be part of an
Azure AD group that can then be set as the Azure AD admin for the logical server or managed instance) is not
impacted. Guest users in the database that are a part of an Azure AD group are also not impacted by this
change.
For more information about existing support for guest users using Azure AD groups, see Using multi-factor
Azure Active Directory authentication.

Prerequisite
Az.Sql 2.9.0 module or higher is needed when using PowerShell to set a guest user as an Azure AD admin for
the logical server or managed instance.

Create database user for Azure AD guest user


Follow these steps to create a database user using an Azure AD guest user.
Create guest user in SQL Database and Azure Synapse
1. Ensure that the guest user (for example, user1@gmail.com ) is already added into your Azure AD and an
Azure AD admin has been set for the database server. Having an Azure AD admin is required for Azure
Active Directory authentication.
2. Connect to the SQL database as the Azure AD admin or an Azure AD user with sufficient SQL permissions
to create users, and run the below command on the database where the guest user needs to be added:
CREATE USER [user1@gmail.com] FROM EXTERNAL PROVIDER

3. There should now be a database user created for the guest user user1@gmail.com .
4. Run the below command to verify the database user got created successfully:

SELECT * FROM sys.database_principals

5. Disconnect and sign into the database as the guest user user1@gmail.com using SQL Server Management
Studio (SSMS) using the authentication method Azure Active Directory - Universal with MFA . For
more information, see Using multi-factor Azure Active Directory authentication.
Create guest user in SQL Managed Instance

NOTE
SQL Managed Instance supports logins for Azure AD users, as well as Azure AD contained database users. The below
steps show how to create a login and user for an Azure AD guest user in SQL Managed Instance. You can also choose to
create a contained database user in SQL Managed Instance by using the method in the Create guest user in SQL
Database and Azure Synapse section.

1. Ensure that the guest user (for example, user1@gmail.com ) is already added into your Azure AD and an
Azure AD admin has been set for the SQL Managed Instance server. Having an Azure AD admin is
required for Azure Active Directory authentication.
2. Connect to the SQL Managed Instance server as the Azure AD admin or an Azure AD user with sufficient
SQL permissions to create users, and run the following command on the master database to create a
login for the guest user:

CREATE LOGIN [user1@gmail.com] FROM EXTERNAL PROVIDER

3. There should now be a login created for the guest user user1@gmail.com in the master database.
4. Run the below command to verify the login got created successfully:

SELECT * FROM sys.server_principals

5. Run the below command on the database where the guest user needs to be added:

CREATE USER [user1@gmail.com] FROM LOGIN [user1@gmail.com]

6. There should now be a database user created for the guest user user1@gmail.com .
7. Disconnect and sign into the database as the guest user user1@gmail.com using SQL Server Management
Studio (SSMS) using the authentication method Azure Active Directory - Universal with MFA . For
more information, see Using multi-factor Azure Active Directory authentication.

Setting a guest user as an Azure AD admin


Set the Azure AD admin using either the Azure portal, Azure PowerShell, or the Azure CLI.
Azure portal
To set up an Azure AD admin for a logical server or a managed instance using the Azure portal, follow these
steps:
1. Open the Azure portal.
2. Navigate to your SQL server or managed instance Azure Active Directory settings.
3. Select Set Admin .
4. In the Azure AD pop-up prompt, type the guest user, such as guestuser@gmail.com .
5. Select this new user, and then save the operation.
For more information, see Setting Azure AD admin.
Azure PowerShell (SQL Database and Azure Synapse )
To set up an Azure AD guest user for a logical server, follow these steps:
1. Ensure that the guest user (for example, user1@gmail.com ) is already added into your Azure AD.
2. Run the following PowerShell command to add the guest user as the Azure AD admin for your logical
server:
Replace <ResourceGroupName> with your Azure Resource Group name that contains the logical server.
Replace <ServerName> with your logical server name. If your server name is
myserver.database.windows.net , replace <ServerName> with myserver .
Replace <DisplayNameOfGuestUser> with your guest user name.

Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName <ResourceGroupName> -ServerName


<ServerName> -DisplayName <DisplayNameOfGuestUser>

You can also use the Azure CLI command az sql server ad-admin to set the guest user as an Azure AD admin for
your logical server.
Azure PowerShell (SQL Managed Instance )
To set up an Azure AD guest user for a managed instance, follow these steps:
1. Ensure that the guest user (for example, user1@gmail.com ) is already added into your Azure AD.
2. Go to the Azure portal, and go to your Azure Active Directory resource. Under Manage , go to the
Users pane. Select your guest user, and record the Object ID .
3. Run the following PowerShell command to add the guest user as the Azure AD admin for your SQL
Managed Instance:
Replace <ResourceGroupName> with your Azure Resource Group name that contains the SQL Managed
Instance.
Replace <ManagedInstanceName> with your SQL Managed Instance name.
Replace <DisplayNameOfGuestUser> with your guest user name.
Replace <AADObjectIDOfGuestUser> with the Object ID gathered earlier.

Set-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName <ResourceGroupName> -InstanceName "


<ManagedInstanceName>" -DisplayName <DisplayNameOfGuestUser> -ObjectId <AADObjectIDOfGuestUser>

You can also use the Azure CLI command az sql mi ad-admin to set the guest user as an Azure AD admin for
your managed instance.

Next steps
Configure and manage Azure AD authentication with Azure SQL
Using multi-factor Azure Active Directory authentication
CREATE USER (Transact-SQL)
Tutorial: Assign Directory Readers role to an Azure
AD group and manage role assignments
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article guides you through creating a group in Azure Active Directory (Azure AD), and assigning that group
the Directory Readers role. The Directory Readers permissions allow the group owners to add additional
members to the group, such as a managed identity of Azure SQL Database, Azure SQL Managed Instance, and
Azure Synapse Analytics. This bypasses the need for a Global Administrator or Privileged Role Administrator to
assign the Directory Readers role directly for each Azure SQL logical server identity in the tenant.
This tutorial uses the feature introduced in Use Azure AD groups to manage role assignments.
For more information on the benefits of assigning the Directory Readers role to an Azure AD group for Azure
SQL, see Directory Readers role in Azure Active Directory for Azure SQL.

NOTE
With Microsoft Graph support for Azure SQL, the Directory Readers role can be replaced with using lower level
permissions. For more information, see User-assigned managed identity in Azure AD for Azure SQL.

Prerequisites
An Azure AD instance. For more information, see Configure and manage Azure AD authentication with Azure
SQL.
A SQL Database, SQL Managed Instance, or Azure Synapse.

Directory Readers role assignment using the Azure portal


Create a new group and assign owners and role
1. A user with Global Administrator or Privileged Role Administrator permissions is required for this initial
setup.
2. Have the privileged user sign into the Azure portal.
3. Go to the Azure Active Directory resource. Under Manage , go to Groups . Select New group to
create a new group.
4. Select Security as the group type, and fill in the rest of the fields. Make sure that the setting Azure AD
roles can be assigned to the group is switched to Yes . Then assign the Azure AD Directory Readers
role to the group.
5. Assign Azure AD users as owner(s) to the group that was created. A group owner can be a regular AD
user without any Azure AD administrative role assigned. The owner should be a user that is managing
your SQL Database, SQL Managed Instance, or Azure Synapse.
6. Select Create
Checking the group that was created

NOTE
Make sure that the Group Type is Security . Microsoft 365 groups are not supported for Azure SQL.

To check and manage the group that was created, go back to the Groups pane in the Azure portal, and search
for your group name. Additional owners and members can be added under the Owners and Members menu
of Manage setting after selecting your group. You can also review the Assigned roles for the group.
Add Azure SQL managed identity to the group

NOTE
We're using SQL Managed Instance for this example, but similar steps can be applied for SQL Database or Azure Synapse
to achieve the same results.

For subsequent steps, the Global Administrator or Privileged Role Administrator user is no longer needed.
1. Log into the Azure portal as the user managing SQL Managed Instance, and is an owner of the group
created earlier.
2. Find the name of your SQL managed instance resource in the Azure portal.

During the creation of your SQL Managed Instance, an Azure identity was created for your instance. The
created identity has the same name as the prefix of your SQL Managed Instance name. You can find the
service principal for your SQL Managed Instance identity, which is created as an Azure AD application, by
following these steps:
Go to the Azure Active Directory resource. Under the Manage setting, select Enterprise
applications . The Object ID is the identity of the instance.
3. Go to the Azure Active Directory resource. Under Manage , go to Groups . Select the group that you
created. Under the Manage setting of your group, select Members . Select Add members and add
your SQL Managed Instance service principal as a member of the group by searching for the name found
above.

NOTE
It can take a few minutes to propagate the service principal permissions through the Azure system, and allow access to
Microsoft Graph API. You may have to wait a few minutes before you provision an Azure AD admin for SQL Managed
Instance.

Remarks
For SQL Database and Azure Synapse, the server identity can be created during the Azure SQL logical server
creation or after the server was created. For more information on how to create or set the server identity in SQL
Database or Azure Synapse, see Enable service principals to create Azure AD users.
For SQL Managed Instance, the Directory Readers role must be assigned to the managed instance identity before
you can set up an Azure AD admin for the managed instance.
Assigning the Directory Readers role to the server identity isn't required for SQL Database or Azure Synapse
when setting up an Azure AD admin for the logical server. However, to enable Azure AD object creation in
SQL Database or Azure Synapse on behalf of an Azure AD application, the Directory Readers role is required.
If the role isn't assigned to the SQL logical server identity, creating Azure AD users in Azure SQL will fail. For
more information, see Azure Active Directory service principal with Azure SQL.

Directory Readers role assignment using PowerShell


IMPORTANT
A Global Administrator or Privileged Role Administrator will need to run these initial steps. In addition to PowerShell,
Azure AD offers Microsoft Graph API to Create a role-assignable group in Azure AD.

1. Download the Azure AD PowerShell module using the following commands. You may need to run
PowerShell as an administrator.
Install-Module azuread
Import-Module azuread
#To verify that the module is ready to use, use the following command:
Get-Module azuread

2. Connect to your Azure AD tenant.

Connect-AzureAD

3. Create a security group to assign the Directory Readers role.


DirectoryReaderGroup , Directory Reader Group , and DirRead can be changed according to your
preference.

$group = New-AzureADMSGroup -DisplayName "DirectoryReaderGroup" -Description "Directory Reader Group"


-MailEnabled $False -SecurityEnabled $true -MailNickName "DirRead" -IsAssignableToRole $true
$group

4. Assign the Directory Readers role to the group.

# Displays the Directory Readers role information


$roleDefinition = Get-AzureADMSRoleDefinition -Filter "displayName eq 'Directory Readers'"
$roleDefinition

# Assigns the Directory Readers role to the group


$roleAssignment = New-AzureADMSRoleAssignment -ResourceScope '/' -RoleDefinitionId $roleDefinition.Id
-PrincipalId $group.Id
$roleAssignment

5. Assign owners to the group.


Replace <username> with the user you want to own this group. Several owners can be added by
repeating these steps.

$RefObjectID = Get-AzureADUser -ObjectId "<username>"


$RefObjectID

$GrOwner = Add-AzureADGroupOwner -ObjectId $group.ID -RefObjectId $RefObjectID.ObjectID

Check owners of the group using the following command:

Get-AzureADGroupOwner -ObjectId $group.ID

You can also verify owners of the group in the Azure portal. Follow the steps in Checking the group that
was created.
Assigning the service principal as a member of the group
For subsequent steps, the Global Administrator or Privileged Role Administrator user is no longer needed.
1. Using an owner of the group, that also manages the Azure SQL resource, run the following command to
connect to your Azure AD.
Connect-AzureAD

2. Assign the service principal as a member of the group that was created.
Replace <ServerName> with your Azure SQL logical server name, or your managed instance name. For
more information, see the earlier section, Add Azure SQL managed identity to the group.

# Returns the service principal of your Azure SQL resource


$miIdentity = Get-AzureADServicePrincipal -SearchString "<ServerName>"
$miIdentity

# Adds the service principal to the group as a member


Add-AzureADGroupMember -ObjectId $group.ID -RefObjectId $miIdentity.ObjectId

The following command lists the members of the group; seeing the service principal's Object ID in the
output confirms that it has been added:

Get-AzureADGroupMember -ObjectId $group.ID

Next steps
Directory Readers role in Azure Active Directory for Azure SQL
Tutorial: Create Azure AD users using Azure AD applications
Configure and manage Azure AD authentication with Azure SQL
Tutorial: Enable Azure Active Directory only
authentication with Azure SQL
7/12/2022 • 10 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This article guides you through enabling the Azure AD-only authentication feature within Azure SQL Database
and Azure SQL Managed Instance. If you are looking to provision a SQL Database or SQL Managed Instance
with Azure AD-only authentication enabled, see Create server with Azure AD-only authentication enabled in
Azure SQL.
In this tutorial, you learn how to:
Assign role to enable Azure AD-only authentication
Enable Azure AD-only authentication using the Azure portal, Azure CLI, or PowerShell
Check whether Azure AD-only authentication is enabled
Test connecting to Azure SQL
Disable Azure AD-only authentication using the Azure portal, Azure CLI, or PowerShell

Prerequisites
An Azure AD instance. For more information, see Configure and manage Azure AD authentication with Azure
SQL.
A SQL Database or SQL Managed Instance with a database, and logins or users. See Quickstart: Create an
Azure SQL Database single database if you haven't already created an Azure SQL Database, or Quickstart:
Create an Azure SQL Managed Instance.

Assign role to enable Azure AD-only authentication


In order to enable or disable Azure AD-only authentication, selected built-in roles are required for the Azure AD
users executing these operations. In this tutorial, we're going to assign the SQL Security Manager role to the
user.
For more information on how to assign a role to an Azure AD account, see Assign administrator and
non-administrator roles to users with Azure Active Directory
For more information on the required permission to enable or disable Azure AD-only authentication, see the
Permissions section of Azure AD-only authentication article.
1. In our example, we'll assign the SQL Security Manager role to the user
UserSqlSecurityManager@contoso.onmicrosoft.com . Using a privileged user that can assign Azure AD roles,
sign in to the Azure portal.
2. Go to your SQL server resource, and select Access control (IAM) in the menu. Select the Add button
and then Add role assignment in the drop-down menu.
3. In the Add role assignment pane, select the Role SQL Security Manager , and select the user that you
want to have the ability to enable or disable Azure AD-only authentication.

4. Click Save .
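
If you prefer to script the role assignment, a hedged Az PowerShell sketch is shown below; the user, subscription, resource group, and server names are placeholders used for illustration.

# Grant the built-in SQL Security Manager role at the scope of the logical server
New-AzRoleAssignment -SignInName "UserSqlSecurityManager@contoso.onmicrosoft.com" `
    -RoleDefinitionName "SQL Security Manager" `
    -Scope "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Sql/servers/<server_name>"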

Enable Azure AD-only authentication


Portal
The Azure CLI
PowerShell

Enable in SQL Database using Azure portal


To enable Azure AD-only authentication in the Azure portal, see the steps below.
1. Using the user with the SQL Security Manager role, go to the Azure portal.
2. Go to your SQL server resource, and select Azure Active Directory under the Settings menu.

3. If you haven't added an Azure Active Directory admin, you'll need to set one before you can enable
Azure AD-only authentication.
4. Select the Support only Azure Active Directory authentication for this server checkbox.
5. The Enable Azure AD authentication only popup will show. Select Yes to enable the feature and Save
the setting.
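Under the PowerShell tab, the Az.Sql module provides a dedicated cmdlet for the same operation. A minimal sketch, assuming Az 6.1.0 or later and placeholder names:

# Sketch: enable Azure AD-only authentication on a logical server
Enable-AzSqlServerActiveDirectoryOnlyAuthentication -ResourceGroupName "<ResourceGroupName>" -ServerName "<ServerName>"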

Enable in SQL Managed Instance using Azure portal


To enable Azure AD-only authentication in the Azure portal, see the steps below.
1. Using the user with the SQL Security Manager role, go to the Azure portal.
2. Go to your SQL managed instance resource, and select Active Directory admin under the Settings
menu.
3. If you haven't added an Azure Active Directory admin, you'll need to set one before you can enable
Azure AD-only authentication.
4. Select the Support only Azure Active Directory authentication for this managed instance
checkbox.
5. The Enable Azure AD authentication only popup will show. Select Yes to enable the feature and Save
the setting.
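The managed instance equivalent in PowerShell is sketched below, again with placeholder names:

# Sketch: enable Azure AD-only authentication on a managed instance
Enable-AzSqlInstanceActiveDirectoryOnlyAuthentication -ResourceGroupName "<ResourceGroupName>" -InstanceName "<ManagedInstanceName>"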

Check the Azure AD-only authentication status


Check whether Azure AD-only authentication is enabled for your server or instance.

Portal
The Azure CLI
PowerShell

Check status in SQL Database


Go to your SQL server resource in the Azure portal. Select Azure Active Directory under the Settings
menu.
Check status in SQL Managed Instance
Go to your SQL managed instance resource in the Azure portal. Select Active Directory admin under the
Settings menu.
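With PowerShell, the current state can be read back with the corresponding Get cmdlets. A minimal sketch with placeholder names; the output indicates whether Azure AD-only authentication is enabled.

# Sketch: check Azure AD-only authentication status
Get-AzSqlServerActiveDirectoryOnlyAuthentication -ResourceGroupName "<ResourceGroupName>" -ServerName "<ServerName>"
Get-AzSqlInstanceActiveDirectoryOnlyAuthentication -ResourceGroupName "<ResourceGroupName>" -InstanceName "<ManagedInstanceName>"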

Test SQL authentication with connection failure


After enabling Azure AD-only authentication, test with SQL Server Management Studio (SSMS) to connect to
your SQL Database or SQL Managed Instance. Use SQL authentication for the connection.
You should see a login failed message similar to the following output:

Cannot connect to <myserver>.database.windows.net.


Additional information:
Login failed for user 'username'. Reason: Azure Active Directory only authentication is enabled.
Please contact your system administrator. (Microsoft SQL Server, Error: 18456)

Disable Azure AD-only authentication


By disabling the Azure AD-only authentication feature, you allow both SQL authentication and Azure AD
authentication for Azure SQL.

Portal
The Azure CLI
PowerShell

Disable in SQL Database using Azure portal


1. Using the user with the SQL Security Manager role, go to the Azure portal.
2. Go to your SQL server resource, and select Azure Active Directory under the Settings menu.
3. To disable the Azure AD-only authentication feature, uncheck the Support only Azure Active Directory
authentication for this server checkbox and Save the setting.

Disable in SQL Managed Instance using Azure portal


1. Using the user with the SQL Security Manager role, go to the Azure portal.
2. Go to your SQL managed instance resource, and select Active Directory admin under the Settings
menu.
3. To disable the Azure AD-only authentication feature, uncheck the Support only Azure Active Directory
authentication for this managed instance checkbox and Save the setting.
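The equivalent PowerShell cmdlets are sketched below with placeholder names; disabling the feature allows SQL authentication alongside Azure AD authentication again.

# Sketch: disable Azure AD-only authentication
Disable-AzSqlServerActiveDirectoryOnlyAuthentication -ResourceGroupName "<ResourceGroupName>" -ServerName "<ServerName>"
Disable-AzSqlInstanceActiveDirectoryOnlyAuthentication -ResourceGroupName "<ResourceGroupName>" -InstanceName "<ManagedInstanceName>"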

Test connecting to Azure SQL again


After disabling Azure AD-only authentication, test connecting using a SQL authentication login. You should now
be able to connect to your server or instance.

Next steps
Azure AD-only authentication with Azure SQL
Create server with Azure AD-only authentication enabled in Azure SQL
Using Azure Policy to enforce Azure Active Directory only authentication with Azure SQL
Using Azure Policy to enforce Azure Active
Directory only authentication with Azure SQL
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance

NOTE
The Azure AD-only authentication and associated Azure Policy feature discussed in this article is in public preview .

This article guides you through creating an Azure Policy that enforces Azure AD-only authentication when
users create an Azure SQL Managed Instance, or a logical server for Azure SQL Database. To learn more about
Azure AD-only authentication during resource creation, see Create server with Azure AD-only authentication
enabled in Azure SQL.
In this article, you learn how to:
Create an Azure Policy that enforces logical server or managed instance creation with Azure AD-only
authentication enabled
Check Azure Policy compliance

Prerequisite
Have permissions to manage Azure Policy. For more information, see Azure RBAC permissions in Azure
Policy.

Create an Azure Policy


Start off by creating an Azure Policy enforcing SQL Database or Managed Instance provisioning with Azure AD-
only authentication enabled.
1. Go to the Azure portal.
2. Search for the service Policy .
3. Under the Authoring settings, select Definitions .
4. In the Search box, search for Azure Active Directory only authentication.
There are two built-in policies available to enforce Azure AD-only authentication. One is for SQL
Database, and the other is for Managed Instance.
Azure SQL Database should have Azure Active Directory Only Authentication enabled
Azure SQL Managed Instance should have Azure Active Directory Only Authentication enabled
5. Select the policy name for your service. In this example, we'll use Azure SQL Database. Select Azure SQL
Database should have Azure Active Directory Only Authentication enabled.
6. Select Assign in the new menu.

NOTE
The JSON script in the menu shows the built-in policy definition that can be used as a template to build a custom
Azure Policy for SQL Database. The default is set to Audit .

7. In the Basics tab, add a Scope by using the selector (...) on the side of the box.

8. In the Scope pane, select your Subscription from the drop-down menu, and select a Resource Group
for this policy. Once you're done, use the Select button to save the selection.

NOTE
If you do not select a resource group, the policy will apply to the whole subscription.
9. Once you're back on the Basics tab, customize the Assignment name and provide an optional
Description . Make sure the Policy enforcement is Enabled .
10. Go over to the Parameters tab. Unselect the option Only show parameters that require input .
11. Under Effect , select Deny . This setting will prevent a logical server creation without Azure AD-only
authentication enabled.

12. In the Non-compliance messages tab, you can customize the policy message that displays if a
violation of the policy has occurred. The message will let users know what policy was enforced during
server creation.

13. Select Review + create . Review the policy and select the Create button.

NOTE
It may take some time for the newly created policy to be enforced.
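The assignment can also be scripted. The following PowerShell sketch assumes the built-in definition's display name shown above, a resource group scope, and that the built-in definition exposes its effect through a parameter named effect; the assignment name and resource group name are placeholders.

# Sketch: assign the built-in policy with a Deny effect at resource group scope
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -eq 'Azure SQL Database should have Azure Active Directory Only Authentication enabled'
}
$scope = (Get-AzResourceGroup -Name "<ResourceGroupName>").ResourceId

New-AzPolicyAssignment -Name "require-aadonly-sqldb" -DisplayName "Require Azure AD-only authentication" `
    -PolicyDefinition $definition -Scope $scope `
    -PolicyParameterObject @{ effect = 'Deny' }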

Check policy compliance


You can check the Compliance setting under the Policy service to see the compliance state.
Search for the assignment name that you have given earlier to the policy.
Once the logical server is created with Azure AD-only authentication, the policy report will increase the counter
under the Resources by compliance state visual. You'll be able to see which resources are compliant or non-compliant.
If the resource group covered by the policy already contains servers, the report indicates which of those servers
are compliant and which are non-compliant.

NOTE
Updating the compliance report may take some time. Changes related to resource creation or Azure AD-only
authentication settings are not reported immediately.
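Compliance results can also be queried from PowerShell with the Az.PolicyInsights module. A minimal sketch; the assignment name is the placeholder value you chose when assigning the policy.

# Sketch: list compliance records produced by the policy assignment
Get-AzPolicyState -Filter "PolicyAssignmentName eq '<AssignmentName>'" |
    Select-Object ResourceId, ComplianceState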

Provision a server
You can then try to provision a logical server or managed instance in the resource group to which you assigned the
Azure Policy. If Azure AD-only authentication is enabled during server creation, provisioning will succeed. When
Azure AD-only authentication isn't enabled, provisioning will fail.
For more information, see Create server with Azure AD-only authentication enabled in Azure SQL.

Next steps
Overview of Azure Policy for Azure AD-only authentication
Create server with Azure AD-only authentication enabled in Azure SQL
Overview of Azure AD-only authentication
Create server with Azure AD-only authentication
enabled in Azure SQL
7/12/2022 • 18 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This how-to guide outlines the steps to create a logical server for Azure SQL Database or Azure SQL Managed
Instance with Azure AD-only authentication enabled during provisioning. The Azure AD-only authentication
feature prevents users from connecting to the server or managed instance using SQL authentication, and only
allows connection using Azure AD authentication.

Prerequisites
Version 2.26.1 or later is needed when using The Azure CLI. For more information on the installation and the
latest version, see Install the Azure CLI.
Az 6.1.0 module or higher is needed when using PowerShell.
If you're provisioning a managed instance using the Azure CLI, PowerShell, or REST API, a virtual network and
subnet needs to be created before you begin. For more information, see Create a virtual network for Azure
SQL Managed Instance.

Permissions
To provision a logical server or managed instance, you'll need to have the appropriate permissions to create
these resources. Azure users with higher permissions, such as subscription Owners, Contributors, Service
Administrators, and Co-Administrators have the privilege to create a SQL server or managed instance. To create
these resources with the least privileged Azure RBAC role, use the SQL Server Contributor role for SQL
Database and SQL Managed Instance Contributor role for SQL Managed Instance.
The SQL Security Manager Azure RBAC role doesn't have enough permissions to create a server or instance
with Azure AD-only authentication enabled. The SQL Security Manager role will be required to manage the
Azure AD-only authentication feature after server or instance creation.

Provision with Azure AD-only authentication enabled


The following section provides you with examples and scripts on how to create a logical server or managed
instance with an Azure AD admin set for the server or instance, and have Azure AD-only authentication enabled
during server creation. For more information on the feature, see Azure AD-only authentication.
In our examples, we're enabling Azure AD-only authentication during server or managed instance creation, with
a system-assigned server admin and password. This prevents server admin access once Azure AD-only
authentication is enabled, and only allows the Azure AD admin to access the resource. It's optional to add
parameters to the APIs to include your own server admin and password during server creation. However, the
password can't be reset until you disable Azure AD-only authentication. An example of how to use these
optional parameters to specify the server admin login name is presented in the PowerShell tab on this page.
NOTE
To change the existing properties after server or managed instance creation, other existing APIs should be used. For more
information, see Managing Azure AD-only authentication using APIs and Configure and manage Azure AD authentication
with Azure SQL.
If Azure AD-only authentication is set to false, which it is by default, a server admin and password will need to be included
in all APIs during server or managed instance creation.

Azure SQL Database


Portal
The Azure CLI
PowerShell
REST API
ARM Template

1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL databases , leave Resource type set to Single database , and select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name , enter a name for your database.
7. For Server, select Create new, and fill out the new server form with the following values:
Server name: Enter a unique server name. Server names must be globally unique for all servers in
Azure, not just unique within a subscription. Enter a value, and the Azure portal will let you know if it's
available or not.
Location: Select a location from the dropdown list.
Authentication method: Select Use only Azure Active Directory (Azure AD) authentication.
Select Set admin, which brings up a menu to select an Azure AD principal as your logical server
Azure AD administrator. When you're finished, use the Select button to set your admin.
8. Select Next: Networking at the bottom of the page.
9. On the Networking tab, for Connectivity method , select Public endpoint .
10. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
11. Leave Connection policy and Minimum TLS version settings as their default value.
12. Select Next: Security at the bottom of the page. Configure any of the settings for Microsoft Defender
for SQL, Ledger, Identity, and Transparent data encryption for your environment. You can also skip
these settings.

NOTE
Using a user-assigned managed identity (UMI) is not supported with Azure AD-only authentication. Do not set
the server identity in the Identity section as a UMI.

13. Select Review + create at the bottom of the page.


14. On the Review + create page, after reviewing, select Create .
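Under the PowerShell tab, the same provisioning can be expressed with New-AzSqlServer. The following is a minimal sketch, assuming Az 6.1.0 or later; the resource group, location, server name, and Azure AD admin account are placeholders.

# Sketch: create a logical server with an Azure AD admin and Azure AD-only authentication enabled
New-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -Location "<Location>" `
    -ServerName "<ServerName>" -ServerVersion "12.0" `
    -ExternalAdminName "<AzureADAccountOrGroup>" -EnableActiveDirectoryOnlyAuthentication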

Azure SQL Managed Instance


Portal
The Azure CLI
PowerShell
REST API
ARM Template
1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL managed instances , leave Resource type set to Single instance , and select Create .
4. Fill out the mandatory information required on the Basics tab for Project details and Managed
Instance details . This is a minimum set of information required to provision a SQL Managed Instance.

For more information on the configuration options, see Quickstart: Create an Azure SQL Managed
Instance.
5. Under Authentication, select Use only Azure Active Directory (Azure AD) authentication for the
Authentication method.
6. Select Set admin, which brings up a menu to select an Azure AD principal as your managed instance
Azure AD administrator. When you're finished, use the Select button to set your admin.
7. You can leave the rest of the settings default. For more information on the Networking , Security , or
other tabs and settings, follow the guide in the article Quickstart: Create an Azure SQL Managed Instance.
8. Once you're done with configuring your settings, select Review + create to proceed. Select Create to
start provisioning the managed instance.
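A rough PowerShell equivalent is sketched below. It's only a sketch: a managed instance needs an existing virtual network and subnet, the sizing values here are placeholders rather than recommendations, and the Azure AD admin account must be replaced with your own.

# Sketch: create a managed instance with an Azure AD admin and Azure AD-only authentication enabled
New-AzSqlInstance -Name "<ManagedInstanceName>" -ResourceGroupName "<ResourceGroupName>" `
    -Location "<Location>" -SubnetId "<SubnetResourceId>" `
    -Edition "GeneralPurpose" -ComputeGeneration "Gen5" -VCore 8 -StorageSizeInGB 256 `
    -LicenseType "LicenseIncluded" `
    -ExternalAdminName "<AzureADAccountOrGroup>" -EnableActiveDirectoryOnlyAuthentication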
Grant Directory Readers permissions
Once the deployment is complete for your managed instance, you may notice that the SQL Managed Instance
needs Read permissions to access Azure Active Directory. Read permissions can be granted by clicking on the
displayed message in the Azure portal by a person with enough privileges. For more information, see Directory
Readers role in Azure Active Directory for Azure SQL.

Limitations
To reset the server administrator password, Azure AD-only authentication must be disabled.
If Azure AD-only authentication is disabled, you must create a server with a server admin and password
when using all APIs.

Next steps
If you already have a SQL server or managed instance, and just want to enable Azure AD-only authentication,
see Tutorial: Enable Azure Active Directory only authentication with Azure SQL.
For more information on the Azure AD-only authentication feature, see Azure AD-only authentication with
Azure SQL.
If you're looking to enforce server creation with Azure AD-only authentication enabled, see Azure Policy for
Azure Active Directory only authentication with Azure SQL
Tutorial: Create and utilize Azure Active Directory
server logins
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
(dedicated SQL pools only)

NOTE
Azure Active Directory (Azure AD) server principals (logins) are currently in public preview for Azure SQL Database. Azure
SQL Managed Instance can already utilize Azure AD logins.

This article guides you through creating and utilizing Azure Active Directory (Azure AD) principals (logins) in the
virtual master database of Azure SQL.
In this tutorial, you learn how to:
Create an Azure AD login in the virtual master database with the new syntax extension for Azure SQL
Database
Create a user mapped to an Azure AD login in the virtual master database
Grant server roles to an Azure AD user
Disable an Azure AD login

Prerequisites
A SQL Database or SQL Managed Instance with a database. See Quickstart: Create an Azure SQL Database
single database if you haven't already created an Azure SQL Database, or Quickstart: Create an Azure SQL
Managed Instance.
Azure AD authentication set up for SQL Database or Managed Instance. For more information, see Configure
and manage Azure AD authentication with Azure SQL.
This article instructs you on creating an Azure AD login and user within the virtual master database. Only an
Azure AD admin can create a user within the virtual master database, so we recommend you use the Azure
AD admin account when going through this tutorial. An Azure AD principal with the loginmanager role can
create a login, but not a user within the virtual master database.

Create Azure AD login


1. Create an Azure SQL Database login for an Azure AD account. In our example, we'll use bob@contoso.com ,
which exists in our Azure AD domain called contoso . A login can also be created from an Azure AD group
or service principal (application), for example mygroup , an Azure AD group whose members are Azure AD
accounts. For more information, see CREATE LOGIN (Transact-SQL).

NOTE
The first Azure AD login must be created by the Azure Active Directory admin. A SQL login cannot create Azure
AD logins.

2. Using SQL Server Management Studio (SSMS), log into your SQL Database with the Azure AD admin
account set up for the server.
3. Run the following query:

Use master
CREATE LOGIN [bob@contoso.com] FROM EXTERNAL PROVIDER
GO

4. Check the created login in sys.server_principals . Execute the following query:

SELECT name, type_desc, type, is_disabled
FROM sys.server_principals
WHERE type_desc like 'external%'

You would see a similar output to the following:

Name               type_desc        type    is_disabled
bob@contoso.com    EXTERNAL_LOGIN   E       0

5. The login bob@contoso.com has been created in the virtual master database.

Create user from an Azure AD login


1. Now that we've created an Azure AD login, we can create a database-level Azure AD user that is mapped
to the Azure AD login in the virtual master database. We'll continue to use our example, bob@contoso.com
to create a user in the virtual master database, as we want to demonstrate adding the user to special
roles. Only an Azure AD admin or SQL server admin can create users in the virtual master database.
2. We're using the virtual master database, but you can switch to a database of your choice if you want to
create users in other databases. Run the following query.

Use master
CREATE USER [bob@contoso.com] FROM LOGIN [bob@contoso.com]

TIP
Although it is not required to use Azure AD user aliases (for example, bob@contoso.com ), it is a recommended
best practice to use the same alias for Azure AD users and Azure AD logins.

3. Check the created user in sys.database_principals . Execute the following query:

SELECT name, type_desc, type
FROM sys.database_principals
WHERE type_desc like 'external%'

You would see a similar output to the following:

Name               type_desc       type
bob@contoso.com    EXTERNAL_USER   E
NOTE
The existing syntax to create an Azure AD user without an Azure AD login is still supported, and requires the creation of a
contained user inside SQL Database (without login).
For example, CREATE USER [bob@contoso.com] FROM EXTERNAL PROVIDER .

Grant server-level roles to Azure AD logins


You can add logins to the fixed server-level roles, such as the ##MS_DefinitionReader##,
##MS_ServerStateReader##, or ##MS_ServerStateManager## role.

NOTE
The server-level roles mentioned here are not supported for Azure AD groups.

ALTER SERVER ROLE ##MS_DefinitionReader## ADD MEMBER [AzureAD_object];

ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER [AzureAD_object];

ALTER SERVER ROLE ##MS_ServerStateManager## ADD MEMBER [AzureAD_object];

Permissions aren't effective until the user reconnects. Flush the DBCC cache as well:

DBCC FLUSHAUTHCACHE
DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS

To check which Azure AD logins are part of server-level roles, run the following query:

SELECT roles.principal_id AS RolePID, roles.name AS RolePName,
    server_role_members.member_principal_id AS MemberPID, members.name AS MemberPName
FROM sys.server_role_members AS server_role_members
INNER JOIN sys.server_principals AS roles
    ON server_role_members.role_principal_id = roles.principal_id
INNER JOIN sys.server_principals AS members
    ON server_role_members.member_principal_id = members.principal_id;

Grant special roles for Azure AD users


Special roles for SQL Database can be assigned to users in the virtual master database.
In order to grant one of the special database roles to a user, the user must exist in the virtual master database.
To add a user to a role, you can run the following query:

ALTER ROLE [dbmanager] ADD MEMBER [AzureAD_object]

To remove a user from a role, run the following query:


ALTER ROLE [dbmanager] DROP MEMBER [AzureAD_object]

AzureAD_object can be an Azure AD user, group, or service principal in Azure AD.


In our example, we created the user bob@contoso.com . Let's give the user the dbmanager and loginmanager
roles.
1. Run the following query:

ALTER ROLE [dbmanager] ADD MEMBER [bob@contoso.com]
ALTER ROLE [loginmanager] ADD MEMBER [bob@contoso.com]

2. Check the database role assignment by running the following query:

SELECT DP1.name AS DatabaseRoleName,
    isnull(DP2.name, 'No members') AS DatabaseUserName
FROM sys.database_role_members AS DRM
RIGHT OUTER JOIN sys.database_principals AS DP1
    ON DRM.role_principal_id = DP1.principal_id
LEFT OUTER JOIN sys.database_principals AS DP2
    ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.type = 'R' AND DP2.name like 'bob%'

You would see a similar output to the following:

DatabaseRoleName DatabaseUserName
dbmanager bob@contoso.com
loginmanager bob@contoso.com

Optional - Disable a login


The ALTER LOGIN (Transact-SQL) DDL syntax can be used to enable or disable an Azure AD login in Azure SQL
Database.

ALTER LOGIN [bob@contoso.com] DISABLE

For the DISABLE or ENABLE changes to take immediate effect, the authentication cache and the
TokenAndPermUserStore cache must be cleared using the following T-SQL commands:

DBCC FLUSHAUTHCACHE
DBCC FREESYSTEMCACHE('TokenAndPermUserStore') WITH NO_INFOMSGS

Check that the login has been disabled by executing the following query:

SELECT name, type_desc, type
FROM sys.server_principals
WHERE is_disabled = 1

A use case for this would be to allow read-only access on geo-replicas while denying connections on the primary server.

See also
For more information and examples, see:
Azure Active Directory server principals
CREATE LOGIN (Transact-SQL)
CREATE USER (Transact-SQL)
PowerShell and Azure CLI: Enable Transparent Data
Encryption with customer-managed key from Azure
Key Vault
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
This article walks through how to use a key from Azure Key Vault for Transparent Data Encryption (TDE) on
Azure SQL Database or Azure Synapse Analytics. To learn more about the TDE with Azure Key Vault integration -
Bring Your Own Key (BYOK) Support, visit TDE with customer-managed keys in Azure Key Vault.

NOTE
Azure SQL now supports using an RSA key stored in a Managed HSM as TDE Protector. Azure Key Vault Managed HSM is
a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard
cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. Learn more about Managed
HSMs.

NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.

Prerequisites for PowerShell


You must have an Azure subscription and be an administrator on that subscription.
[Recommended but Optional] Have a hardware security module (HSM) or local key store for creating a local
copy of the TDE Protector key material.
You must have Azure PowerShell installed and running.
Create an Azure Key Vault and Key to use for TDE.
Instructions for using a hardware security module (HSM) and Key Vault
The key vault must have the following property to be used for TDE:
soft-delete and purge protection
The key must have the following attributes to be used for TDE:
No expiration date
Not disabled
Able to perform get, wrap key, unwrap key operations
To use a Managed HSM key, follow instructions to create and activate a Managed HSM using Azure CLI

PowerShell
The Azure CLI

For Az module installation instructions, see Install Azure PowerShell. For specific cmdlets, see AzureRM.Sql.
For specifics on Key Vault, see PowerShell instructions from Key Vault and How to use Key Vault soft-delete with
PowerShell.
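If you still need to create the key vault and key called out in the prerequisites, the following is a minimal PowerShell sketch; the names are placeholders, and purge protection is enabled because TDE requires it.

# Sketch: create a key vault with purge protection and an RSA key to use as the TDE protector
New-AzKeyVault -VaultName "<KeyVaultName>" -ResourceGroupName "<ResourceGroupName>" `
    -Location "<Location>" -EnablePurgeProtection
Add-AzKeyVaultKey -VaultName "<KeyVaultName>" -Name "<KeyName>" -Destination "Software"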

IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported, but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.

Assign an Azure Active Directory (Azure AD) identity to your server


If you have an existing server, use the following to add an Azure Active Directory (Azure AD) identity to your
server:

$server = Set-AzSqlServer -ResourceGroupName <SQLDatabaseResourceGroupName> -ServerName <LogicalServerName> `
    -AssignIdentity

If you are creating a server, use the New-AzSqlServer cmdlet with the -AssignIdentity parameter to add an Azure AD
identity during server creation:

$server = New-AzSqlServer -ResourceGroupName <SQLDatabaseResourceGroupName> -Location <RegionName> `
    -ServerName <LogicalServerName> -ServerVersion "12.0" -SqlAdministratorCredentials <PSCredential> `
    -AssignIdentity

Grant Key Vault permissions to your server


Use the Set-AzKeyVaultAccessPolicy cmdlet to grant your server access to the key vault before using a key from
it for TDE.

Set-AzKeyVaultAccessPolicy -VaultName <KeyVaultName> `
    -ObjectId $server.Identity.PrincipalId -PermissionsToKeys get, wrapKey, unwrapKey

For adding permissions to your server on a Managed HSM, add the 'Managed HSM Crypto Service Encryption
User' local RBAC role to the server. This will enable the server to perform get, wrap key, unwrap key operations
on the keys in the Managed HSM. Instructions for provisioning server access on Managed HSM
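A minimal sketch of that assignment with the Az.KeyVault module follows; the HSM name is a placeholder, and the "/keys" scope is an assumption covering all keys in the HSM.

# Sketch: allow the server's identity to use keys in a Managed HSM
New-AzKeyVaultRoleAssignment -HsmName "<ManagedHsmName>" `
    -RoleDefinitionName "Managed HSM Crypto Service Encryption User" `
    -ObjectId $server.Identity.PrincipalId -Scope "/keys"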

Add the Key Vault key to the server and set the TDE Protector
Use the Get-AzKeyVaultKey cmdlet to retrieve the key ID from key vault
Use the Add-AzSqlServerKeyVaultKey cmdlet to add the key from the Key Vault to the server.
Use the Set-AzSqlServerTransparentDataEncryptionProtector cmdlet to set the key as the TDE protector for
all server resources.
Use the Get-AzSqlServerTransparentDataEncryptionProtector cmdlet to confirm that the TDE protector was
configured as intended.

NOTE
For Managed HSM keys, use Az.Sql 2.11.1 version of PowerShell.
NOTE
The combined length for the key vault name and key name cannot exceed 94 characters.

TIP
An example KeyId from Key Vault:
https://contosokeyvault.vault.azure.net/keys/Key1/1a1a2b2b3c3c4d4d5e5e6f6f7g7g8h8h

An example KeyId from Managed HSM:


https://contosoMHSM.managedhsm.azure.net/keys/myrsakey

# add the key from Key Vault to the server
Add-AzSqlServerKeyVaultKey -ResourceGroupName <SQLDatabaseResourceGroupName> -ServerName <LogicalServerName> `
    -KeyId <KeyVaultKeyId>

# set the key as the TDE protector for all resources under the server
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName> -Type AzureKeyVault -KeyId <KeyVaultKeyId>

# confirm the TDE protector was configured as intended
Get-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName>

Turn on TDE
Use the Set-AzSqlDatabaseTransparentDataEncryption cmdlet to turn on TDE.

Set-AzSqlDatabaseTransparentDataEncryption -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName> -DatabaseName <DatabaseName> -State "Enabled"

Now the database or data warehouse has TDE enabled with an encryption key in Key Vault.

Check the encryption state and encryption activity


Use the Get-AzSqlDatabaseTransparentDataEncryption cmdlet to get the encryption state, and the
Get-AzSqlDatabaseTransparentDataEncryptionActivity cmdlet to check the encryption progress for a database or data
warehouse.

# get the encryption state
Get-AzSqlDatabaseTransparentDataEncryption -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName> -DatabaseName <DatabaseName>

# check the encryption progress for a database or data warehouse
Get-AzSqlDatabaseTransparentDataEncryptionActivity -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName> -DatabaseName <DatabaseName>

Useful PowerShell cmdlets


PowerShell
Azure CLI

Use the Set-AzSqlDatabaseTransparentDataEncryption cmdlet to turn off TDE.


Set-AzSqlDatabaseTransparentDataEncryption -ServerName <LogicalServerName> `
    -ResourceGroupName <SQLDatabaseResourceGroupName> -DatabaseName <DatabaseName> -State "Disabled"

Use the Get-AzSqlServerKeyVaultKey cmdlet to return the list of Key Vault keys added to the server.

# KeyId is an optional parameter, to return a specific key version
Get-AzSqlServerKeyVaultKey -ServerName <LogicalServerName> -ResourceGroupName <SQLDatabaseResourceGroupName>

Use the Remove-AzSqlServerKeyVaultKey to remove a Key Vault key from the server.

# the key set as the TDE Protector cannot be removed
Remove-AzSqlServerKeyVaultKey -KeyId <KeyVaultKeyId> -ServerName <LogicalServerName> `
    -ResourceGroupName <SQLDatabaseResourceGroupName>

Troubleshooting
Check the following if an issue occurs:
If the key vault cannot be found, make sure you're in the right subscription.

PowerShell
Azure CLI

Get-AzSubscription -SubscriptionId <SubscriptionId>

If the new key cannot be added to the server, or the new key cannot be updated as the TDE Protector, check
the following:
The key should not have an expiration date
The key must have the get, wrap key, and unwrap key operations enabled.

Next steps
Learn how to rotate the TDE Protector of a server to comply with security requirements: Rotate the
Transparent Data Encryption protector Using PowerShell.
In case of a security risk, learn how to remove a potentially compromised TDE Protector: Remove a
potentially compromised key.
Configure Always Encrypted by using Azure Key
Vault
7/12/2022 • 13 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This article shows you how to secure sensitive data in a database in Azure SQL Database with data encryption
by using the Always Encrypted wizard in SQL Server Management Studio (SSMS). It also includes instructions
that will show you how to store each encryption key in Azure Key Vault.
Always Encrypted is a data encryption technology that helps protect sensitive data at rest on the server, during
movement between client and server, and while the data is in use. Always Encrypted ensures that sensitive data
never appears as plaintext inside the database system. After you configure data encryption, only client
applications or app servers that have access to the keys can access plaintext data. For detailed information, see
Always Encrypted (Database Engine).
After you configure the database to use Always Encrypted, you will create a client application in C# with Visual
Studio to work with the encrypted data.
Follow the steps in this article and learn how to set up Always Encrypted for your database in Azure SQL
Database or SQL Managed Instance. In this article you will learn how to perform the following tasks:
Use the Always Encrypted wizard in SSMS to create Always Encrypted keys.
Create a column master key (CMK).
Create a column encryption key (CEK).
Create a database table and encrypt columns.
Create an application that inserts, selects, and displays data from the encrypted columns.

Prerequisites
An Azure account and subscription. If you don't have one, sign up for a free trial.
A database in Azure SQL Database or Azure SQL Managed Instance.
SQL Server Management Studio version 13.0.700.242 or later.
.NET Framework 4.6 or later (on the client computer).
Visual Studio.
Azure PowerShell or Azure CLI

Enable client application access


You must enable your client application to access your database in SQL Database by setting up an Azure Active
Directory (Azure AD) application and copying the Application ID and key that you will need to authenticate your
application.
To get the Application ID and key, follow the steps in create an Azure Active Directory application and service
principal that can access resources.

Create a key vault to store your keys


Now that your client app is configured and you have your application ID, it's time to create a key vault and
configure its access policy so you and your application can access the vault's secrets (the Always Encrypted
keys). The create, get, list, sign, verify, wrapKey, and unwrapKey permissions are required for creating a new
column master key and for setting up encryption with SQL Server Management Studio.
You can quickly create a key vault by running the following script. For a detailed explanation of these commands
and more information about creating and configuring a key vault, see What is Azure Key Vault?.
PowerShell
Azure CLI

IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by Azure SQL Database, but all future
development is for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December
2020. The arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For
more about their compatibility, see Introducing the new Azure PowerShell Az module.

$subscriptionName = '<subscriptionName>'
$userPrincipalName = '<username@domain.com>'
$applicationId = '<applicationId from AAD application>'
$resourceGroupName = '<resourceGroupName>' # use the same resource group name when creating your SQL Database below
$location = '<datacenterLocation>'
$vaultName = '<vaultName>'

Connect-AzAccount
$subscriptionId = (Get-AzSubscription -SubscriptionName $subscriptionName).Id
Set-AzContext -SubscriptionId $subscriptionId

New-AzResourceGroup -Name $resourceGroupName -Location $location

New-AzKeyVault -VaultName $vaultName -ResourceGroupName $resourceGroupName -Location $location

Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ResourceGroupName $resourceGroupName `
    -PermissionsToKeys create,get,wrapKey,unwrapKey,sign,verify,list -UserPrincipalName $userPrincipalName
Set-AzKeyVaultAccessPolicy -VaultName $vaultName -ResourceGroupName $resourceGroupName `
    -ServicePrincipalName $applicationId -PermissionsToKeys get,wrapKey,unwrapKey,sign,verify,list

Connect with SSMS


Open SQL Server Management Studio (SSMS) and connect to the server or managed instance hosting your database.
1. Open SSMS. (Go to Connect > Database Engine to open the Connect to Server window if it isn't
open.)
2. Enter your server name or instance name and credentials.
If the New Firewall Rule window opens, sign in to Azure and let SSMS create a new firewall rule for you.

Create a table
In this section, you will create a table to hold patient data. It's not initially encrypted--you will configure
encryption in the next section.
1. Expand Databases .
2. Right-click the database and click New Query.
3. Paste the following Transact-SQL (T-SQL) into the new query window and Execute it.

CREATE TABLE [dbo].[Patients](
    [PatientId] [int] IDENTITY(1,1),
    [SSN] [char](11) NOT NULL,
    [FirstName] [nvarchar](50) NULL,
    [LastName] [nvarchar](50) NULL,
    [MiddleName] [nvarchar](50) NULL,
    [StreetAddress] [nvarchar](50) NULL,
    [City] [nvarchar](50) NULL,
    [ZipCode] [char](5) NULL,
    [State] [char](2) NULL,
    [BirthDate] [date] NOT NULL,
    PRIMARY KEY CLUSTERED ([PatientId] ASC) ON [PRIMARY]);
GO

Encrypt columns (configure Always Encrypted)


SSMS provides a wizard that helps you easily configure Always Encrypted by setting up the column master key,
column encryption key, and encrypted columns for you.
1. Expand Databases > Clinic > Tables.
2. Right-click the Patients table and select Encrypt Columns to open the Always Encrypted wizard:
The Always Encrypted wizard includes the following sections: Column Selection, Master Key Configuration,
Validation, and Summary.
Column Selection
Click Next on the Introduction page to open the Column Selection page. On this page, you will select which
columns you want to encrypt, the type of encryption, and what column encryption key (CEK) to use.
Encrypt SSN and BirthDate information for each patient. The SSN column will use deterministic encryption,
which supports equality lookups, joins, and group by. The BirthDate column will use randomized encryption,
which does not support operations.
Set the Encryption Type for the SSN column to Deterministic and the BirthDate column to Randomized.
Click Next.
Master Key Configuration
The Master Key Configuration page is where you set up your CMK and select the key store provider where
the CMK will be stored. Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a
hardware security module (HSM).
This tutorial shows how to store your keys in Azure Key Vault.
1. Select Azure Key Vault .
2. Select the desired key vault from the drop-down list.
3. Click Next .
Validation
You can encrypt the columns now or save a PowerShell script to run later. For this tutorial, select Proceed to
finish now and click Next .
Summary
Verify that the settings are all correct and click Finish to complete the setup for Always Encrypted.
Verify the wizard's actions
After the wizard is finished, your database is set up for Always Encrypted. The wizard performed the following
actions:
Created a column master key and stored it in Azure Key Vault.
Created a column encryption key and stored it in Azure Key Vault.
Configured the selected columns for encryption. The Patients table currently has no data, but any existing
data in the selected columns is now encrypted.
You can verify the creation of the keys in SSMS by expanding Clinic > Security > Always Encrypted Keys.

Create a client application that works with the encrypted data


Now that Always Encrypted is set up, you can build an application that performs inserts and selects on the
encrypted columns.

IMPORTANT
Your application must use SqlParameter objects when passing plaintext data to the server with Always Encrypted columns.
Passing literal values without using SqlParameter objects will result in an exception.

1. Open Visual Studio and create a new C# Console Application (Visual Studio 2015 and earlier) or Console
App (.NET Framework) (Visual Studio 2017 and later). Make sure your project is set to .NET Framework
4.6 or later.
2. Name the project AlwaysEncryptedConsoleAKVApp and click OK.
3. Install the following NuGet packages by going to Tools > NuGet Package Manager > Package Manager
Console .
Run these two lines of code in the Package Manager Console:

Install-Package Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider
Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory

Modify your connection string to enable Always Encrypted


This section explains how to enable Always Encrypted in your database connection string.
To enable Always Encrypted, you need to add the Column Encryption Setting keyword to your connection
string and set it to Enabled.
You can set this directly in the connection string, or you can set it by using SqlConnectionStringBuilder. The
sample application in the next section shows how to use SqlConnectionStringBuilder .
Enable Always Encrypted in the connection string
Add the following keyword to your connection string.
Column Encryption Setting=Enabled

Enable Always Encrypted with SqlConnectionStringBuilder


The following code shows how to enable Always Encrypted by setting
SqlConnectionStringBuilder.ColumnEncryptionSetting to Enabled.

// Instantiate a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder = new SqlConnectionStringBuilder("replace with your connection string");

// Enable Always Encrypted.


connStringBuilder.ColumnEncryptionSetting = SqlConnectionColumnEncryptionSetting.Enabled;

Register the Azure Key Vault provider


The following code shows how to register the Azure Key Vault provider with the ADO.NET driver.

private static ClientCredential _clientCredential;

static void InitializeAzureKeyVaultProvider() {
    _clientCredential = new ClientCredential(applicationId, clientKey);

    SqlColumnEncryptionAzureKeyVaultProvider azureKeyVaultProvider =
        new SqlColumnEncryptionAzureKeyVaultProvider(GetToken);

    Dictionary<string, SqlColumnEncryptionKeyStoreProvider> providers =
        new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>();

    providers.Add(SqlColumnEncryptionAzureKeyVaultProvider.ProviderName, azureKeyVaultProvider);
    SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providers);
}

Always Encrypted sample console application


This sample demonstrates how to:
Modify your connection string to enable Always Encrypted.
Register Azure Key Vault as the application's key store provider.
Insert data into the encrypted columns.
Select a record by filtering for a specific value in an encrypted column.
Replace the contents of Program.cs with the following code. Replace the connection string for the global
connectionString variable in the line that directly precedes the Main method with your valid connection string
from the Azure portal. This is the only change you need to make to this code.
Run the app to see Always Encrypted in action.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data;
using System.Data.SqlClient;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.SqlServer.Management.AlwaysEncrypted.AzureKeyVaultProvider;

namespace AlwaysEncryptedConsoleAKVApp {
class Program {
// Update this line with your Clinic database connection string from the Azure portal.
static string connectionString = @"<connection string from the portal>";
static string applicationId = @"<application ID from your AAD application>";
static string clientKey = "<key from your AAD application>";

static void Main(string[] args) {


InitializeAzureKeyVaultProvider();

Console.WriteLine("Signed in as: " + _clientCredential.ClientId);

Console.WriteLine("Original connection string copied from the Azure portal:");


Console.WriteLine(connectionString);

// Create a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder =
new SqlConnectionStringBuilder(connectionString);

// Enable Always Encrypted for the connection.


// This is the only change specific to Always Encrypted
connStringBuilder.ColumnEncryptionSetting =
SqlConnectionColumnEncryptionSetting.Enabled;

Console.WriteLine(Environment.NewLine + "Updated connection string with Always Encrypted


enabled:");
Console.WriteLine(connStringBuilder.ConnectionString);

// Update the connection string with a password supplied at runtime.


Console.WriteLine(Environment.NewLine + "Enter server password:");
connStringBuilder.Password = Console.ReadLine();

// Assign the updated connection string to our global variable.


connectionString = connStringBuilder.ConnectionString;

// Delete all records to restart this demo app.


ResetPatientsTable();

// Add sample data to the Patients table.


Console.Write(Environment.NewLine + "Adding sample patient data to the database...");

InsertPatient(new Patient() {
SSN = "999-99-0001",
FirstName = "Orlando",
LastName = "Gee",
LastName = "Gee",
BirthDate = DateTime.Parse("01/04/1964")
});
InsertPatient(new Patient() {
SSN = "999-99-0002",
FirstName = "Keith",
LastName = "Harris",
BirthDate = DateTime.Parse("06/20/1977")
});
InsertPatient(new Patient() {
SSN = "999-99-0003",
FirstName = "Donna",
LastName = "Carreras",
BirthDate = DateTime.Parse("02/09/1973")
});
InsertPatient(new Patient() {
SSN = "999-99-0004",
FirstName = "Janet",
LastName = "Gates",
BirthDate = DateTime.Parse("08/31/1985")
});
InsertPatient(new Patient() {
SSN = "999-99-0005",
FirstName = "Lucy",
LastName = "Harrington",
BirthDate = DateTime.Parse("05/06/1993")
});

// Fetch and display all patients.


Console.WriteLine(Environment.NewLine + "All the records currently in the Patients table:");

foreach (Patient patient in SelectAllPatients()) {


Console.WriteLine(patient.FirstName + " " + patient.LastName + "\tSSN: " + patient.SSN +
"\tBirthdate: " + patient.BirthDate);
}

// Get patients by SSN.


Console.WriteLine(Environment.NewLine + "Now lets locate records by searching the encrypted SSN
column.");

string ssn;

// This very simple validation only checks that the user entered 11 characters.
// In production, be sure to check all user input and use the best validation for your specific application.
do {
Console.WriteLine("Please enter a valid SSN (ex. 999-99-0003):");
ssn = Console.ReadLine();
} while (ssn.Length != 11);

// The example allows duplicate SSN entries so we will return all records
// that match the provided value and store the results in selectedPatients.
Patient selectedPatient = SelectPatientBySSN(ssn);

// Check if any records were returned and display our query results.
if (selectedPatient != null) {
Console.WriteLine("Patient found with SSN = " + ssn);
Console.WriteLine(selectedPatient.FirstName + " " + selectedPatient.LastName + "\tSSN: "
+ selectedPatient.SSN + "\tBirthdate: " + selectedPatient.BirthDate);
}
else {
Console.WriteLine("No patients found with SSN = " + ssn);
}

Console.WriteLine("Press Enter to exit...");


Console.ReadLine();
}

private static ClientCredential _clientCredential;


static void InitializeAzureKeyVaultProvider() {
_clientCredential = new ClientCredential(applicationId, clientKey);

SqlColumnEncryptionAzureKeyVaultProvider azureKeyVaultProvider =
new SqlColumnEncryptionAzureKeyVaultProvider(GetToken);

Dictionary<string, SqlColumnEncryptionKeyStoreProvider> providers =


new Dictionary<string, SqlColumnEncryptionKeyStoreProvider>();

providers.Add(SqlColumnEncryptionAzureKeyVaultProvider.ProviderName, azureKeyVaultProvider);
SqlConnection.RegisterColumnEncryptionKeyStoreProviders(providers);
}

public async static Task<string> GetToken(string authority, string resource, string scope) {
var authContext = new AuthenticationContext(authority);
AuthenticationResult result = await authContext.AcquireTokenAsync(resource, _clientCredential);

if (result == null)
throw new InvalidOperationException("Failed to obtain the access token");
return result.AccessToken;
}

static int InsertPatient(Patient newPatient) {


int returnValue = 0;

string sqlCmdText = @"INSERT INTO [dbo].[Patients] ([SSN], [FirstName], [LastName], [BirthDate])


VALUES (@SSN, @FirstName, @LastName, @BirthDate);";

SqlCommand sqlCmd = new SqlCommand(sqlCmdText);

SqlParameter paramSSN = new SqlParameter(@"@SSN", newPatient.SSN);


paramSSN.DbType = DbType.AnsiStringFixedLength;
paramSSN.Direction = ParameterDirection.Input;
paramSSN.Size = 11;

SqlParameter paramFirstName = new SqlParameter(@"@FirstName", newPatient.FirstName);


paramFirstName.DbType = DbType.String;
paramFirstName.Direction = ParameterDirection.Input;

SqlParameter paramLastName = new SqlParameter(@"@LastName", newPatient.LastName);


paramLastName.DbType = DbType.String;
paramLastName.Direction = ParameterDirection.Input;

SqlParameter paramBirthDate = new SqlParameter(@"@BirthDate", newPatient.BirthDate);


paramBirthDate.SqlDbType = SqlDbType.Date;
paramBirthDate.Direction = ParameterDirection.Input;

sqlCmd.Parameters.Add(paramSSN);
sqlCmd.Parameters.Add(paramFirstName);
sqlCmd.Parameters.Add(paramLastName);
sqlCmd.Parameters.Add(paramBirthDate);

using (sqlCmd.Connection = new SqlConnection(connectionString)) {


try {
sqlCmd.Connection.Open();
sqlCmd.ExecuteNonQuery();
}
catch (Exception ex) {
returnValue = 1;
Console.WriteLine("The following error was encountered: ");
Console.WriteLine(ex.Message);
Console.WriteLine(Environment.NewLine + "Press Enter key to exit");
Console.ReadLine();
Environment.Exit(0);
}
}
return returnValue;
}
static List<Patient> SelectAllPatients() {
List<Patient> patients = new List<Patient>();

SqlCommand sqlCmd = new SqlCommand(


"SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients]",
new SqlConnection(connectionString));

using (sqlCmd.Connection = new SqlConnection(connectionString)) {


try {
sqlCmd.Connection.Open();
SqlDataReader reader = sqlCmd.ExecuteReader();

if (reader.HasRows) {
while (reader.Read()) {
patients.Add(new Patient() {
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
});
}
}
}
catch (Exception ex) {
throw;
}
}

return patients;
}

static Patient SelectPatientBySSN(string ssn) {


Patient patient = new Patient();

SqlCommand sqlCmd = new SqlCommand(


"SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients] WHERE [SSN]=@SSN",
new SqlConnection(connectionString));

SqlParameter paramSSN = new SqlParameter(@"@SSN", ssn);


paramSSN.DbType = DbType.AnsiStringFixedLength;
paramSSN.Direction = ParameterDirection.Input;
paramSSN.Size = 11;

sqlCmd.Parameters.Add(paramSSN);

using (sqlCmd.Connection = new SqlConnection(connectionString)) {


try {
sqlCmd.Connection.Open();
SqlDataReader reader = sqlCmd.ExecuteReader();

if (reader.HasRows) {
while (reader.Read()) {
patient = new Patient() {
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
};
}
}
else {
patient = null;
}
}
catch (Exception ex) {
throw;
}
}
return patient;
}

// This method simply deletes all records in the Patients table to reset our demo.
static int ResetPatientsTable() {
int returnValue = 0;

SqlCommand sqlCmd = new SqlCommand("DELETE FROM Patients");


using (sqlCmd.Connection = new SqlConnection(connectionString)) {
try {
sqlCmd.Connection.Open();
sqlCmd.ExecuteNonQuery();

}
catch (Exception ex) {
returnValue = 1;
}
}
return returnValue;
}
}

class Patient {
public string SSN { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public DateTime BirthDate { get; set; }
}
}

Verify that the data is encrypted


You can quickly check that the actual data on the server is encrypted by querying the Patients data with SSMS
(using your current connection where Column Encryption Setting is not yet enabled).
Run the following query on the Clinic database.

SELECT FirstName, LastName, SSN, BirthDate FROM Patients;

You can see that the encrypted columns do not contain any plaintext data.

To use SSMS to access the plaintext data, you first need to ensure that the user has proper permissions to the
Azure Key Vault: get, unwrapKey, and verify. For detailed information, see Create and Store Column Master Keys
(Always Encrypted).
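If the required permissions are missing, they can be granted with the same Set-AzKeyVaultAccessPolicy cmdlet used earlier in this article. A minimal sketch, reusing the $vaultName variable from the key vault script and a placeholder user principal name:

# Sketch: grant an SSMS user the key permissions needed to read Always Encrypted data
Set-AzKeyVaultAccessPolicy -VaultName $vaultName -UserPrincipalName '<username@domain.com>' `
    -PermissionsToKeys get,unwrapKey,verify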
Then add the Column Encryption Setting=enabled parameter during your connection.
1. In SSMS, right-click your server in Object Explorer and choose Disconnect.
2. Click Connect > Database Engine to open the Connect to Server window and click Options.
3. Click Additional Connection Parameters and type Column Encryption Setting=enabled.

4. Run the following query on the Clinic database.

SELECT FirstName, LastName, SSN, BirthDate FROM Patients;

You can now see the plaintext data in the encrypted columns.

Next steps
After your database is configured to use Always Encrypted, you may want to do the following:
Rotate and clean up your keys.
Migrate data that is already encrypted with Always Encrypted.

Related information
Always Encrypted (client development)
Transparent data encryption
SQL Server encryption
Always Encrypted wizard
Always Encrypted blog
Configure Always Encrypted by using the Windows
certificate store
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Database


This article shows you how to secure sensitive data in Azure SQL Database or Azure SQL Managed Instance with
database encryption by using the Always Encrypted wizard in SQL Server Management Studio (SSMS). It also
shows you how to store your encryption keys in the Windows certificate store.
Always Encrypted is a data encryption technology that helps protect sensitive data at rest on the server, during
movement between client and server, and while the data is in use, ensuring that sensitive data never appears as
plaintext inside the database system. After you encrypt data, only client applications or app servers that have
access to the keys can access plaintext data. For detailed information, see Always Encrypted (Database Engine).
After configuring the database to use Always Encrypted, you will create a client application in C# with Visual
Studio to work with the encrypted data.
Follow the steps in this article to learn how to set up Always Encrypted for SQL Database or SQL Managed
Instance. In this article, you will learn how to perform the following tasks:
Use the Always Encrypted wizard in SSMS to create Always Encrypted Keys.
Create a Column Master Key (CMK).
Create a Column Encryption Key (CEK).
Create a database table and encrypt columns.
Create an application that inserts, selects, and displays data from the encrypted columns.

Prerequisites
For this tutorial, you'll need:
An Azure account and subscription. If you don't have one, sign up for a free trial.
A database in Azure SQL Database or Azure SQL Managed Instance.
SQL Server Management Studio version 13.0.700.242 or later.
.NET Framework 4.6 or later (on the client computer).
Visual Studio.

Enable client application access


You must enable your client application to access SQL Database or SQL Managed Instance by setting up an
Azure Active Directory (AAD) application and copying the Application ID and key that you will need to
authenticate your application.
To get the Application ID and key, follow the steps in create an Azure Active Directory application and service
principal that can access resources.

Connect with SSMS


Open SQL Server Management Studio (SSMS) and connect to the server or managed instance hosting your database.
1. Open SSMS. (Click Connect > Database Engine to open the Connect to Server window if it is not
open.)
2. Enter your server name and credentials.

If the New Firewall Rule window opens, sign in to Azure and let SSMS create a new firewall rule for you.

Create a table
In this section, you will create a table to hold patient data. This will be a normal table initially--you will configure
encryption in the next section.
1. Expand Databases .
2. Right-click the Clinic database and click New Query.
3. Paste the following Transact-SQL (T-SQL) into the new query window and Execute it.

CREATE TABLE [dbo].[Patients](
    [PatientId] [int] IDENTITY(1,1),
    [SSN] [char](11) NOT NULL,
    [FirstName] [nvarchar](50) NULL,
    [LastName] [nvarchar](50) NULL,
    [MiddleName] [nvarchar](50) NULL,
    [StreetAddress] [nvarchar](50) NULL,
    [City] [nvarchar](50) NULL,
    [ZipCode] [char](5) NULL,
    [State] [char](2) NULL,
    [BirthDate] [date] NOT NULL,
    PRIMARY KEY CLUSTERED ([PatientId] ASC) ON [PRIMARY]);
GO

Encrypt columns (configure Always Encrypted)


SSMS provides a wizard to easily configure Always Encrypted by setting up the CMK, CEK, and encrypted
columns for you.
1. Expand Databases > Clinic > Tables .
2. Right-click the Patients table and select Encrypt Columns to open the Always Encrypted wizard:
The Always Encrypted wizard includes the following sections: Column Selection, Master Key Configuration
(CMK), Validation, and Summary.
Column Selection
Click Next on the Introduction page to open the Column Selection page. On this page, you will select which
columns you want to encrypt, the type of encryption, and what column encryption key (CEK) to use.
Encrypt the SSN and BirthDate information for each patient. The SSN column will use deterministic encryption,
which supports equality lookups, joins, and group by. The BirthDate column will use randomized encryption,
which does not support these operations.
Set the Encryption Type for the SSN column to Deterministic and the BirthDate column to Randomized.
Click Next.
Master Key Configuration
The Master Key Configuration page is where you set up your CMK and select the key store provider where
the CMK will be stored. Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a
hardware security module (HSM). This tutorial shows how to store your keys in the Windows certificate store.
Verify that Windows certificate store is selected and click Next.
Validation
You can encrypt the columns now or save a PowerShell script to run later. For this tutorial, select Proceed to
finish now and click Next.
Summary
Verify that the settings are all correct and click Finish to complete the setup for Always Encrypted.
Verify the wizard's actions
After the wizard is finished, your database is set up for Always Encrypted. The wizard performed the following
actions:
Created a CMK.
Created a CEK.
Configured the selected columns for encryption. Your Patients table currently has no data, but any existing
data in the selected columns is now encrypted.
You can verify the creation of the keys in SSMS by going to Clinic > Security > Always Encrypted Keys. You
can now see the new keys that the wizard generated for you.
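If you prefer to verify the keys with T-SQL rather than Object Explorer, the following is a minimal sketch (run in the context of the Clinic database) that queries the Always Encrypted catalog views for the keys and the encrypted columns:

-- List the column master key and column encryption key that the wizard created.
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;

SELECT name, column_encryption_key_id
FROM sys.column_encryption_keys;

-- Confirm which columns of the Patients table are encrypted and with which encryption type.
SELECT c.name AS column_name, c.encryption_type_desc, cek.name AS column_encryption_key_name
FROM sys.columns AS c
JOIN sys.column_encryption_keys AS cek
    ON c.column_encryption_key_id = cek.column_encryption_key_id
WHERE c.object_id = OBJECT_ID('dbo.Patients');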

Create a client application that works with the encrypted data


Now that Always Encrypted is set up, you can build an application that performs inserts and selects on the
encrypted columns. To successfully run the sample application, you must run it on the same computer where
you ran the Always Encrypted wizard. To run the application on another computer, you must deploy your Always
Encrypted certificates to the computer running the client app.

IMPORTANT
Your application must use SqlParameter objects when passing plaintext data to the server with Always Encrypted columns.
Passing literal values without using SqlParameter objects will result in an exception.

1. Open Visual Studio and create a new C# console application. Make sure your project is set to .NET
Framework 4.6 or later.
2. Name the project AlwaysEncryptedConsoleApp and click OK.

Modify your connection string to enable Always Encrypted


This section explains how to enable Always Encrypted in your database connection string. You will modify the
console app you just created in the next section, "Always Encrypted sample console application."
To enable Always Encrypted, you need to add the Column Encryption Setting keyword to your connection
string and set it to Enabled.
You can set this directly in the connection string, or you can set it by using a SqlConnectionStringBuilder. The
sample application in the next section shows how to use SqlConnectionStringBuilder .

NOTE
This is the only change required in a client application specific to Always Encrypted. If you have an existing application that
stores its connection string externally (that is, in a config file), you might be able to enable Always Encrypted without
changing any code.

Enable Always Encrypted in the connection string


Add the following keyword to your connection string:
Column Encryption Setting=Enabled

Enable Always Encrypted with a SqlConnectionStringBuilder


The following code shows how to enable Always Encrypted by setting the
SqlConnectionStringBuilder.ColumnEncryptionSetting to Enabled.

// Instantiate a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder =
    new SqlConnectionStringBuilder("replace with your connection string");

// Enable Always Encrypted.
connStringBuilder.ColumnEncryptionSetting =
    SqlConnectionColumnEncryptionSetting.Enabled;

Always Encrypted sample console application


This sample demonstrates how to:
Modify your connection string to enable Always Encrypted.
Insert data into the encrypted columns.
Select a record by filtering for a specific value in an encrypted column.
Replace the contents of Program.cs with the following code. Replace the connection string for the global
connectionString variable in the line directly above the Main method with your valid connection string from the
Azure portal. This is the only change you need to make to this code.
Run the app to see Always Encrypted in action.

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Globalization;

namespace AlwaysEncryptedConsoleApp
{
class Program
{
// Update this line with your Clinic database connection string from the Azure portal.
static string connectionString = @"Data Source = SPE-T640-01.sys-sqlsvr.local; Initial Catalog = Clinic; Integrated Security = true";

static void Main(string[] args)


{
Console.WriteLine("Original connection string copied from the Azure portal:");
Console.WriteLine(connectionString);

// Create a SqlConnectionStringBuilder.
SqlConnectionStringBuilder connStringBuilder =
new SqlConnectionStringBuilder(connectionString);

// Enable Always Encrypted for the connection.


// This is the only change specific to Always Encrypted
connStringBuilder.ColumnEncryptionSetting =
SqlConnectionColumnEncryptionSetting.Enabled;

Console.WriteLine(Environment.NewLine + "Updated connection string with Always Encrypted enabled:");
Console.WriteLine(connStringBuilder.ConnectionString);

// Update the connection string with a password supplied at runtime.


Console.WriteLine(Environment.NewLine + "Enter server password:");
connStringBuilder.Password = Console.ReadLine();

// Assign the updated connection string to our global variable.


connectionString = connStringBuilder.ConnectionString;

// Delete all records to restart this demo app.


ResetPatientsTable();

// Add sample data to the Patients table.


Console.Write(Environment.NewLine + "Adding sample patient data to the database...");

CultureInfo culture = CultureInfo.CreateSpecificCulture("en-US");


InsertPatient(new Patient()
{
SSN = "999-99-0001",
FirstName = "Orlando",
LastName = "Gee",
BirthDate = DateTime.Parse("01/04/1964", culture)
});
InsertPatient(new Patient()
{
SSN = "999-99-0002",
FirstName = "Keith",
LastName = "Harris",
BirthDate = DateTime.Parse("06/20/1977", culture)
});
InsertPatient(new Patient()
{
SSN = "999-99-0003",
FirstName = "Donna",
LastName = "Carreras",
BirthDate = DateTime.Parse("02/09/1973", culture)
});
InsertPatient(new Patient()
{
SSN = "999-99-0004",
FirstName = "Janet",
LastName = "Gates",
BirthDate = DateTime.Parse("08/31/1985", culture)
});
InsertPatient(new Patient()
{
SSN = "999-99-0005",
FirstName = "Lucy",
LastName = "Harrington",
BirthDate = DateTime.Parse("05/06/1993", culture)
});

// Fetch and display all patients.


Console.WriteLine(Environment.NewLine + "All the records currently in the Patients table:");

foreach (Patient patient in SelectAllPatients())


{
Console.WriteLine(patient.FirstName + " " + patient.LastName + "\tSSN: " + patient.SSN +
"\tBirthdate: " + patient.BirthDate);
}

// Get patients by SSN.


Console.WriteLine(Environment.NewLine + "Now let's locate records by searching the encrypted SSN column.");

string ssn;

// This very simple validation only checks that the user entered 11 characters.
// In production, be sure to check all user input and use the best validation for your specific application.
do
{
Console.WriteLine("Please enter a valid SSN (ex. 123-45-6789):");
ssn = Console.ReadLine();
} while (ssn.Length != 11);

// The example allows duplicate SSN entries; this demo returns a single
// matching record and stores it in selectedPatient.
Patient selectedPatient = SelectPatientBySSN(ssn);

// Check if any records were returned and display our query results.
if (selectedPatient != null)
{
Console.WriteLine("Patient found with SSN = " + ssn);
Console.WriteLine(selectedPatient.FirstName + " " + selectedPatient.LastName + "\tSSN: "
+ selectedPatient.SSN + "\tBirthdate: " + selectedPatient.BirthDate);
}
else
{
Console.WriteLine("No patients found with SSN = " + ssn);
}

Console.WriteLine("Press Enter to exit...");


Console.ReadLine();
}

static int InsertPatient(Patient newPatient)


{
int returnValue = 0;

string sqlCmdText = @"INSERT INTO [dbo].[Patients] ([SSN], [FirstName], [LastName], [BirthDate])
    VALUES (@SSN, @FirstName, @LastName, @BirthDate);";

SqlCommand sqlCmd = new SqlCommand(sqlCmdText);

SqlParameter paramSSN = new SqlParameter(@"@SSN", newPatient.SSN);


paramSSN.DbType = DbType.AnsiStringFixedLength;
paramSSN.Direction = ParameterDirection.Input;
paramSSN.Size = 11;

SqlParameter paramFirstName = new SqlParameter(@"@FirstName", newPatient.FirstName);


paramFirstName.DbType = DbType.String;
paramFirstName.Direction = ParameterDirection.Input;

SqlParameter paramLastName = new SqlParameter(@"@LastName", newPatient.LastName);


paramLastName.DbType = DbType.String;
paramLastName.Direction = ParameterDirection.Input;

SqlParameter paramBirthDate = new SqlParameter(@"@BirthDate", newPatient.BirthDate);


paramBirthDate.SqlDbType = SqlDbType.Date;
paramBirthDate.Direction = ParameterDirection.Input;

sqlCmd.Parameters.Add(paramSSN);
sqlCmd.Parameters.Add(paramFirstName);
sqlCmd.Parameters.Add(paramLastName);
sqlCmd.Parameters.Add(paramBirthDate);

using (sqlCmd.Connection = new SqlConnection(connectionString))


{
try
{
sqlCmd.Connection.Open();
sqlCmd.ExecuteNonQuery();
}
catch (Exception ex)
{
returnValue = 1;
Console.WriteLine("The following error was encountered: ");
Console.WriteLine(ex.Message);
Console.WriteLine(Environment.NewLine + "Press Enter key to exit");
Console.ReadLine();
Environment.Exit(0);
}
}
return returnValue;
}

static List<Patient> SelectAllPatients()


{
List<Patient> patients = new List<Patient>();

SqlCommand sqlCmd = new SqlCommand(


"SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients]",
new SqlConnection(connectionString));

using (sqlCmd.Connection = new SqlConnection(connectionString))


{
try
{
sqlCmd.Connection.Open();
SqlDataReader reader = sqlCmd.ExecuteReader();

if (reader.HasRows)
{
while (reader.Read())
{
patients.Add(new Patient()
{
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
});
}
}
}
catch (Exception ex)
{
throw;
}
}

return patients;
}

static Patient SelectPatientBySSN(string ssn)


{
Patient patient = new Patient();

SqlCommand sqlCmd = new SqlCommand(


"SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients] WHERE [SSN]=@SSN",
new SqlConnection(connectionString));

SqlParameter paramSSN = new SqlParameter(@"@SSN", ssn);


paramSSN.DbType = DbType.AnsiStringFixedLength;
paramSSN.Direction = ParameterDirection.Input;
paramSSN.Size = 11;

sqlCmd.Parameters.Add(paramSSN);

using (sqlCmd.Connection = new SqlConnection(connectionString))


{
try
{
sqlCmd.Connection.Open();
SqlDataReader reader = sqlCmd.ExecuteReader();

if (reader.HasRows)
{
while (reader.Read())
{
patient = new Patient()
{
SSN = reader[0].ToString(),
FirstName = reader[1].ToString(),
LastName = reader["LastName"].ToString(),
BirthDate = (DateTime)reader["BirthDate"]
};
}
}
else
{
patient = null;
}
}
catch (Exception ex)
{
throw;
}
}
return patient;
}

// This method simply deletes all records in the Patients table to reset our demo.
static int ResetPatientsTable()
{
int returnValue = 0;

SqlCommand sqlCmd = new SqlCommand("DELETE FROM Patients");


using (sqlCmd.Connection = new SqlConnection(connectionString))
{
try
{
sqlCmd.Connection.Open();
sqlCmd.ExecuteNonQuery();
}
catch (Exception ex)
{
returnValue = 1;
}
}
return returnValue;
}
}

class Patient
{
public string SSN { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public DateTime BirthDate { get; set; }
}
}

Verify that the data is encrypted


You can quickly check that the actual data on the server is encrypted by querying the Patients data with SSMS.
(Use your current connection where the column encryption setting is not yet enabled.)
Run the following query on the Clinic database.

SELECT FirstName, LastName, SSN, BirthDate FROM Patients;

You can see that the encrypted columns do not contain any plaintext data.

To use SSMS to access the plaintext data, you can add the Column Encryption Setting=enabled parameter
to the connection.
1. In SSMS, right-click your server in Object Explorer, and then click Disconnect.
2. Click Connect > Database Engine to open the Connect to Server window, and then click Options.
3. Click Additional Connection Parameters and type Column Encryption Setting=enabled.
4. Run the following query on the Clinic database.

SELECT FirstName, LastName, SSN, BirthDate FROM Patients;

You can now see the plaintext data in the encrypted columns.

NOTE
If you connect with SSMS (or any client) from a different computer, it will not have access to the encryption keys and will
not be able to decrypt the data.

Next steps
After you create a database that uses Always Encrypted, you may want to do the following:
Run this sample from a different computer. It won't have access to the encryption keys, so it will not have
access to the plaintext data and will not run successfully.
Rotate and clean up your keys.
Migrate data that is already encrypted with Always Encrypted.
Deploy Always Encrypted certificates to other client machines (see the "Making Certificates Available to
Applications and Users" section).

Related information
Always Encrypted (client development)
Transparent Data Encryption
SQL Server Encryption
Always Encrypted Wizard
Always Encrypted Blog
Detectable types of query performance bottlenecks
in Azure SQL Database and Azure SQL Managed
Instance
7/12/2022 • 13 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


When trying to resolve a performance bottleneck, start by determining whether the bottleneck is occurring
while the query is in a running state or a waiting state. Different resolutions apply depending upon this
determination. Use the following diagram to help understand the factors that can cause either a running-related
problem or a waiting-related problem. Problems and resolutions relating to each type of problem are discussed
in this article.
You can use Intelligent Insights or SQL Server DMVs to detect these types of performance bottlenecks.

Running-related problems: Running-related problems are generally related to compilation problems
resulting in a suboptimal query plan, or to execution problems related to insufficient or overused resources.
Waiting-related problems: Waiting-related problems are generally related to:
Locks (blocking)
I/O
Contention related to tempdb usage
Memory grant waits

Compilation problems resulting in a suboptimal query plan


A suboptimal plan generated by the SQL Query Optimizer may be the cause of slow query performance. The
SQL Query Optimizer might produce a suboptimal plan because of a missing index, stale statistics, an incorrect
estimate of the number of rows to be processed, or an inaccurate estimate of the required memory. If you know
the query was executed faster in the past or on another instance, compare the actual execution plans to see if
they're different.
Identify any missing indexes using one of these methods:
Use Intelligent Insights.
Review recommendations in the Database Advisor for single and pooled databases in Azure SQL
Database. You may also choose to enable automatic tuning options for tuning indexes for Azure SQL
Database.
Missing indexes in DMVs and query execution plans. This article shows you how to detect and tune
nonclustered indexes using missing index requests.
Try to update statistics or rebuild indexes to get a better plan (see the sketch after this list). Enable automatic plan correction in Azure
SQL Database or Azure SQL Managed Instance to automatically mitigate these problems.
As an advanced troubleshooting step, use Query Store hints to apply query hints using the Query Store,
without making code changes.
This example shows the impact of a suboptimal query plan due to a parameterized query, how to detect
this condition, and how to use a query hint to resolve it.
Try changing the database compatibility level and implementing intelligent query processing. The SQL
Query Optimizer may generate a different query plan depending upon the compatibility level for your
database. Higher compatibility levels provide more intelligent query processing capabilities.
For more information on query processing, see Query Processing Architecture Guide.
To change database compatibility levels and read more about the differences between compatibility
levels, see ALTER DATABASE.
To read more about cardinality estimation, see Cardinality Estimation.
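As a minimal T-SQL sketch of several of these mitigations, assuming a hypothetical Sales.OrderLines table, a Query Store query_id of 42, and a target compatibility level of 150 (replace each with values from your own environment):

-- Refresh statistics on a table referenced by the slow query.
UPDATE STATISTICS Sales.OrderLines;

-- Or rebuild its indexes.
ALTER INDEX ALL ON Sales.OrderLines REBUILD;

-- Apply a query hint through Query Store without changing application code.
EXEC sys.sp_query_store_set_hints 42, N'OPTION (RECOMPILE)';

-- Raise the database compatibility level to enable newer intelligent query processing features.
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;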

Resolving queries with suboptimal query execution plans


The following sections discuss how to resolve queries with suboptimal query execution plans.
Queries that have parameter sensitive plan (PSP) problems
A parameter sensitive plan (PSP) problem happens when the query optimizer generates a query execution plan
that's optimal only for a specific parameter value (or set of values) and the cached plan is then not optimal for
parameter values that are used in consecutive executions. Plans that aren't optimal can then cause query
performance problems and degrade overall workload throughput.
For more information on parameter sniffing and query processing, see the Query-processing architecture guide.
Several workarounds can mitigate PSP problems; a short sketch of the hint-based options follows this list. Each workaround has associated tradeoffs and drawbacks:
Use the RECOMPILE query hint at each query execution. This workaround trades compilation time and
increased CPU for better plan quality. The RECOMPILE option is often not possible for workloads that require a
high throughput.
Use the OPTION (OPTIMIZE FOR…) query hint to override the actual parameter value with a typical
parameter value that produces a plan that's good enough for most parameter value possibilities. This option
requires a good understanding of optimal parameter values and associated plan characteristics.
Use the OPTION (OPTIMIZE FOR UNKNOWN) query hint to override the actual parameter value and instead
use the density vector average. You can also do this by capturing the incoming parameter values in local
variables and then using the local variables within the predicates instead of using the parameters themselves.
For this fix, the average density must be good enough.
Disable parameter sniffing entirely by using the DISABLE_PARAMETER_SNIFFING query hint.
Use the KEEPFIXEDPLAN query hint to prevent recompilations in cache. This workaround assumes that the
good-enough common plan is the one in cache already. You can also disable automatic statistics updates to
reduce the chances that the good plan will be evicted and a new bad plan will be compiled.
Force the plan by explicitly using the USE PLAN query hint by rewriting the query and adding the hint in the
query text. Or set a specific plan by using Query Store or by enabling automatic tuning.
Replace the single procedure with a nested set of procedures that can each be used based on conditional
logic and the associated parameter values.
Create dynamic string execution alternatives to a static procedure definition.
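As a minimal sketch of the hint-based workarounds above, assuming a hypothetical dbo.Orders table queried inside a stored procedure or parameterized batch that has a @CustomerId parameter:

-- Recompile on every execution: best plan quality at the cost of extra CPU per execution.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Optimize for the density vector average rather than the sniffed parameter value.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);

-- Disable parameter sniffing for this statement only.
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (USE HINT ('DISABLE_PARAMETER_SNIFFING'));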
For more information about resolving PSP problems, see these blog posts:
I smell a parameter
Conor vs. dynamic SQL vs. procedures vs. plan quality for parameterized queries
Compile activity caused by improper parameterization
When a query has literals, either the database engine automatically parameterizes the statement or a user
explicitly parameterizes the statement to reduce the number of compilations. A high number of compilations for
a query using the same pattern but different literal values can result in high CPU usage. Similarly, if you only
partially parameterize a query that continues to have literals, the database engine doesn't parameterize the
query further.
Here's an example of a partially parameterized query:

SELECT *
FROM t1 JOIN t2 ON t1.c1 = t2.c1
WHERE t1.c1 = @p1 AND t2.c2 = '961C3970-0E54-4E8E-82B6-5545BE897F8F';

In this example, t1.c1 takes @p1, but t2.c2 continues to take the GUID as a literal. In this case, if you change the
value for c2, the query is treated as a different query, and a new compilation will happen. To reduce
compilations in this example, you would also parameterize the GUID.
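A minimal sketch of the fully parameterized form uses sp_executesql so that both values become parameters. The data types (int for t1.c1, uniqueidentifier for t2.c2) and the sample value 123 are assumptions for illustration:

EXEC sp_executesql
    N'SELECT *
      FROM t1 JOIN t2 ON t1.c1 = t2.c1
      WHERE t1.c1 = @p1 AND t2.c2 = @p2;',
    N'@p1 int, @p2 uniqueidentifier',
    @p1 = 123,
    @p2 = '961C3970-0E54-4E8E-82B6-5545BE897F8F';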
The following query shows the count of queries by query hash to determine whether a query is properly
parameterized:

SELECT TOP 10
q.query_hash
, count (distinct p.query_id ) AS number_of_distinct_query_ids
, min(qt.query_sql_text) AS sampled_query_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE
rsi.start_time >= DATEADD(hour, -2, GETUTCDATE())
AND query_parameterization_type_desc IN ('User', 'None')
GROUP BY q.query_hash
ORDER BY count (distinct p.query_id) DESC;

Factors that affect query plan changes


A query execution plan recompilation might result in a generated query plan that differs from the original
cached plan. An existing original plan might be automatically recompiled for various reasons:
Changes to the schema referenced by the query
Data changes to the tables referenced by the query
Changed query context options
A compiled plan might be ejected from the cache for various reasons, such as:
Instance restarts
Database-scoped configuration changes
Memory pressure
Explicit requests to clear the cache
If you use a RECOMPILE hint, a plan won't be cached.
A recompilation (or fresh compilation after cache eviction) can still result in the generation of a query execution
plan that's identical to the original. When the plan changes from the prior or original plan, these explanations
are likely:
Changed physical design: For example, newly created indexes more effectively cover the requirements
of a query. The new indexes might be used on a new compilation if the query optimizer decides that
using that new index is more optimal than using the data structure that was originally selected for the
first version of the query execution. Any physical changes to the referenced objects might result in a new
plan choice at compile time.
Server resource differences: When a plan in one system differs from the plan in another system,
resource availability, such as the number of available processors, can influence which plan gets
generated. For example, if one system has more processors, a parallel plan might be chosen. For more
information on parallelism in Azure SQL Database, see Configure the max degree of parallelism
(MAXDOP) in Azure SQL Database.
Different statistics: The statistics associated with the referenced objects might have changed or might
be materially different from the original system's statistics. If the statistics change and a recompilation
happens, the query optimizer uses the statistics starting from when they changed. The revised statistics'
data distributions and frequencies might differ from those of the original compilation. These changes are
used to create cardinality estimates. (Cardinality estimates are the number of rows that are expected to
flow through the logical query tree.) Changes to cardinality estimates might lead you to choose different
physical operators and associated orders of operations. Even minor changes to statistics can result in a
changed query execution plan.
Changed database compatibility level or cardinality estimator version: Changes to the database
compatibility level can enable new strategies and features that might result in a different query execution
plan. Beyond the database compatibility level, a disabled or enabled trace flag 4199 or a changed state of
the database-scoped configuration QUERY_OPTIMIZER_HOTFIXES can also influence query execution
plan choices at compile time. Trace flags 9481 (force legacy CE) and 2312 (force default CE) also affect the
plan.

Resource limits issues


Slow query performance that is not related to suboptimal query plans and missing indexes is generally related to
insufficient or overused resources. If the query plan is optimal, the query (and the database) might be hitting the
resource limits for the database, elastic pool, or managed instance. An example might be excess log write
throughput for the service level.
Detecting resource issues using the Azure portal: To see if resource limits are the problem, see SQL
Database resource monitoring. For single databases and elastic pools, see Database Advisor performance
recommendations and Query Performance Insights.
Detecting resource limits using Intelligent Insights
Detecting resource issues using DMVs:
The sys.dm_db_resource_stats DMV returns CPU, I/O, and memory consumption for the database.
One row exists for every 15-second interval, even if there's no activity in the database. Historical data
is maintained for one hour.
The sys.resource_stats DMV returns CPU usage and storage data for Azure SQL Database. The data is
collected and aggregated in five-minute intervals.
Many individual queries that cumulatively consume high CPU
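For example, a minimal sketch that summarizes the last hour of data in sys.dm_db_resource_stats to see how close the database is to its limits:

SELECT
    MAX(end_time) AS last_sample_time,
    AVG(avg_cpu_percent) AS avg_cpu_percent,
    MAX(avg_cpu_percent) AS max_cpu_percent,
    AVG(avg_data_io_percent) AS avg_data_io_percent,
    AVG(avg_log_write_percent) AS avg_log_write_percent,
    MAX(max_worker_percent) AS max_worker_percent
FROM sys.dm_db_resource_stats;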
If you identify the problem as insufficient resources, you can upgrade resources to increase the capacity of your
database to absorb the CPU requirements. For more information, see Scale single database resources in Azure
SQL Database and Scale elastic pool resources in Azure SQL Database. For information about scaling a managed
instance, see Service-tier resource limits.

Performance problems caused by increased workload volume


An increase in application traffic and workload volume can cause increased CPU usage. But you must be careful
to properly diagnose this problem. When you see a high-CPU problem, answer these questions to determine
whether the increase is caused by changes to the workload volume:
Are the queries from the application the cause of the high-CPU problem?
For the top CPU-consuming queries that you can identify:
Were multiple execution plans associated with the same query? If so, why?
For queries with the same execution plan, were the execution times consistent? Did the execution
count increase? If so, the workload increase is likely causing performance problems.
In summary, if the query execution plan didn't execute differently but CPU usage increased along with execution
count, the performance problem is likely related to a workload increase.
It's not always easy to identify a workload volume change that's driving a CPU problem. Consider these factors:
Changed resource usage : For example, consider a scenario where CPU usage increased to 80 percent
for an extended period of time. CPU usage alone doesn't mean the workload volume changed.
Regressions in the query execution plan and changes in data distribution can also contribute to more
resource usage even though the application executes the same workload.
The appearance of a new quer y : An application might drive a new set of queries at different times.
An increase or decrease in the number of requests : This scenario is the most obvious measure of a
workload. The number of queries doesn't always correspond to more resource utilization. However, this
metric is still a significant signal, assuming other factors are unchanged.
Use Intelligent Insights to detect workload increases and plan regressions.
Parallelism: Excessive parallelism can worsen the performance of other concurrent workloads by starving
other queries of CPU and worker thread resources. For more information on parallelism in Azure SQL
Database, see Configure the max degree of parallelism (MAXDOP) in Azure SQL Database.

Waiting-related problems
Once you have eliminated a suboptimal plan and execution-related problems, the remaining performance
problem is generally that the queries are waiting for some resource. Waiting-
related problems might be caused by:
Blocking:
One query might hold the lock on objects in the database while others try to access the same objects. You
can identify blocking queries by using DMVs or Intelligent Insights. For more information, see Understand
and resolve Azure SQL blocking problems.
IO problems
Queries might be waiting for the pages to be written to the data or log files. In this case, check the
INSTANCE_LOG_RATE_GOVERNOR, WRITE_LOG, or PAGEIOLATCH_* wait statistics in the DMV. See using DMVs to
identify IO performance issues.
Tempdb problems
If the workload uses temporary tables or there are tempdb spills in the plans, the queries might have a
problem with tempdb throughput. To investigate further, review identify tempdb issues.
Memory-related problems
If the workload doesn't have enough memory, the page life expectancy might drop, or the queries might
get less memory than they need. In some cases, built-in intelligence in Query Optimizer will fix memory-
related problems. See using DMVs to identify memory grant issues. For more information and sample
queries, see Troubleshoot out of memory errors with Azure SQL Database. If you encounter out of
memory errors, review sys.dm_os_out_of_memory_events.
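A minimal sketch that lists requests currently holding or waiting on memory grants, which can help confirm a memory grant wait problem:

SELECT
    session_id,
    requested_memory_kb,
    granted_memory_kb,
    wait_time_ms,
    queue_id
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;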
Methods to show top wait categories
These methods are commonly used to show the top categories of wait types:
Use Intelligent Insights to identify queries with performance degradation due to increased waits
Use Query Store to find wait statistics for each query over time. In Query Store, wait types are combined into
wait categories. You can find the mapping of wait categories to wait types in sys.query_store_wait_stats.
Use sys.dm_db_wait_stats to return information about all the waits encountered by threads that executed
during a query operation. You can use this aggregated view to diagnose performance problems with Azure
SQL Database and also with specific queries and batches. Queries can be waiting on resources, queue waits,
or external waits.
Use sys.dm_os_waiting_tasks to return information about the queue of tasks that are waiting on some
resource.
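For instance, a minimal sketch of the sys.dm_db_wait_stats approach that returns the wait types with the most accumulated wait time:

SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;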
In high-CPU scenarios, Query Store and wait statistics might not reflect CPU usage if:
High-CPU-consuming queries are still executing.
The high-CPU-consuming queries were running when a failover happened.
DMVs that track Query Store and wait statistics show results for only successfully completed and timed-out
queries. They don't show data for currently executing statements until the statements finish. Use the dynamic
management view sys.dm_exec_requests to track currently executing queries and the associated worker time.
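A minimal sketch that joins sys.dm_exec_requests to the statement text to see what is executing right now, together with its current wait type and CPU time:

SELECT
    r.session_id,
    r.status,
    r.wait_type,
    r.wait_time,
    r.cpu_time,
    r.total_elapsed_time,
    t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;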

TIP
Additional tools:
TigerToolbox waits and latches
TigerToolbox usp_whatsup

Next steps
Configure the max degree of parallelism (MAXDOP) in Azure SQL Database
Understand and resolve Azure SQL Database blocking problems in Azure SQL Database
Diagnose and troubleshoot high CPU on Azure SQL Database
SQL Database monitoring and tuning overview
Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using dynamic
management views
Tune nonclustered indexes with missing index suggestions
Troubleshoot Azure SQL Database and Azure SQL
Managed Instance performance issues with
Intelligent Insights
7/12/2022 • 24 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This page provides information on Azure SQL Database and Azure SQL Managed Instance performance issues
detected through the Intelligent Insights resource log. Metrics and resource logs can be streamed to Azure
Monitor logs, Azure Event Hubs, Azure Storage, or a third-party solution for custom DevOps alerting and
reporting capabilities.

NOTE
For a quick performance troubleshooting guide using Intelligent Insights, see the Recommended troubleshooting flow
flowchart in this document.
Intelligent Insights is a preview feature, not available in the following regions: West Europe, North Europe, West US 1, and
East US 1.

Detectable database performance patterns


Intelligent Insights automatically detects performance issues based on query execution wait times, errors, or
time-outs. Intelligent Insights outputs detected performance patterns to the resource log. Detectable
performance patterns are summarized below, with the description for Azure SQL Database and Azure SQL Managed Instance.

Reaching resource limits
Azure SQL Database: Consumption of available resources (DTUs), database worker threads, or database login sessions available on the monitored subscription has reached its resource limits. This is affecting performance.
Azure SQL Managed Instance: Consumption of CPU resources is reaching its resource limits. This is affecting the database performance.

Workload increase
Azure SQL Database: Workload increase or continuous accumulation of workload on the database was detected. This is affecting performance.
Azure SQL Managed Instance: Workload increase has been detected. This is affecting the database performance.

Memory pressure
Azure SQL Database: Workers that requested memory grants have to wait for memory allocations for statistically significant amounts of time, or an increased accumulation of workers that requested memory grants exists. This is affecting performance.
Azure SQL Managed Instance: Workers that have requested memory grants are waiting for memory allocations for a statistically significant amount of time. This is affecting the database performance.

Locking
Azure SQL Database: Excessive database locking was detected affecting performance.
Azure SQL Managed Instance: Excessive database locking was detected affecting the database performance.

Increased MAXDOP
Azure SQL Database: The maximum degree of parallelism option (MAXDOP) has changed affecting the query execution efficiency. This is affecting performance.
Azure SQL Managed Instance: The maximum degree of parallelism option (MAXDOP) has changed affecting the query execution efficiency. This is affecting performance.

Pagelatch contention
Azure SQL Database: Multiple threads are concurrently attempting to access the same in-memory data buffer pages resulting in increased wait times and causing pagelatch contention. This is affecting performance.
Azure SQL Managed Instance: Multiple threads are concurrently attempting to access the same in-memory data buffer pages resulting in increased wait times and causing pagelatch contention. This is affecting the database performance.

Missing Index
Azure SQL Database: Missing index was detected affecting performance.
Azure SQL Managed Instance: Missing index was detected affecting the database performance.

New Query
Azure SQL Database: New query was detected affecting the overall performance.
Azure SQL Managed Instance: New query was detected affecting the overall database performance.

Increased Wait Statistic
Azure SQL Database: Increased database wait times were detected affecting performance.
Azure SQL Managed Instance: Increased database wait times were detected affecting the database performance.

TempDB Contention
Azure SQL Database: Multiple threads are trying to access the same TempDB resource causing a bottleneck. This is affecting performance.
Azure SQL Managed Instance: Multiple threads are trying to access the same TempDB resource causing a bottleneck. This is affecting the database performance.

Elastic pool DTU shortage
Azure SQL Database: Shortage of available eDTUs in the elastic pool is affecting performance.
Azure SQL Managed Instance: Not available for Azure SQL Managed Instance as it uses the vCore model.

Plan Regression
Azure SQL Database: New plan, or a change in the workload of an existing plan was detected. This is affecting performance.
Azure SQL Managed Instance: New plan, or a change in the workload of an existing plan was detected. This is affecting the database performance.

Database-scoped configuration value change
Azure SQL Database: Configuration change on the database was detected affecting the database performance.
Azure SQL Managed Instance: Configuration change on the database was detected affecting the database performance.

Slow client
Azure SQL Database: Slow application client is unable to consume output from the database fast enough. This is affecting performance.
Azure SQL Managed Instance: Slow application client is unable to consume output from the database fast enough. This is affecting the database performance.

Pricing tier downgrade
Azure SQL Database: Pricing tier downgrade action decreased available resources. This is affecting performance.
Azure SQL Managed Instance: Pricing tier downgrade action decreased available resources. This is affecting the database performance.
TIP
For continuous performance optimization of databases, enable automatic tuning. This built-in intelligence feature
continuously monitors your database, automatically tunes indexes, and applies query execution plan corrections.

The following section describes detectable performance patterns in more detail.

Reaching resource limits


What is happening
This detectable performance pattern combines performance issues that are related to reaching available
resource limits, worker limits, and session limits. After this performance issue is detected, a description field of
the diagnostics log indicates whether the performance issue is related to resource, worker, or session limits.
Resources on Azure SQL Database are typically referred to as DTU or vCore resources, and resources on Azure SQL
Managed Instance are referred to as vCore resources. The pattern of reaching resource limits is recognized
when detected query performance degradation is caused by reaching any of the measured resource limits.
The session limits resource denotes the number of available concurrent logins to the database. This
performance pattern is recognized when applications that are connected to the databases have reached the
number of available concurrent logins to the database. If applications attempt to use more sessions than are
available on a database, the query performance is affected.
Reaching worker limits is a specific case of reaching resource limits because available workers aren't counted in
the DTU or vCore usage. Reaching worker limits on a database can cause the rise of resource-specific wait times,
which results in query performance degradation.
Troubleshooting
The diagnostics log outputs query hashes of queries that affected the performance and resource consumption
percentages. You can use this information as a starting point for optimizing your database workload. In
particular, you can optimize the queries that affect the performance degradation by adding indexes. Or you can
optimize applications with a more even workload distribution. If you're unable to reduce workloads or make
optimizations, consider increasing the pricing tier of your database subscription to increase the amount of
resources available.
If you have reached the available session limits, you can optimize your applications by reducing the number of
logins made to the database. If you're unable to reduce the number of logins from your applications to the
database, consider increasing the pricing tier of your database subscription. Or you can split and move your
database into multiple databases for a more balanced workload distribution.
For more suggestions on resolving session limits, see How to deal with the limits of maximum logins. See
Overview of resource limits on a server for information about limits at the server and subscription levels.

Workload increase
What is happening
This performance pattern identifies issues caused by a workload increase or, in its more severe form, a workload
pile-up.
This detection is made through a combination of several metrics. The basic metric measured is detecting an
increase in workload compared with the past workload baseline. The other form of detection is based on
measuring a large increase in active worker threads that is large enough to affect the query performance.
In its more severe form, the workload might continuously pile up due to the inability of a database to handle the
workload. The result is a continuously growing workload size, which is the workload pile-up condition. Due to
this condition, the time that the workload waits for execution grows. This condition represents one of the most
severe database performance issues. This issue is detected through monitoring the increase in the number of
aborted worker threads.
Troubleshooting
The diagnostics log outputs the number of queries whose execution has increased and the query hash of the
query with the largest contribution to the workload increase. You can use this information as a starting point for
optimizing the workload. The query identified as the largest contributor to the workload increase is especially
useful as your starting point.
You might consider distributing the workloads more evenly to the database. Consider optimizing the query that
is affecting the performance by adding indexes. You also might distribute your workload among multiple
databases. If these solutions aren't possible, consider increasing the pricing tier of your database subscription to
increase the amount of resources available.

Memory pressure
What is happening
This performance pattern indicates degradation in the current database performance caused by memory
pressure, or in its more severe form a memory pile-up condition, compared to the past seven-day performance
baseline.
Memory pressure denotes a performance condition in which there is a large number of worker threads
requesting memory grants. The high volume causes a high memory utilization condition in which the database
is unable to efficiently allocate memory to all workers that request it. One of the most common reasons for this
issue is related to the amount of memory available to the database on one hand. On the other hand, an increase
in workload causes the increase in worker threads and the memory pressure.
The more severe form of memory pressure is the memory pile-up condition. This condition indicates that a
higher number of worker threads are requesting memory grants than there are queries releasing the memory.
This number of worker threads requesting memory grants also might be continuously increasing (piling up)
because the database engine is unable to allocate memory efficiently enough to meet the demand. The memory
pile-up condition represents one of the most severe database performance issues.
Troubleshooting
The diagnostics log outputs the memory object store details with the clerk (that is, worker thread) marked as the
highest reason for high memory usage and relevant time stamps. You can use this information as the basis for
troubleshooting.
You can optimize or remove queries related to the clerks with the highest memory usage. You also can make
sure that you aren't querying data that you don't plan to use. Good practice is to always use a WHERE clause in
your queries. In addition, we recommend that you create nonclustered indexes to seek the data rather than scan
it.
You also can reduce the workload by optimizing or distributing it over multiple databases. Or you can distribute
your workload among multiple databases. If these solutions aren't possible, consider increasing the pricing tier
of your database to increase the amount of memory resources available to the database.
For additional troubleshooting suggestions, see Memory grants meditation: The mysterious SQL Server
memory consumer with many names. For more information on out of memory errors in Azure SQL Database,
see Troubleshoot out of memory errors with Azure SQL Database.

Locking
What is happening
This performance pattern indicates degradation in the current database performance in which excessive
database locking is detected compared to the past seven-day performance baseline.
In modern RDBMS, locking is essential for implementing multithreaded systems in which performance is
maximized by running multiple simultaneous workers and parallel database transactions where possible.
Locking in this context refers to the built-in access mechanism in which only a single transaction can exclusively
access the rows, pages, tables, and files that are required and not compete with another transaction for
resources. When the transaction that locked the resources for use is done with them, the lock on those resources
is released, which allows other transactions to access required resources. For more information on locking, see
Lock in the database engine.
If transactions executed by the SQL engine are waiting for prolonged periods of time to access resources locked
for use, this wait time causes the slowdown of the workload execution performance.
Troubleshooting
The diagnostics log outputs locking details that you can use as the basis for troubleshooting. You can analyze the
reported blocking queries, that is, the queries that introduce the locking performance degradation, and remove
them. In some cases, you might be successful in optimizing the blocking queries.
The simplest and safest way to mitigate the issue is to keep transactions short and to reduce the lock footprint
of the most expensive queries. You can break up a large batch of operations into smaller operations. Good
practice is to reduce the query lock footprint by making the query as efficient as possible. Reduce large scans
because they increase chances of deadlocks and adversely affect overall database performance. For identified
queries that cause locking, you can create new indexes or add columns to the existing index to avoid the table
scans.
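A minimal sketch that surfaces currently blocked sessions, the sessions blocking them, and the statement being blocked, which you can correlate with the diagnostics log details:

SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time,
    r.wait_resource,
    t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;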
For more suggestions, see:
Understand and resolve Azure SQL blocking problems
How to resolve blocking problems that are caused by lock escalation in SQL Server

Increased MAXDOP
What is happening
This detectable performance pattern indicates a condition in which a chosen query execution plan was
parallelized more than it should have been. The query optimizer can enhance the workload performance by
executing queries in parallel to speed up things where possible. In some cases, parallel workers processing a
query spend more time waiting on each other to synchronize and merge results compared to executing the
same query with fewer parallel workers, or even in some cases compared to a single worker thread.
The expert system analyzes the current database performance compared to the baseline period. It determines if
a previously running query is running slower than before because the query execution plan is more parallelized
than it should be.
The MAXDOP server configuration option is used to control how many CPU cores can be used to execute the
same query in parallel.
Troubleshooting
The diagnostics log outputs query hashes related to queries for which the duration of execution increased
because they were parallelized more than they should have been. The log also outputs CXP wait times. This time
represents the time a single organizer/coordinator thread (thread 0) is waiting for all other threads to finish
before merging the results and moving ahead. In addition, the diagnostics log outputs the wait times that the
poor-performing queries were waiting in execution overall. You can use this information as the basis for
troubleshooting.
First, optimize or simplify complex queries. Good practice is to break up long batch jobs into smaller ones. In
addition, ensure that you created indexes to support your queries. You can also manually enforce the maximum
degree of parallelism (MAXDOP) for a query that was flagged as poor performing. To configure this operation
by using T-SQL, see Configure the MAXDOP server configuration option.
Setting the MAXDOP server configuration option to zero (0) as a default value denotes that database can use all
available CPU cores to parallelize threads for executing a single query. Setting MAXDOP to one (1) denotes that
only one core can be used for a single query execution. In practical terms, this means that parallelism is turned
off. On a case-by-case basis, and depending on the cores available to the database and the diagnostics log
information, you can tune the MAXDOP option to the number of cores used for parallel query execution that
might resolve the issue in your case.
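As a minimal sketch, you can cap parallelism for the whole database or for a single flagged query. The MAXDOP value of 4 and the dbo.Orders query are placeholders for illustration:

-- Limit parallelism for all queries in the current database.
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

-- Or limit parallelism for one poorly performing query only.
SELECT CustomerId, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerId
OPTION (MAXDOP 4);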

Pagelatch contention
What is happening
This performance pattern indicates the current database workload performance degradation due to pagelatch
contention compared to the past seven-day workload baseline.
Latches are lightweight synchronization mechanisms used to enable multithreading. They guarantee consistency
of in-memory structures that include indices, data pages, and other internal structures.
There are many types of latches available. For simplicity purposes, buffer latches are used to protect in-memory
pages in the buffer pool. IO latches are used to protect pages not yet loaded into the buffer pool. Whenever data
is written to or read from a page in the buffer pool, a worker thread needs to acquire a buffer latch for the page
first. Whenever a worker thread attempts to access a page that isn't already available in the in-memory buffer
pool, an IO request is made to load the required information from the storage. This sequence of events indicates
a more severe form of performance degradation.
Contention on the page latches occurs when multiple threads concurrently attempt to acquire latches on the
same in-memory structure, which introduces an increased wait time to query execution. In the case of pagelatch
IO contention, when data needs to be accessed from storage, this wait time is even larger. It can affect workload
performance considerably. Pagelatch contention is the most common scenario of threads waiting on each other
and competing for resources on multiple CPU systems.
Troubleshooting
The diagnostics log outputs pagelatch contention details. You can use this information as the basis for
troubleshooting.
Because pagelatches are an internal control mechanism, the database engine automatically determines when to use them.
Application decisions, including schema design, can affect pagelatch behavior because latch behavior is
deterministic.
One method for handling latch contention is to replace a sequential index key with a nonsequential key to
evenly distribute inserts across an index range. Typically, a leading column in the index distributes the workload
proportionally. Another method to consider is table partitioning. Creating a hash partitioning scheme with a
computed column on a partitioned table is a common approach for mitigating excessive latch contention. In the
case of pagelatch IO contention, introducing indexes helps to mitigate this performance issue.
For more information, see Diagnose and resolve latch contention on SQL Server (PDF download).

Missing index
What is happening
This performance pattern indicates the current database workload performance degradation compared to the
past seven-day baseline due to a missing index.
An index is used to speed up the performance of queries. It provides quick access to table data by reducing the
number of dataset pages that need to be visited or scanned.
Specific queries that caused performance degradation are identified through this detection for which creating
indexes would be beneficial to the performance.
Troubleshooting
The diagnostics log outputs query hashes for the queries that were identified to affect the workload
performance. You can build indexes for these queries. You also can optimize or remove these queries if they
aren't required. A good performance practice is to avoid querying data that you don't use.
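A minimal sketch that lists the missing index requests the database engine has recorded for the current database, ordered by a rough estimate of their impact:

SELECT TOP 10
    gs.avg_total_user_cost * gs.avg_user_impact * (gs.user_seeks + gs.user_scans) AS estimated_improvement,
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns
FROM sys.dm_db_missing_index_group_stats AS gs
JOIN sys.dm_db_missing_index_groups AS g
    ON gs.group_handle = g.index_group_handle
JOIN sys.dm_db_missing_index_details AS d
    ON g.index_handle = d.index_handle
ORDER BY estimated_improvement DESC;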

TIP
Did you know that built-in intelligence can automatically manage the best-performing indexes for your databases?
For continuous performance optimization, we recommend that you enable automatic tuning. This unique built-in
intelligence feature continuously monitors your database and automatically tunes and creates indexes for your databases.

New query
What is happening
This performance pattern indicates that a new query is detected that is performing poorly and affecting the
workload performance compared to the seven-day performance baseline.
Writing a good-performing query sometimes can be a challenging task. For more information on writing
queries, see Writing SQL queries. To optimize existing query performance, see Query tuning.
Troubleshooting
The diagnostics log outputs information up to two new most CPU-consuming queries, including their query
hashes. Because the detected query affects the workload performance, you can optimize your query. Good
practice is to retrieve only data you need to use. We also recommend using queries with a WHERE clause. We
also recommend that you simplify complex queries and break them up into smaller queries. Another good
practice is to break down large batch queries into smaller batch queries. Introducing indexes for new queries is
typically a good practice to mitigate this performance issue.
In Azure SQL Database, consider using Query Performance Insight.

Increased wait statistic


What is happening
This detectable performance pattern indicates a workload performance degradation in which poor-performing
queries are identified compared to the past seven-day workload baseline.
In this case, the system can't classify the poor-performing queries under any other standard detectable
performance categories, but it detected the wait statistic responsible for the regression. Therefore, it considers
them as queries with increased wait statistics, where the wait statistic responsible for the regression is also
exposed.
Troubleshooting
The diagnostics log outputs information on increased wait time details and query hashes of the affected queries.
Because the system couldn't successfully identify the root cause for the poor-performing queries, the diagnostics
information is a good starting point for manual troubleshooting. You can optimize the performance of these
queries. A good practice is to fetch only data you need to use and to simplify and break down complex queries
into smaller ones.
For more information on optimizing query performance, see Query tuning.

TempDB contention
What is happening
This detectable performance pattern indicates a database performance condition in which a bottleneck of
threads trying to access tempDB resources exists. (This condition isn't IO-related.) The typical scenario for this
performance issue is hundreds of concurrent queries that all create, use, and then drop small tempDB tables.
The system detected that the number of concurrent queries using the same tempDB tables increased with
sufficient statistical significance to affect database performance compared to the past seven-day performance
baseline.
Troubleshooting
The diagnostics log outputs tempDB contention details. You can use the information as the starting point for
troubleshooting. There are two things you can pursue to alleviate this kind of contention and increase the
throughput of the overall workload: You can stop using the temporary tables. You also can use memory-
optimized tables.
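A minimal sketch of the memory-optimized alternative, replacing a frequently created and dropped temporary table with a memory-optimized table variable. The type name and column are hypothetical, and the database must be on a service tier that supports In-Memory OLTP:

-- One-time setup: create a memory-optimized table type.
CREATE TYPE dbo.SessionOrderIds AS TABLE
(
    OrderId int NOT NULL,
    INDEX ix_OrderId NONCLUSTERED (OrderId)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Per-request usage: table variables of this type live in memory rather than in tempDB.
DECLARE @OrderIds dbo.SessionOrderIds;
INSERT INTO @OrderIds (OrderId) VALUES (1), (2), (3);
SELECT OrderId FROM @OrderIds;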
For more information, see Introduction to memory-optimized tables.

Elastic pool DTU shortage


What is happening
This detectable performance pattern indicates a degradation in the current database workload performance
compared to the past seven-day baseline. It's due to the shortage of available DTUs in the elastic pool of your
subscription.
Azure elastic pool resources are used as a pool of available resources shared between multiple databases for
scaling purposes. When available eDTU resources in your elastic pool aren't sufficiently large to support all the
databases in the pool, an elastic pool DTU shortage performance issue is detected by the system.
Troubleshooting
The diagnostics log outputs information on the elastic pool, lists the top DTU-consuming databases, and
provides a percentage of the pool's DTU used by the top-consuming database.
Because this performance condition is related to multiple databases using the same pool of eDTUs in the elastic
pool, the troubleshooting steps focus on the top DTU-consuming databases. You can reduce the workload on the
top-consuming databases, which includes optimization of the top-consuming queries on those databases. You
also can ensure that you aren't querying data that you don't use. Another approach is to optimize applications
by using the top DTU-consuming databases and redistribute the workload among multiple databases.
If reduction and optimization of the current workload on your top DTU-consuming databases aren't possible,
consider increasing your elastic pool pricing tier. Such an increase raises the number of eDTUs available in the
elastic pool.
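To confirm that the pool itself is the bottleneck, you can review pool-level resource utilization. The following query is a minimal sketch; the pool name is an assumption, and the query runs in the master database of the logical server.

-- Run in the master database of the logical server. 'MyPool' is a placeholder pool name.
SELECT TOP (20)
    end_time,
    elastic_pool_name,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'MyPool'
ORDER BY end_time DESC;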

Plan regression
What is happening
This detectable performance pattern denotes a condition in which the database uses a suboptimal query
execution plan. The suboptimal plan typically causes increased query execution times, which leads to longer wait
times for the current and other queries.
The database engine selects the query execution plan with the lowest estimated cost for a query. As the types of
queries and workloads change, sometimes the existing plans are no longer efficient, or perhaps the database
engine didn't make a good assessment. As a corrective measure, query execution plans can be manually forced.
This detectable performance pattern combines three different cases of plan regression: new plan regression, old
plan regression, and existing plans changed workload. The particular type of plan regression that occurred is
provided in the details property in the diagnostics log.
The new plan regression condition refers to a state in which the database engine starts executing a new query
execution plan that isn't as efficient as the old plan. The old plan regression condition refers to the state when
the database engine switches from using a new, more efficient plan to the old plan, which isn't as efficient as the
new plan. The existing plans changed workload regression refers to the state in which the old and the new plans
continuously alternate, with the balance going more toward the poor-performing plan.
For more information on plan regressions, see What is plan regression in SQL Server?.
Troubleshooting
The diagnostics log outputs the query hashes, good plan ID, bad plan ID, and query IDs. You can use this
information as the basis for troubleshooting.
You can analyze which plan is better performing for your specific queries that you can identify with the query
hashes provided. After you determine which plan works better for your queries, you can manually force it.
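As a rough sketch, plan forcing can be done with the Query Store procedures, using the query ID and good plan ID reported in the diagnostics log (the ID values below are placeholders):

-- Force the known good plan for the regressed query (placeholder IDs).
EXEC sp_query_store_force_plan @query_id = 48, @plan_id = 49;

-- Later, if you no longer want the plan forced:
EXEC sp_query_store_unforce_plan @query_id = 48, @plan_id = 49;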
For more information, see Learn how SQL Server prevents plan regressions.

TIP
Did you know that the built-in intelligence feature can automatically manage the best-performing query execution plans
for your databases?
For continuous performance optimization, we recommend that you enable automatic tuning. This built-in intelligence
feature continuously monitors your database and automatically tunes and creates best-performing query execution plans
for your databases.
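As a minimal sketch, the FORCE_LAST_GOOD_PLAN automatic tuning option can be enabled for a database with T-SQL; the index-management options are typically configured through the Azure portal or inherited from server defaults.

-- Automatically revert to the last known good plan when a plan regression is detected.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);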

Database-scoped configuration value change


What is happening
This detectable performance pattern indicates a condition in which a change in the database-scoped
configuration causes performance regression that is detected compared to the past seven-day database
workload behavior. This pattern denotes that a recent change made to the database-scoped configuration
doesn't seem to be beneficial to your database performance.
Database-scoped configuration changes can be set for each individual database. This configuration is used on a
case-by-case basis to optimize the individual performance of your database. The following options can be
configured for each individual database: MAXDOP, LEGACY_CARDINALITY_ESTIMATION, PARAMETER_SNIFFING,
QUERY_OPTIMIZER_HOTFIXES, and CLEAR PROCEDURE_CACHE.
Troubleshooting
The diagnostics log outputs database-scoped configuration changes that were made recently and that caused
performance degradation compared to the previous seven-day workload behavior. You can revert the
configuration changes to the previous values. You also can tune the values one by one until the desired
performance level is reached. You can copy database-scoped configuration values from a similar database with
satisfactory performance. If you're unable to troubleshoot the performance, revert to the default values and
attempt to fine-tune starting from this baseline.
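For example, the following T-SQL is a minimal sketch of resetting two options and clearing the procedure cache; the values shown are illustrative, so adjust them to match the specific change reported in the diagnostics log.

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;                           -- example value
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = OFF;  -- default is OFF
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;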
For more information on optimizing database-scoped configuration and T-SQL syntax on changing the
configuration, see Alter database-scoped configuration (Transact-SQL).
Slow client
What is happening
This detectable performance pattern indicates a condition in which the client using the database can't consume
the output from the database as fast as the database sends the results. Because the database isn't storing the
results of the executed queries in a buffer, it slows down and waits for the client to consume the transmitted
query outputs before proceeding. This condition also might be related to a network that isn't fast enough to
transmit outputs from the database to the consuming client.
This condition is generated only if a performance regression is detected compared to the past seven-day
database workload behavior. This performance issue is detected only if a statistically significant performance
degradation occurs compared to previous performance behavior.
Troubleshooting
This detectable performance pattern indicates a client-side condition. Troubleshooting is required at the client-
side application or client-side network. The diagnostics log outputs the query hashes and the wait times of the
queries that spent the most time waiting for the client to consume their output within the past two hours. You
can use this information as the basis for troubleshooting.
You can optimize performance of your application for consumption of these queries. You also can consider
possible network latency issues. Because the performance degradation issue was based on change in the last
seven-day performance baseline, you can investigate whether recent application or network condition changes
caused this performance regression event.
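A quick way to gauge how much time the database spends waiting on the client is to check the ASYNC_NETWORK_IO wait type, which accumulates while the engine waits for the client to consume result sets. The following sketch assumes Azure SQL Database (sys.dm_db_wait_stats); for SQL Managed Instance, sys.dm_os_wait_stats exposes the same counters.

-- Cumulative waits since the last reset; a large and growing value suggests a slow consumer.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_db_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';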

Pricing tier downgrade


What is happening
This detectable performance pattern indicates a condition in which the pricing tier of your database subscription
was downgraded. Because of the reduction in resources (DTUs) available to the database, the system detected a
drop in the current database performance compared to the past seven-day baseline.
In addition, there could be a condition in which the pricing tier of your database subscription was downgraded
and then upgraded to a higher tier within a short period of time. Detection of this temporary performance
degradation is output in the details section of the diagnostics log as a pricing tier downgrade and upgrade.
Troubleshooting
If you reduced your pricing tier, and therefore the DTUs available, and you're satisfied with the performance,
there's nothing you need to do. If you reduced your pricing tier and you're unsatisfied with your database
performance, reduce your database workloads or consider increasing the pricing tier to a higher level.

Recommended troubleshooting flow


Follow the flowchart for a recommended approach to troubleshoot performance issues by using Intelligent
Insights.
Access Intelligent Insights through the Azure portal by going to Azure SQL Analytics. Attempt to locate the
incoming performance alert, and select it. Identify what is happening on the detections page. Observe the
provided root cause analysis of the issue, query text, query time trends, and incident evolution. Attempt to
resolve the issue by using the Intelligent Insights recommendation for mitigating the performance issue.
TIP
Select the flowchart to download a PDF version.

Intelligent Insights usually needs one hour of time to perform the root cause analysis of the performance issue.
If you can't locate your issue in Intelligent Insights and it's critical to you, use the Query Store to manually
identify the root cause of the performance issue. (Typically, these issues are less than one hour old.) For more
information, see Monitor performance by using the Query Store.
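As a starting point for that manual analysis, the following query is a rough sketch that lists the top CPU-consuming queries captured by Query Store during the past hour.

-- Query Store reports CPU time in microseconds.
SELECT TOP (10)
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions * rs.avg_cpu_time) AS total_cpu_time,
    SUM(rs.count_executions) AS total_executions
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
    ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE rsi.start_time >= DATEADD(HOUR, -1, SYSUTCDATETIME())
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_time DESC;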

Next steps
Learn Intelligent Insights concepts.
Use the Intelligent Insights performance diagnostics log.
Monitor using Azure SQL Analytics.
Learn to collect and consume log data from your Azure resources.
How to use batching to improve Azure SQL
Database and Azure SQL Managed Instance
application performance
7/12/2022 • 26 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Batching operations to Azure SQL Database and Azure SQL Managed Instance significantly improves the
performance and scalability of your applications. In order to understand the benefits, the first part of this article
covers some sample test results that compare sequential and batched requests to a database in Azure SQL
Database or Azure SQL Managed Instance. The remainder of the article shows the techniques, scenarios, and
considerations to help you to use batching successfully in your Azure applications.

Why is batching important for Azure SQL Database and Azure SQL
Managed Instance?
Batching calls to a remote service is a well-known strategy for increasing performance and scalability. There are
fixed processing costs to any interactions with a remote service, such as serialization, network transfer, and
deserialization. Packaging many separate transactions into a single batch minimizes these costs.
In this article, we want to examine various batching strategies and scenarios. Although these strategies are also
important for on-premises applications that use SQL Server, there are several reasons for highlighting the use
of batching for Azure SQL Database and Azure SQL Managed Instance:
There is potentially greater network latency in accessing Azure SQL Database and Azure SQL Managed
Instance, especially if you are accessing Azure SQL Database or Azure SQL Managed Instance from outside
the same Microsoft Azure datacenter.
The multitenant characteristics of Azure SQL Database and Azure SQL Managed Instance means that the
efficiency of the data access layer correlates to the overall scalability of the database. In response to usage in
excess of predefined quotas, Azure SQL Database and Azure SQL Managed Instance can reduce throughput
or respond with throttling exceptions. Efficiencies, such as batching, enable you to do more work before
reaching these limits.
Batching is also effective for architectures that use multiple databases (sharding). The efficiency of your
interaction with each database unit is still a key factor in your overall scalability.
One of the benefits of using Azure SQL Database or Azure SQL Managed Instance is that you don't have to
manage the servers that host the database. However, this managed infrastructure also means that you have to
think differently about database optimizations. You can no longer look to improve the database hardware or
network infrastructure. Microsoft Azure controls those environments. The main area that you can control is how
your application interacts with Azure SQL Database and Azure SQL Managed Instance. Batching is one of these
optimizations.
The first part of this article examines various batching techniques for .NET applications that use Azure SQL
Database or Azure SQL Managed Instance. The last two sections cover batching guidelines and scenarios.

Batching strategies
Note about timing results in this article
NOTE
Results are not benchmarks but are meant to show relative performance . Timings are based on an average of at least
10 test runs. Operations are inserts into an empty table. These tests were measured pre-V12, and they do not necessarily
correspond to throughput that you might experience in a V12 database using the new DTU service tiers or vCore service
tiers. The relative benefit of the batching technique should be similar.

Transactions
It seems strange to begin a review of batching by discussing transactions. But the use of client-side transactions
has a subtle server-side batching effect that improves performance. And transactions can be added with only a
few lines of code, so they provide a fast way to improve performance of sequential operations.
Consider the following C# code that contains a sequence of insert and update operations on a simple table.

List<string> dbOperations = new List<string>();


dbOperations.Add("update MyTable set mytext = 'updated text' where id = 1");
dbOperations.Add("update MyTable set mytext = 'updated text' where id = 2");
dbOperations.Add("update MyTable set mytext = 'updated text' where id = 3");
dbOperations.Add("insert MyTable values ('new value',1)");
dbOperations.Add("insert MyTable values ('new value',2)");
dbOperations.Add("insert MyTable values ('new value',3)");

The following ADO.NET code sequentially performs these operations.

using (SqlConnection connection = new SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
    connection.Open();

    foreach (string commandString in dbOperations)
    {
        SqlCommand cmd = new SqlCommand(commandString, connection);
        cmd.ExecuteNonQuery();
    }
}

The best way to optimize this code is to implement some form of client-side batching of these calls. But there is
a simple way to increase the performance of this code by simply wrapping the sequence of calls in a transaction.
Here is the same code that uses a transaction.

using (SqlConnection connection = new SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction();

    foreach (string commandString in dbOperations)
    {
        SqlCommand cmd = new SqlCommand(commandString, connection, transaction);
        cmd.ExecuteNonQuery();
    }

    transaction.Commit();
}

Transactions are actually being used in both of these examples. In the first example, each individual call is an
implicit transaction. In the second example, an explicit transaction wraps all of the calls. Per the documentation
for the write-ahead transaction log, log records are flushed to the disk when the transaction commits. So by
including more calls in a transaction, the write to the transaction log can be delayed until the transaction is
committed. In effect, you are enabling batching for the writes to the server's transaction log.
The following table shows some ad hoc testing results. The tests performed the same sequential inserts with
and without transactions. For more perspective, the first set of tests ran remotely from a laptop to the database
in Microsoft Azure. The second set of tests ran from a cloud service and database that both resided within the
same Microsoft Azure datacenter (West US). The following table shows the duration in milliseconds of
sequential inserts with and without transactions.
On-premises to Azure:

OPERATIONS    NO TRANSACTION (MS)    TRANSACTION (MS)
1             130                    402
10            1208                   1226
100           12662                  10395
1000          128852                 102917

Azure to Azure (same datacenter):

OPERATIONS    NO TRANSACTION (MS)    TRANSACTION (MS)
1             21                     26
10            220                    56
100           2145                   341
1000          21479                  2756

NOTE
Results are not benchmarks. See the note about timing results in this article.

Based on the previous test results, wrapping a single operation in a transaction actually decreases performance.
But as you increase the number of operations within a single transaction, the performance improvement
becomes more marked. The performance difference is also more noticeable when all operations occur within
the Microsoft Azure datacenter. The increased latency of using Azure SQL Database or Azure SQL Managed
Instance from outside the Microsoft Azure datacenter overshadows the performance gain of using transactions.
Although the use of transactions can increase performance, continue to observe best practices for transactions
and connections. Keep the transaction as short as possible, and close the database connection after the work
completes. The using statement in the previous example ensures that the connection is closed when the code
block completes.
The previous example demonstrates that you can add a local transaction to any ADO.NET code with two lines.
Transactions offer a quick way to improve the performance of code that makes sequential insert, update, and
delete operations. However, for the fastest performance, consider changing the code further to take advantage
of client-side batching, such as table-valued parameters.
For more information about transactions in ADO.NET, see Local Transactions in ADO.NET.
Table-valued parameters
Table-valued parameters support user-defined table types as parameters in Transact-SQL statements, stored
procedures, and functions. This client-side batching technique allows you to send multiple rows of data within
the table-valued parameter. To use table-valued parameters, first define a table type. The following Transact-SQL
statement creates a table type named MyTableType .

CREATE TYPE MyTableType AS TABLE
( mytext TEXT,
  num INT );

In code, you create a DataTable with the exact same names and types of the table type. Pass this DataTable in a
parameter in a text query or stored procedure call. The following example shows this technique:

using (SqlConnection connection = new SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
    connection.Open();

    DataTable table = new DataTable();
    // Add columns and rows. The following is a simple example.
    table.Columns.Add("mytext", typeof(string));
    table.Columns.Add("num", typeof(int));
    for (var i = 0; i < 10; i++)
    {
        table.Rows.Add(DateTime.Now.ToString(), DateTime.Now.Millisecond);
    }

    SqlCommand cmd = new SqlCommand(
        "INSERT INTO MyTable(mytext, num) SELECT mytext, num FROM @TestTvp",
        connection);

    cmd.Parameters.Add(
        new SqlParameter()
        {
            ParameterName = "@TestTvp",
            SqlDbType = SqlDbType.Structured,
            TypeName = "MyTableType",
            Value = table,
        });

    cmd.ExecuteNonQuery();
}

In the previous example, the SqlCommand object inserts rows from a table-valued parameter, @TestTvp . The
previously created DataTable object is assigned to this parameter with the SqlCommand.Parameters.Add
method. Batching the inserts in one call significantly increases the performance over sequential inserts.
To improve the previous example further, use a stored procedure instead of a text-based command. The
following Transact-SQL command creates a stored procedure that takes the MyTableType table-valued
parameter.

CREATE PROCEDURE [dbo].[sp_InsertRows]
    @TestTvp as MyTableType READONLY
AS
BEGIN
    INSERT INTO MyTable(mytext, num)
    SELECT mytext, num FROM @TestTvp
END
GO
Then change the SqlCommand object declaration in the previous code example to the following.

SqlCommand cmd = new SqlCommand("sp_InsertRows", connection);
cmd.CommandType = CommandType.StoredProcedure;

In most cases, table-valued parameters have equivalent or better performance than other batching techniques.
Table-valued parameters are often preferable, because they are more flexible than other options. For example,
other techniques, such as SQL bulk copy, only permit the insertion of new rows. But with table-valued
parameters, you can use logic in the stored procedure to determine which rows are updates and which are
inserts. The table type can also be modified to contain an "Operation" column that indicates whether the
specified row should be inserted, updated, or deleted.
The following table shows ad hoc test results for the use of table-valued parameters in milliseconds.

OPERATIONS    ON-PREMISES TO AZURE (MS)    AZURE SAME DATACENTER (MS)
1             124                          32
10            131                          25
100           338                          51
1000          2615                         382
10000         23830                        3586

NOTE
Results are not benchmarks. See the note about timing results in this article.

The performance gain from batching is immediately apparent. In the previous sequential test, 1000 operations
took 129 seconds outside the datacenter and 21 seconds from within the datacenter. But with table-valued
parameters, 1000 operations take only 2.6 seconds outside the datacenter and 0.4 seconds within the
datacenter.
For more information on table-valued parameters, see Table-Valued Parameters.
SQL bulk copy
SQL bulk copy is another way to insert large amounts of data into a target database. .NET applications can use
the SqlBulkCopy class to perform bulk insert operations. SqlBulkCopy is similar in function to the command-
line tool, Bcp.exe , or the Transact-SQL statement, BULK INSERT . The following code example shows how to
bulk copy the rows in the source DataTable , table, to the destination table, MyTable.
using (SqlConnection connection = new SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
    connection.Open();

    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "MyTable";
        bulkCopy.ColumnMappings.Add("mytext", "mytext");
        bulkCopy.ColumnMappings.Add("num", "num");
        bulkCopy.WriteToServer(table);
    }
}

There are some cases where bulk copy is preferred over table-valued parameters. See the comparison table of
Table-Valued parameters versus BULK INSERT operations in the article Table-Valued Parameters.
The following ad hoc test results show the performance of batching with SqlBulkCopy in milliseconds.

OPERATIONS    ON-PREMISES TO AZURE (MS)    AZURE SAME DATACENTER (MS)
1             433                          57
10            441                          32
100           636                          53
1000          2535                         341
10000         21605                        2737

NOTE
Results are not benchmarks. See the note about timing results in this article.

For smaller batch sizes, the use of table-valued parameters outperformed the SqlBulkCopy class. However,
SqlBulkCopy performed 12-31% faster than table-valued parameters for the tests of 1,000 and 10,000 rows.
Like table-valued parameters, SqlBulkCopy is a good option for batched inserts, especially when compared to
the performance of non-batched operations.
For more information on bulk copy in ADO.NET, see Bulk Copy Operations.
Multiple-row parameterized INSERT statements
One alternative for small batches is to construct a large parameterized INSERT statement that inserts multiple
rows. The following code example demonstrates this technique.
using (SqlConnection connection = new SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
    connection.Open();

    string insertCommand = "INSERT INTO [MyTable] ( mytext, num ) " +
        "VALUES (@p1, @p2), (@p3, @p4), (@p5, @p6), (@p7, @p8), (@p9, @p10)";

    SqlCommand cmd = new SqlCommand(insertCommand, connection);

    for (int i = 1; i <= 10; i += 2)
    {
        cmd.Parameters.Add(new SqlParameter("@p" + i.ToString(), "test"));
        cmd.Parameters.Add(new SqlParameter("@p" + (i+1).ToString(), i));
    }

    cmd.ExecuteNonQuery();
}

This example is meant to show the basic concept. A more realistic scenario would loop through the required
entities to construct the query string and the command parameters simultaneously. You are limited to a total of
2100 query parameters, so this limits the total number of rows that can be processed in this manner.
The following ad hoc test results show the performance of this type of insert statement in milliseconds.

OPERATIONS    TABLE-VALUED PARAMETERS (MS)    SINGLE-STATEMENT INSERT (MS)
1             32                              20
10            30                              25
100           33                              51

NOTE
Results are not benchmarks. See the note about timing results in this article.

This approach can be slightly faster for batches that are less than 100 rows. Although the improvement is small,
this technique is another option that might work well in your specific application scenario.
DataAdapter
The DataAdapter class allows you to modify a DataSet object and then submit the changes as INSERT,
UPDATE, and DELETE operations. If you are using the DataAdapter in this manner, it is important to note that
separate calls are made for each distinct operation. To improve performance, set the UpdateBatchSize
property to the number of operations that should be batched at a time. For more information, see Performing
Batch Operations Using DataAdapters.
Entity Framework
Entity Framework Core supports batching: when SaveChanges is called, the generated insert, update, and delete statements are grouped into batches rather than sent to the database one at a time.
XML
For completeness, we feel that it is important to talk about XML as a batching strategy. However, the use of XML
has no advantages over other methods and several disadvantages. The approach is similar to table-valued
parameters, but an XML file or string is passed to a stored procedure instead of a user-defined table. The stored
procedure then parses the XML.
There are several disadvantages to this approach:
Working with XML can be cumbersome and error prone.
Parsing the XML on the database can be CPU-intensive.
In most cases, this method is slower than table-valued parameters.
For these reasons, the use of XML for batch queries is not recommended.

Batching considerations
The following sections provide more guidance for the use of batching in Azure SQL Database and Azure SQL
Managed Instance applications.
Tradeoffs
Depending on your architecture, batching can involve a tradeoff between performance and resiliency. For
example, consider the scenario where your role unexpectedly goes down. If you lose one row of data, the impact
is smaller than the impact of losing a large batch of unsubmitted rows. There is a greater risk when you buffer
rows before sending them to the database in a specified time window.
Because of this tradeoff, evaluate the type of operations that you batch. Batch more aggressively (larger batches
and longer time windows) with data that is less critical.
Batch size
In our tests, there was typically no advantage to breaking large batches into smaller chunks. In fact, this
subdivision often resulted in slower performance than submitting a single large batch. For example, consider a
scenario where you want to insert 1000 rows. The following table shows how long it takes to use table-valued
parameters to insert 1000 rows when divided into smaller batches.

BATCH SIZE    ITERATIONS    TABLE-VALUED PARAMETERS (MS)
1000          1             347
500           2             355
100           10            465
50            20            630

NOTE
Results are not benchmarks. See the note about timing results in this article.

You can see that the best performance for 1000 rows is to submit them all at once. In other tests (not shown
here), there was a small performance gain to break a 10000-row batch into two batches of 5000. But the table
schema for these tests is relatively simple, so you should perform tests on your specific data and batch sizes to
verify these findings.
Another factor to consider is that if the total batch becomes too large, Azure SQL Database or Azure SQL
Managed Instance might throttle and refuse to commit the batch. For the best results, test your specific scenario
to determine if there is an ideal batch size. Make the batch size configurable at runtime to enable quick
adjustments based on performance or errors.
Finally, balance the size of the batch with the risks associated with batching. If there are transient errors or the
role fails, consider the consequences of retrying the operation or of losing the data in the batch.
Parallel processing
What if you took the approach of reducing the batch size but used multiple threads to execute the work? Again,
our tests showed that several smaller multithreaded batches typically performed worse than a single larger
batch. The following test attempts to insert 1000 rows in one or more parallel batches. This test shows how
more simultaneous batches actually decreased performance.

BATCH SIZE [ITERATIONS]    TWO THREADS (MS)    FOUR THREADS (MS)    SIX THREADS (MS)
1000 [1]                   277                 315                  266
500 [2]                    548                 278                  256
250 [4]                    405                 329                  265
100 [10]                   488                 439                  391

NOTE
Results are not benchmarks. See the note about timing results in this article.

There are several potential reasons for the degradation in performance due to parallelism:
There are multiple simultaneous network calls instead of one.
Multiple operations against a single table can result in contention and blocking.
There are overheads associated with multithreading.
The expense of opening multiple connections outweighs the benefit of parallel processing.
If you target different tables or databases, it is possible to see some performance gain with this strategy.
Database sharding or federations would be a scenario for this approach. Sharding uses multiple databases and
routes different data to each database. If each small batch is going to a different database, then performing the
operations in parallel can be more efficient. However, the performance gain is not significant enough to use as
the basis for a decision to use database sharding in your solution.
In some designs, parallel execution of smaller batches can result in improved throughput of requests in a system
under load. In this case, even though it is quicker to process a single larger batch, processing multiple batches in
parallel might be more efficient.
If you do use parallel execution, consider controlling the maximum number of worker threads. A smaller
number might result in less contention and a faster execution time. Also, consider the additional load that this
places on the target database both in connections and transactions.
Related performance factors
Typical guidance on database performance also affects batching. For example, insert performance is reduced for
tables that have a large primary key or many nonclustered indexes.
If table-valued parameters use a stored procedure, you can use the command SET NOCOUNT ON at the
beginning of the procedure. This statement suppresses the return of the count of the affected rows in the
procedure. However, in our tests, the use of SET NOCOUNT ON either had no effect or decreased
performance. The test stored procedure was simple with a single INSERT command from the table-valued
parameter. It is possible that more complex stored procedures would benefit from this statement. But don't
assume that adding SET NOCOUNT ON to your stored procedure automatically improves performance. To
understand the effect, test your stored procedure with and without the SET NOCOUNT ON statement.
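If you want to measure the effect for your own workload, the following sketch shows the earlier procedure with SET NOCOUNT ON added under a hypothetical name, so you can time both versions side by side.

-- Hypothetical variant of sp_InsertRows used only to compare timings with and without SET NOCOUNT ON.
CREATE PROCEDURE [dbo].[sp_InsertRowsNoCount]
    @TestTvp AS MyTableType READONLY
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO MyTable(mytext, num)
    SELECT mytext, num FROM @TestTvp;
END
GO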
Batching scenarios
The following sections describe how to use table-valued parameters in three application scenarios. The first
scenario shows how buffering and batching can work together. The second scenario improves performance by
performing master-detail operations in a single stored procedure call. The final scenario shows how to use
table-valued parameters in an "UPSERT" operation.
Buffering
Although some scenarios are obvious candidates for batching, many scenarios could take advantage of batching
through delayed processing. However, delayed processing also carries a greater risk that the data is lost in the
event of an unexpected failure. It is important to understand this risk and consider the consequences.
For example, consider a web application that tracks the navigation history of each user. On each page request,
the application could make a database call to record the user's page view. But higher performance and scalability
can be achieved by buffering the users' navigation activities and then sending this data to the database in
batches. You can trigger the database update by elapsed time and/or buffer size. For example, a rule could
specify that the batch should be processed after 20 seconds or when the buffer reaches 1000 items.
The following code example uses Reactive Extensions - Rx to process buffered events raised by a monitoring
class. When the buffer fills or a timeout is reached, the batch of user data is sent to the database with a table-
valued parameter.
The following NavHistoryData class models the user navigation details. It contains basic information such as the
user identifier, the URL accessed, and the access time.

public class NavHistoryData
{
    public NavHistoryData(int userId, string url, DateTime accessTime)
    { UserId = userId; URL = url; AccessTime = accessTime; }
    public int UserId { get; set; }
    public string URL { get; set; }
    public DateTime AccessTime { get; set; }
}

The NavHistoryDataMonitor class is responsible for buffering the user navigation data to the database. It
contains a method, RecordUserNavigationEntry, which responds by raising an OnAdded event. The following
code shows the constructor logic that uses Rx to create an observable collection based on the event. It then
subscribes to this observable collection with the Buffer method. The overload specifies that the buffer should be
sent every 20 seconds or 1000 entries.

public NavHistoryDataMonitor()
{
var observableData =
Observable.FromEventPattern<NavHistoryDataEventArgs>(this, "OnAdded");

observableData.Buffer(TimeSpan.FromSeconds(20), 1000).Subscribe(Handler);
}

The handler converts all of the buffered items into a table-valued type and then passes this type to a stored
procedure that processes the batch. The following code shows the complete definition for both the
NavHistoryDataEventArgs and the NavHistoryDataMonitor classes.
public class NavHistoryDataEventArgs : System.EventArgs
{
public NavHistoryDataEventArgs(NavHistoryData data) { Data = data; }
public NavHistoryData Data { get; set; }
}

public class NavHistoryDataMonitor


{
public event EventHandler<NavHistoryDataEventArgs> OnAdded;

public NavHistoryDataMonitor()
{
var observableData =
Observable.FromEventPattern<NavHistoryDataEventArgs>(this, "OnAdded");

observableData.Buffer(TimeSpan.FromSeconds(20), 1000).Subscribe(Handler);
}


public void RecordUserNavigationEntry(NavHistoryData data)
{
    if (OnAdded != null)
        OnAdded(this, new NavHistoryDataEventArgs(data));
}

protected void Handler(IList<EventPattern<NavHistoryDataEventArgs>> items)


{
DataTable navHistoryBatch = new DataTable("NavigationHistoryBatch");
navHistoryBatch.Columns.Add("UserId", typeof(int));
navHistoryBatch.Columns.Add("URL", typeof(string));
navHistoryBatch.Columns.Add("AccessTime", typeof(DateTime));
foreach (EventPattern<NavHistoryDataEventArgs> item in items)
{
NavHistoryData data = item.EventArgs.Data;
navHistoryBatch.Rows.Add(data.UserId, data.URL, data.AccessTime);
}

using (SqlConnection connection = new


SqlConnection(CloudConfigurationManager.GetSetting("Sql.ConnectionString")))
{
connection.Open();

SqlCommand cmd = new SqlCommand("sp_RecordUserNavigation", connection);


cmd.CommandType = CommandType.StoredProcedure;

cmd.Parameters.Add(
new SqlParameter()
{
ParameterName = "@NavHistoryBatch",
SqlDbType = SqlDbType.Structured,
TypeName = "NavigationHistoryTableType",
Value = navHistoryBatch,
});

cmd.ExecuteNonQuery();
}
}
}

To use this buffering class, the application creates a static NavHistoryDataMonitor object. Each time a user
accesses a page, the application calls the NavHistoryDataMonitor.RecordUserNavigationEntry method. The
buffering logic proceeds to take care of sending these entries to the database in batches.
Master detail
Table-valued parameters are useful for simple INSERT scenarios. However, it can be more challenging to batch
inserts that involve more than one table. The "master/detail" scenario is a good example. The master table
identifies the primary entity. One or more detail tables store more data about the entity. In this scenario, foreign
key relationships enforce the relationship of details to a unique master entity. Consider a simplified version of a
PurchaseOrder table and its associated OrderDetail table. The following Transact-SQL creates the PurchaseOrder
table with four columns: OrderID, OrderDate, CustomerID, and Status.

CREATE TABLE [dbo].[PurchaseOrder](


[OrderID] [int] IDENTITY(1,1) NOT NULL,
[OrderDate] [datetime] NOT NULL,
[CustomerID] [int] NOT NULL,
[Status] [nvarchar](50) NOT NULL,
CONSTRAINT [PrimaryKey_PurchaseOrder]
PRIMARY KEY CLUSTERED ( [OrderID] ASC ))

Each order contains one or more product purchases. This information is captured in the PurchaseOrderDetail
table. The following Transact-SQL creates the PurchaseOrderDetail table with five columns: OrderID,
OrderDetailID, ProductID, UnitPrice, and OrderQty.

CREATE TABLE [dbo].[PurchaseOrderDetail](


[OrderID] [int] NOT NULL,
[OrderDetailID] [int] IDENTITY(1,1) NOT NULL,
[ProductID] [int] NOT NULL,
[UnitPrice] [money] NULL,
[OrderQty] [smallint] NULL,
CONSTRAINT [PrimaryKey_PurchaseOrderDetail] PRIMARY KEY CLUSTERED
( [OrderID] ASC, [OrderDetailID] ASC ))

The OrderID column in the PurchaseOrderDetail table must reference an order from the PurchaseOrder table.
The following definition of a foreign key enforces this constraint.

ALTER TABLE [dbo].[PurchaseOrderDetail] WITH CHECK ADD


CONSTRAINT [FK_OrderID_PurchaseOrder] FOREIGN KEY([OrderID])
REFERENCES [dbo].[PurchaseOrder] ([OrderID])

In order to use table-valued parameters, you must have one user-defined table type for each target table.

CREATE TYPE PurchaseOrderTableType AS TABLE


( OrderID INT,
OrderDate DATETIME,
CustomerID INT,
Status NVARCHAR(50) );
GO

CREATE TYPE PurchaseOrderDetailTableType AS TABLE


( OrderID INT,
ProductID INT,
UnitPrice MONEY,
OrderQty SMALLINT );
GO

Then define a stored procedure that accepts tables of these types. This procedure allows an application to locally
batch a set of orders and order details in a single call. The following Transact-SQL provides the complete stored
procedure declaration for this purchase order example.
CREATE PROCEDURE sp_InsertOrdersBatch (
@orders as PurchaseOrderTableType READONLY,
@details as PurchaseOrderDetailTableType READONLY )
AS
SET NOCOUNT ON;

-- Table that connects the order identifiers in the @orders


-- table with the actual order identifiers in the PurchaseOrder table
DECLARE @IdentityLink AS TABLE (
SubmittedKey int,
ActualKey int,
RowNumber int identity(1,1)
);

-- Add new orders to the PurchaseOrder table, storing the actual


-- order identifiers in the @IdentityLink table
INSERT INTO PurchaseOrder ([OrderDate], [CustomerID], [Status])
OUTPUT inserted.OrderID INTO @IdentityLink (ActualKey)
SELECT [OrderDate], [CustomerID], [Status] FROM @orders ORDER BY OrderID;

-- Match the passed-in order identifiers with the actual identifiers


-- and complete the @IdentityLink table for use with inserting the details
WITH OrderedRows As (
SELECT OrderID, ROW_NUMBER () OVER (ORDER BY OrderID) As RowNumber
FROM @orders
)
UPDATE @IdentityLink SET SubmittedKey = M.OrderID
FROM @IdentityLink L JOIN OrderedRows M ON L.RowNumber = M.RowNumber;

-- Insert the order details into the PurchaseOrderDetail table,


-- using the actual order identifiers of the master table, PurchaseOrder
INSERT INTO PurchaseOrderDetail (
[OrderID],
[ProductID],
[UnitPrice],
[OrderQty] )
SELECT L.ActualKey, D.ProductID, D.UnitPrice, D.OrderQty
FROM @details D
JOIN @IdentityLink L ON L.SubmittedKey = D.OrderID;
GO

In this example, the locally defined @IdentityLink table stores the actual OrderID values from the newly inserted
rows. These order identifiers are different from the temporary OrderID values in the @orders and @details
table-valued parameters. For this reason, the @IdentityLink table then connects the OrderID values from the
@orders parameter to the real OrderID values for the new rows in the PurchaseOrder table. After this step, the
@IdentityLink table can facilitate inserting the order details with the actual OrderID that satisfies the foreign key
constraint.
This stored procedure can be used from code or from other Transact-SQL calls. See the table-valued parameters
section of this paper for a code example. The following Transact-SQL shows how to call the
sp_InsertOrdersBatch.
declare @orders as PurchaseOrderTableType
declare @details as PurchaseOrderDetailTableType

INSERT @orders
([OrderID], [OrderDate], [CustomerID], [Status])
VALUES(1, '1/1/2013', 1125, 'Complete'),
(2, '1/13/2013', 348, 'Processing'),
(3, '1/12/2013', 2504, 'Shipped')

INSERT @details
([OrderID], [ProductID], [UnitPrice], [OrderQty])
VALUES(1, 10, $11.50, 1),
(1, 12, $1.58, 1),
(2, 23, $2.57, 2),
(3, 4, $10.00, 1)

exec sp_InsertOrdersBatch @orders, @details

This solution allows each batch to use a set of OrderID values that begin at 1. These temporary OrderID values
describe the relationships in the batch, but the actual OrderID values are determined at the time of the insert
operation. You can run the same statements in the previous example repeatedly and generate unique orders in
the database. For this reason, consider adding more code or database logic that prevents duplicate orders when
using this batching technique.
This example demonstrates that even more complex database operations, such as master-detail operations, can
be batched using table-valued parameters.
UPSERT
Another batching scenario involves simultaneously updating existing rows and inserting new rows. This
operation is sometimes referred to as an "UPSERT" (update + insert) operation. Rather than making separate
calls to INSERT and UPDATE, the MERGE statement can be a suitable replacement. The MERGE statement can
perform both insert and update operations in a single call. The MERGE statement locking mechanics work
differently from separate INSERT and UPDATE statements. Test your specific workloads before deploying to
production.
Table-valued parameters can be used with the MERGE statement to perform updates and inserts. For example,
consider a simplified Employee table that contains the following columns: EmployeeID, FirstName, LastName,
SocialSecurityNumber:

CREATE TABLE [dbo].[Employee](


[EmployeeID] [int] IDENTITY(1,1) NOT NULL,
[FirstName] [nvarchar](50) NOT NULL,
[LastName] [nvarchar](50) NOT NULL,
[SocialSecurityNumber] [nvarchar](50) NOT NULL,
CONSTRAINT [PrimaryKey_Employee] PRIMARY KEY CLUSTERED
([EmployeeID] ASC ))

In this example, you can use the fact that the SocialSecurityNumber is unique to perform a MERGE of multiple
employees. First, create the user-defined table type:

CREATE TYPE EmployeeTableType AS TABLE


( Employee_ID INT,
FirstName NVARCHAR(50),
LastName NVARCHAR(50),
SocialSecurityNumber NVARCHAR(50) );
GO

Next, create a stored procedure or write code that uses the MERGE statement to perform the update and insert.
The following example uses the MERGE statement on a table-valued parameter, @employees, of type
EmployeeTableType. The contents of the @employees table are not shown here.

MERGE Employee AS target


USING (SELECT [FirstName], [LastName], [SocialSecurityNumber] FROM @employees)
AS source ([FirstName], [LastName], [SocialSecurityNumber])
ON (target.[SocialSecurityNumber] = source.[SocialSecurityNumber])
WHEN MATCHED THEN
UPDATE SET
target.FirstName = source.FirstName,
target.LastName = source.LastName
WHEN NOT MATCHED THEN
INSERT ([FirstName], [LastName], [SocialSecurityNumber])
VALUES (source.[FirstName], source.[LastName], source.[SocialSecurityNumber]);

For more information, see the documentation and examples for the MERGE statement. Although the same work
could be performed in a multiple-step stored procedure call with separate INSERT and UPDATE operations, the
MERGE statement is more efficient. Database code can also construct Transact-SQL calls that use the MERGE
statement directly without requiring two database calls for INSERT and UPDATE.

Recommendation summary
The following list provides a summary of the batching recommendations discussed in this article:
Use buffering and batching to increase the performance and scalability of Azure SQL Database and Azure
SQL Managed Instance applications.
Understand the tradeoffs between batching/buffering and resiliency. During a role failure, the risk of losing
an unprocessed batch of business-critical data might outweigh the performance benefit of batching.
Attempt to keep all calls to the database within a single datacenter to reduce latency.
If you choose a single batching technique, table-valued parameters offer the best performance and flexibility.
For the fastest insert performance, follow these general guidelines but test your scenario:
For < 100 rows, use a single parameterized INSERT command.
For < 1000 rows, use table-valued parameters.
For >= 1000 rows, use SqlBulkCopy.
For update and delete operations, use table-valued parameters with stored procedure logic that determines
the correct operation on each row in the table parameter.
Batch size guidelines:
Use the largest batch sizes that make sense for your application and business requirements.
Balance the performance gain of large batches with the risks of temporary or catastrophic failures.
What is the consequence of retries or loss of the data in the batch?
Test the largest batch size to verify that Azure SQL Database or Azure SQL Managed Instance does not
reject it.
Create configuration settings that control batching, such as the batch size or the buffering time
window. These settings provide flexibility. You can change the batching behavior in production without
redeploying the cloud service.
Avoid parallel execution of batches that operate on a single table in one database. If you do choose to divide
a single batch across multiple worker threads, run tests to determine the ideal number of threads. After an
unspecified threshold, more threads will decrease performance rather than increase it.
Consider buffering on size and time as a way of implementing batching for more scenarios.

Next steps
This article focused on how database design and coding techniques related to batching can improve your
application performance and scalability. But this is just one factor in your overall strategy. For more ways to
improve performance and scalability, see Database performance guidance and Price and performance
considerations for an elastic pool.
Load data from CSV into Azure SQL Database or
SQL Managed Instance (flat files)
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


You can use the bcp command-line utility to import data from a CSV file into Azure SQL Database or Azure SQL
Managed Instance.

Before you begin


Prerequisites
To complete the steps in this article, you need:
A database in Azure SQL Database
The bcp command-line utility installed
The sqlcmd command-line utility installed
You can download the bcp and sqlcmd utilities from the Microsoft sqlcmd Documentation.
Data in ASCII or UTF -16 format
If you are trying this tutorial with your own data, your data needs to use the ASCII or UTF-16 encoding since bcp
does not support UTF-8.

1. Create a destination table


Define a table in SQL Database as the destination table. The columns in the table must correspond to the data in
each row of your data file.
To create a table, open a command prompt and use sqlcmd.exe to run the following command:

sqlcmd.exe -S <server name> -d <database name> -U <username> -P <password> -I -Q "
CREATE TABLE DimDate2
(
    DateId INT NOT NULL,
    CalendarQuarter TINYINT NOT NULL,
    FiscalQuarter TINYINT NOT NULL
);
"

2. Create a source data file


Open Notepad and copy the following lines of data into a new text file and then save this file to your local temp
directory, C:\Temp\DimDate2.txt. This data is in ASCII format.
20150301,1,3
20150501,2,4
20151001,4,2
20150201,1,3
20151201,4,2
20150801,3,1
20150601,2,4
20151101,4,2
20150401,2,4
20150701,3,1
20150901,3,1
20150101,1,3

(Optional) To export your own data from a SQL Server database, open a command prompt and run the
following command. Replace TableName, ServerName, DatabaseName, Username, and Password with your own
information.

bcp <TableName> out C:\Temp\DimDate2_export.txt -S <ServerName> -d <DatabaseName> -U <Username> -P <Password> -q -c -t ","

3. Load the data


To load the data, open a command prompt and run the following command, replacing the values for Server
Name, Database name, Username, and Password with your own information.

bcp DimDate2 in C:\Temp\DimDate2.txt -S <ServerName> -d <DatabaseName> -U <Username> -P <password> -q -c -t ","

Use this command to verify that the data was loaded properly:

sqlcmd.exe -S <server name> -d <database name> -U <username> -P <password> -I -Q "SELECT * FROM DimDate2
ORDER BY 1;"

The results should look like this:

DATEID      CALENDARQUARTER    FISCALQUARTER
20150101    1                  3
20150201    1                  3
20150301    1                  3
20150401    2                  4
20150501    2                  4
20150601    2                  4
20150701    3                  1
20150801    3                  1
20150901    3                  1
20151001    4                  2
20151101    4                  2
20151201    4                  2

Next steps
To migrate a SQL Server database, see SQL Server database migration.
Tune applications and databases for performance in
Azure SQL Database and Azure SQL Managed
Instance
7/12/2022 • 18 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Once you have identified a performance issue that you are facing with Azure SQL Database and Azure SQL
Managed Instance, this article is designed to help you:
Tune your application and apply some best practices that can improve performance.
Tune the database by changing indexes and queries to more efficiently work with data.
This article assumes that you have already worked through the Azure SQL Database database advisor
recommendations and the Azure SQL Database auto-tuning recommendations, if applicable. It also assumes
that you have reviewed the overview of monitoring and tuning and its related articles about troubleshooting
performance issues. Additionally, this article assumes that you do not have a CPU resource-related performance
issue that can be resolved by increasing the compute size or service tier to provide more resources
to your database.

Tune your application


In traditional on-premises SQL Server, the process of initial capacity planning often is separated from the
process of running an application in production. Hardware and product licenses are purchased first, and
performance tuning is done afterward. When you use Azure SQL, it's a good idea to interweave the process of
running an application and tuning it. With the model of paying for capacity on demand, you can tune your
application to use the minimum resources needed now, instead of over-provisioning on hardware based on
guesses of future growth plans for an application, which often are incorrect. Some customers might choose not
to tune an application, and instead choose to over-provision hardware resources. This approach might be a
good idea if you don't want to change a key application during a busy period. But, tuning an application can
minimize resource requirements and lower monthly bills when you use the service tiers in Azure SQL Database
and Azure SQL Managed Instance.
Application characteristics
Although Azure SQL Database and Azure SQL Managed Instance service tiers are designed to improve
performance stability and predictability for an application, some best practices can help you tune your
application to better take advantage of the resources at a compute size. Although many applications have
significant performance gains simply by switching to a higher compute size or service tier, some applications
need additional tuning to benefit from a higher level of service. For increased performance, consider additional
application tuning for applications that have these characteristics:
Applications that have slow performance because of "chatty" behavior
Chatty applications make excessive data access operations that are sensitive to network latency. You
might need to modify these kinds of applications to reduce the number of data access operations to the
database. For example, you might improve application performance by using techniques like batching ad
hoc queries or moving the queries to stored procedures. For more information, see Batch queries.
Databases with an intensive workload that can't be supported by an entire single machine
Databases that exceed the resources of the highest Premium compute size might benefit from scaling out
the workload. For more information, see Cross-database sharding and Functional partitioning.
Applications that have suboptimal queries
Applications, especially those in the data access layer, that have poorly tuned queries might not benefit
from a higher compute size. This includes queries that lack a WHERE clause, have missing indexes, or
have outdated statistics. These applications benefit from standard query performance-tuning techniques.
For more information, see Missing indexes and Query tuning and hinting.
Applications that have suboptimal data access design
Applications that have inherent data access concurrency issues, for example deadlocking, might not
benefit from a higher compute size. Consider reducing round trips against the database by caching data
on the client side with the Azure Caching service or another caching technology. See Application tier
caching.
To prevent deadlocks from reoccurring in Azure SQL Database, see Analyze and prevent deadlocks in
Azure SQL Database. For Azure SQL Managed Instance, refer to the Deadlocks of the Transaction locking
and row versioning guide.

Tune your database


In this section, we look at some techniques that you can use to tune database to gain the best performance for
your application and run it at the lowest possible compute size. Some of these techniques match traditional SQL
Server tuning best practices, but others are specific to Azure SQL Database and Azure SQL Managed Instance. In
some cases, you can examine the consumed resources for a database to find areas to further tune and extend
traditional SQL Server techniques to work in Azure SQL Database and Azure SQL Managed Instance.
Identifying and adding missing indexes
A common problem in OLTP database performance relates to the physical database design. Often, database
schemas are designed and shipped without testing at scale (either in load or in data volume). Unfortunately, the
performance of a query plan might be acceptable on a small scale but degrade substantially under production-
level data volumes. The most common source of this issue is the lack of appropriate indexes to satisfy filters or
other restrictions in a query. Often, missing indexes manifests as a table scan when an index seek could suffice.
In this example, the selected query plan uses a scan when a seek would suffice:

DROP TABLE dbo.missingindex;


CREATE TABLE dbo.missingindex (col1 INT IDENTITY PRIMARY KEY, col2 INT);
DECLARE @a int = 0;
SET NOCOUNT ON;
BEGIN TRANSACTION
WHILE @a < 20000
BEGIN
INSERT INTO dbo.missingindex(col2) VALUES (@a);
SET @a += 1;
END
COMMIT TRANSACTION;
GO
SELECT m1.col1
FROM dbo.missingindex m1 INNER JOIN dbo.missingindex m2 ON(m1.col1=m2.col1)
WHERE m1.col2 = 4;
Azure SQL Database and Azure SQL Managed Instance can help you find and fix common missing index
conditions. DMVs that are built into Azure SQL Database and Azure SQL Managed Instance look at query
compilations in which an index would significantly reduce the estimated cost to run a query. During query
execution, the database engine tracks how often each query plan is executed, and tracks the estimated gap
between the executing query plan and the imagined one where that index existed. You can use these DMVs to
quickly guess which changes to your physical database design might improve overall workload cost for a
database and its real workload.
You can use this query to evaluate potential missing indexes:

SELECT
CONVERT (varchar, getdate(), 126) AS runtime
, mig.index_group_handle
, mid.index_handle
, CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact *
(migs.user_seeks + migs.user_scans)) AS improvement_measure
, 'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' +
CONVERT (varchar, mid.index_handle) + ' ON ' + mid.statement + '
(' + ISNULL (mid.equality_columns,'')
+ CASE WHEN mid.equality_columns IS NOT NULL
AND mid.inequality_columns IS NOT NULL
THEN ',' ELSE '' END + ISNULL (mid.inequality_columns, '') + ')'
+ ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement
, migs.*
, mid.database_id
, mid.[object_id]
FROM sys.dm_db_missing_index_groups AS mig
INNER JOIN sys.dm_db_missing_index_group_stats AS migs
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid
ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC

In this example, the query resulted in this suggestion:

CREATE INDEX missing_index_5006_5005 ON [dbo].[missingindex] ([col2])

After it's created, that same SELECT statement picks a different plan, which uses a seek instead of a scan, and
then executes the plan more efficiently.

The key insight is that the IO capacity of a shared, commodity system is more limited than that of a dedicated
server machine. There's a premium on minimizing unnecessary IO to take maximum advantage of the system in
the resources of each compute size of the service tiers. Appropriate physical database design choices can
significantly improve the latency for individual queries, improve the throughput of concurrent requests handled
per scale unit, and minimize the costs required to satisfy the query.
For more information about tuning indexes using missing index requests, see Tune nonclustered indexes with
missing index suggestions.
Query tuning and hinting
The query optimizer in Azure SQL Database and Azure SQL Managed Instance is similar to the traditional SQL
Server query optimizer. Most of the best practices for tuning queries and understanding the reasoning model
limitations for the query optimizer also apply to Azure SQL Database and Azure SQL Managed Instance. If you
tune queries in Azure SQL Database and Azure SQL Managed Instance, you might get the additional benefit of
reducing aggregate resource demands. Your application might be able to run at a lower cost than an un-tuned
equivalent because it can run at a lower compute size.
An example that is common in SQL Server and which also applies to Azure SQL Database and Azure SQL
Managed Instance is how the query optimizer "sniffs" parameters. During compilation, the query optimizer
evaluates the current value of a parameter to determine whether it can generate a more optimal query plan.
Although this strategy often can lead to a query plan that is significantly faster than a plan compiled without
known parameter values, it currently works imperfectly in SQL Server, Azure SQL Database, and Azure SQL
Managed Instance alike. Sometimes the parameter is not sniffed, and sometimes the parameter is sniffed but the
generated plan is suboptimal for the full set of parameter values in a workload. Microsoft includes query hints
(directives) so that you can specify intent more deliberately and override the default behavior of parameter
sniffing. Often, if you use hints, you can fix cases in which the default SQL Server, Azure SQL Database, and
Azure SQL Managed Instance behavior is imperfect for a specific customer workload.
The next example demonstrates how the query processor can generate a plan that is suboptimal both for
performance and resource requirements. This example also shows that if you use a query hint, you can reduce
query run time and resource requirements for your database:
DROP TABLE psptest1;
CREATE TABLE psptest1(col1 int primary key identity, col2 int, col3 binary(200));
DECLARE @a int = 0;
SET NOCOUNT ON;
BEGIN TRANSACTION
WHILE @a < 20000
BEGIN
INSERT INTO psptest1(col2) values (1);
INSERT INTO psptest1(col2) values (@a);
SET @a += 1;
END
COMMIT TRANSACTION
CREATE INDEX i1 on psptest1(col2);
GO

CREATE PROCEDURE psp1 (@param1 int)


AS
BEGIN
INSERT INTO t1 SELECT * FROM psptest1
WHERE col2 = @param1
ORDER BY col2;
END
GO

CREATE PROCEDURE psp2 (@param2 int)


AS
BEGIN
INSERT INTO t1 SELECT * FROM psptest1 WHERE col2 = @param2
ORDER BY col2
OPTION (OPTIMIZE FOR (@param2 UNKNOWN))
END
GO

CREATE TABLE t1 (col1 int primary key, col2 int, col3 binary(200));
GO

The setup code creates a table that has skewed data distribution. The optimal query plan differs based on which
parameter is selected. Unfortunately, the plan caching behavior doesn't always recompile the query based on
the most common parameter value. So, it's possible for a suboptimal plan to be cached and used for many
values, even when a different plan might be a better choice on average. The setup code then creates two
stored procedures that are identical, except that one has a special query hint.

-- Prime Procedure Cache with scan plan


EXEC psp1 @param1=1;
TRUNCATE TABLE t1;

-- Iterate multiple times to show the performance difference


DECLARE @i int = 0;
WHILE @i < 1000
BEGIN
EXEC psp1 @param1=2;
TRUNCATE TABLE t1;
SET @i += 1;
END

We recommend that you wait at least 10 minutes before you begin part 2 of the example, so that the results are
distinct in the resulting telemetry data.
EXEC psp2 @param2=1;
TRUNCATE TABLE t1;

DECLARE @i int = 0;
WHILE @i < 1000
BEGIN
EXEC psp2 @param2=2;
TRUNCATE TABLE t1;
SET @i += 1;
END

Each part of this example attempts to run a parameterized insert statement 1,000 times (to generate a sufficient
load to use as a test data set). When it executes stored procedures, the query processor examines the parameter
value that is passed to the procedure during its first compilation (parameter "sniffing"). The processor caches the
resulting plan and uses it for later invocations, even if the parameter value is different. The optimal plan might
not be used in all cases. Sometimes you need to guide the optimizer to pick a plan that is better for the average
case rather than the specific case from when the query was first compiled. In this example, the initial plan
generates a "scan" plan that reads all rows to find each value that matches the parameter:

Because we executed the procedure by using the value 1, the resulting plan was optimal for the value 1 but was
suboptimal for all other values in the table. That is probably not the plan you would want if you picked one at
random, because it performs more slowly and uses more resources.
If you run the test with SET STATISTICS IO set to ON, you can see the logical scan work done behind the
scenes. The plan performs 1,148 reads, which is inefficient if the average case is to return just one row:

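If you want to reproduce that measurement, a minimal sketch such as the following (reusing the psp1 procedure and t1 table created above) shows the logical reads in the Messages output, for example in SQL Server Management Studio:

SET STATISTICS IO ON;
-- Re-run the procedure; it reuses the scan plan that was cached for @param1 = 1
EXEC psp1 @param1 = 2;
TRUNCATE TABLE t1;
SET STATISTICS IO OFF;
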
The second part of the example uses a query hint to tell the optimizer to use a specific value during the
compilation process. In this case, it forces the query processor to ignore the value that is passed as the
parameter, and instead to assume UNKNOWN . This refers to a value that has the average frequency in the table
(ignoring skew). The resulting plan is a seek-based plan that is faster and uses fewer resources, on average, than
the plan in part 1 of this example:

You can see the effect in the sys.resource_stats table (there is a delay from the time that you execute the test
and when the data populates the table). For this example, part 1 executed during the 22:25:00 time window, and
part 2 executed at 22:35:00. The earlier time window used more resources than the later one, because of the
plan efficiency improvements.
SELECT TOP 1000 *
FROM sys.resource_stats
WHERE database_name = 'resource1'
ORDER BY start_time DESC

NOTE
Although the volume in this example is intentionally small, the effect of suboptimal parameters can be substantial,
especially on larger databases. The difference, in extreme cases, can be between seconds for fast cases and hours for slow
cases.

You can examine sys.resource_stats to determine whether the resource for a test uses more or fewer
resources than another test. When you compare data, separate the timing of tests so that they are not in the
same 5-minute window in the sys.resource_stats view. The goal of the exercise is to minimize the total
amount of resources used, and not to minimize the peak resources. Generally, optimizing a piece of code for
latency also reduces resource consumption. Make sure that the changes you make to an application are
necessary, and that the changes don't negatively affect the customer experience for someone who might be
using query hints in the application.
If a workload has a set of repeating queries, often it makes sense to capture and validate the optimality of your
plan choices because it drives the minimum resource size unit required to host the database. After you validate
it, occasionally reexamine the plans to help you make sure that they have not degraded. You can learn more
about query hints (Transact-SQL).
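For example, if you've validated a plan in Query Store and want to keep it from regressing, you can force it. The following is a minimal sketch; the @query_id and @plan_id values are placeholders that you would take from the Query Store catalog views:

-- Find candidate query and plan identifiers (the IDs used below are placeholders)
SELECT q.query_id, p.plan_id, qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id;

-- Force the validated plan for that query (placeholder IDs)
EXEC sp_query_store_force_plan @query_id = 1, @plan_id = 1;
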
Very large database architectures
Before the release of the Hyperscale service tier for single databases in Azure SQL Database, customers used to
hit capacity limits for individual databases. These capacity limits still exist for pooled databases in Azure SQL
Database elastic pools and instance databases in Azure SQL Managed Instance. The following two sections
discuss two options for solving problems with very large databases in Azure SQL Database and Azure SQL
Managed Instance when you cannot use the Hyperscale service tier.
Cross-database sharding
Because Azure SQL Database and Azure SQL Managed Instance runs on commodity hardware, the capacity
limits for an individual database are lower than for a traditional on-premises SQL Server installation. Some
customers use sharding techniques to spread database operations over multiple databases when the operations
don't fit inside the limits of an individual database in Azure SQL Database and Azure SQL Managed Instance.
Most customers who use sharding techniques in Azure SQL Database and Azure SQL Managed Instance split
their data on a single dimension across multiple databases. For this approach, you need to understand that OLTP
applications often perform transactions that apply to only one row or to a small group of rows in the schema.

NOTE
Azure SQL Database now provides a library to assist with sharding. For more information, see Elastic Database client
library overview.

For example, if a database has customer name, order, and order details (like the traditional example Northwind
database that ships with SQL Server), you could split this data into multiple databases by grouping a customer
with the related order and order detail information. You can guarantee that the customer's data stays in an
individual database. The application would split different customers across databases, effectively spreading the
load across multiple databases. With sharding, customers not only can avoid the maximum database size limit,
but Azure SQL Database and Azure SQL Managed Instance also can process workloads that are significantly
larger than the limits of the different compute sizes, as long as each individual database fits into its service tier
limits.
Although database sharding doesn't reduce the aggregate resource capacity for a solution, it's highly effective at
supporting very large solutions that are spread over multiple databases. Each database can run at a different
compute size to support very large, "effective" databases with high resource requirements.
Functional partitioning
Users often combine many functions in an individual database. For example, if an application has logic to
manage inventory for a store, that database might have logic associated with inventory, tracking purchase
orders, stored procedures, and indexed or materialized views that manage end-of-month reporting. This
technique makes it easier to administer the database for operations like backup, but it also requires you to size
the hardware to handle the peak load across all functions of an application.
If you use a scale-out architecture in Azure SQL Database and Azure SQL Managed Instance, it's a good idea to
split different functions of an application into different databases. By using this technique, each application
scales independently. As an application becomes busier (and the load on the database increases), the
administrator can choose independent compute sizes for each function in the application. At the limit, with this
architecture, an application can be larger than a single commodity machine can handle because the load is
spread across multiple machines.
Batch queries
For applications that access data by using high-volume, frequent, ad hoc querying, a substantial amount of
response time is spent on network communication between the application tier and the database tier. Even when
both the application and the database are in the same data center, the network latency between the two might
be magnified by a large number of data access operations. To reduce the network round trips for the data access
operations, consider using the option to either batch the ad hoc queries, or to compile them as stored
procedures. If you batch the ad hoc queries, you can send multiple queries as one large batch in a single trip to
the database. If you compile ad hoc queries in a stored procedure, you could achieve the same result as if you
batch them. Using a stored procedure also gives you the benefit of increasing the chances of caching the query
plans in the database so you can use the stored procedure again.
Some applications are write-intensive. Sometimes you can reduce the total IO load on a database by considering
how to batch writes together. Often, this is as simple as using explicit transactions instead of auto-commit
transactions in stored procedures and ad hoc batches. For an evaluation of different techniques you can use, see
Batching techniques for database applications in Azure. Experiment with your own workload to find the right
model for batching. Be sure to understand that a model might have slightly different transactional consistency
guarantees. Finding the right workload that minimizes resource use requires finding the right combination of
consistency and performance trade-offs.
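As a simple illustration of batching writes, wrapping a group of INSERT statements in one explicit transaction commits once for the whole group instead of once per statement, which typically reduces log IO. This is a minimal sketch against a hypothetical dbo.Orders table:

-- Hypothetical table, for illustration only
-- Auto-commit: each statement is its own transaction
INSERT INTO dbo.Orders (CustomerId, Amount) VALUES (1, 10.00);
INSERT INTO dbo.Orders (CustomerId, Amount) VALUES (2, 25.00);

-- Batched: one explicit transaction for the whole group
BEGIN TRANSACTION;
INSERT INTO dbo.Orders (CustomerId, Amount) VALUES (1, 10.00);
INSERT INTO dbo.Orders (CustomerId, Amount) VALUES (2, 25.00);
COMMIT TRANSACTION;
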
Application-tier caching
Some database applications have read-heavy workloads. Caching layers might reduce the load on the database
and might potentially reduce the compute size required to support a database by using Azure SQL Database
and Azure SQL Managed Instance. With Azure Cache for Redis, if you have a read-heavy workload, you can read
the data once (or perhaps once per application-tier machine, depending on how it is configured), and then store
that data outside of your database. This is a way to reduce database load (CPU and read IO), but there is an
effect on transactional consistency because the data being read from the cache might be out of sync with the
data in the database. Although in many applications some level of inconsistency is acceptable, that's not true for
all workloads. You should fully understand any application requirements before you implement an application-
tier caching strategy.
Get configuration and design tips
If you use Azure SQL Database, you can execute an open-source T-SQL script for improving database
configuration and design in Azure SQL DB. The script will analyze your database on demand and provide tips to
improve database performance and health. Some tips suggest configuration and operational changes based on
best practices, while other tips recommend design changes suitable for your workload, such as enabling
advanced database engine features.
To learn more about the script and get started, visit the Azure SQL Tips wiki page.

Next steps
Learn about the DTU-based purchasing model
Learn more about the vCore-based purchasing model
Read What is an Azure elastic pool?
Discover When to consider an elastic pool
Read about Monitoring Microsoft Azure SQL Database and Azure SQL Managed Instance performance using
dynamic management views
Learn to Diagnose and troubleshoot high CPU on Azure SQL Database
Tune nonclustered indexes with missing index suggestions
Video: Data Loading Best Practices on Azure SQL Database
Monitoring Microsoft Azure SQL Database and
Azure SQL Managed Instance performance using
dynamic management views

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Microsoft Azure SQL Database and Azure SQL Managed Instance enable a subset of dynamic management
views to diagnose performance problems, which might be caused by blocked or long-running queries, resource
bottlenecks, poor query plans, and so on. This article provides information on how to detect common
performance problems by using dynamic management views.
Microsoft Azure SQL Database and Azure SQL Managed Instance partially support three categories of dynamic
management views:
Database-related dynamic management views.
Execution-related dynamic management views.
Transaction-related dynamic management views.
For detailed information on dynamic management views, see Dynamic Management Views and Functions
(Transact-SQL).

Monitor with SQL Insights (preview)


Azure Monitor SQL Insights (preview) is a tool for monitoring Azure SQL managed instances, databases in Azure
SQL Database, and SQL Server instances in Azure SQL VMs. This service uses a remote agent to capture data
from dynamic management views (DMVs) and routes the data to Azure Log Analytics, where it can be
monitored and analyzed. You can view this data from Azure Monitor in provided views, or access the Log data
directly to run queries and analyze trends. To start using Azure Monitor SQL Insights (preview), see Enable SQL
Insights (preview).

Permissions
In Azure SQL Database, querying a dynamic management view requires VIEW DATABASE STATE permissions.
The VIEW DATABASE STATE permission returns information about all objects within the current database. To
grant the VIEW DATABASE STATE permission to a specific database user, run the following query:

GRANT VIEW DATABASE STATE TO database_user;

In Azure SQL Managed Instance, querying a dynamic management view requires VIEW SERVER STATE
permissions. For more information, see System Dynamic Management Views.
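For example, to grant the permission to a login on a managed instance (login_name is a placeholder), you can run:

GRANT VIEW SERVER STATE TO login_name;
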
In an instance of SQL Server and in Azure SQL Managed Instance, dynamic management views return server
state information. In Azure SQL Database, they return information regarding your current logical database only.
This article contains a collection of DMV queries that you can execute using SQL Server Management Studio or
Azure Data Studio to detect the following types of query performance issues:
Identifying queries related to excessive CPU consumption
PAGEIOLATCH_* and WRITE_LOG waits related to IO bottlenecks
PAGELATCH_* waits caused by tempdb contention
RESOURCE_SEMAPHORE waits caused by memory grant waiting issues
Identifying database and object sizes
Retrieving information about active sessions
Retrieving system-wide and database resource usage information
Retrieving query performance information

Identify CPU performance issues


If CPU consumption is above 80% for extended periods of time, consider the following troubleshooting steps:
The CPU issue is occurring now
If issue is occurring right now, there are two possible scenarios:
Many individual queries that cumulatively consume high CPU
Use the following query to identify top query hashes:

PRINT '-- top 10 Active CPU Consuming Queries (aggregated)--';


SELECT TOP 10 GETDATE() runtime, *
FROM (SELECT query_stats.query_hash, SUM(query_stats.cpu_time) 'Total_Request_Cpu_Time_Ms',
SUM(logical_reads) 'Total_Request_Logical_Reads', MIN(start_time) 'Earliest_Request_start_Time', COUNT(*)
'Number_Of_Requests', SUBSTRING(REPLACE(REPLACE(MIN(query_stats.statement_text), CHAR(10), ' '), CHAR(13), '
'), 1, 256) AS "Statement_Text"
FROM (SELECT req.*, SUBSTRING(ST.text, (req.statement_start_offset / 2)+1, ((CASE statement_end_offset
WHEN -1 THEN DATALENGTH(ST.text)ELSE req.statement_end_offset END-req.statement_start_offset)/ 2)+1) AS
statement_text
FROM sys.dm_exec_requests AS req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS ST ) AS query_stats
GROUP BY query_hash) AS t
ORDER BY Total_Request_Cpu_Time_Ms DESC;

Long running queries that consume CPU are still running


Use the following query to identify these queries:

PRINT '--top 10 Active CPU Consuming Queries by sessions--';


SELECT TOP 10 req.session_id, req.start_time, cpu_time 'cpu_time_ms', OBJECT_NAME(ST.objectid, ST.dbid)
'ObjectName', SUBSTRING(REPLACE(REPLACE(SUBSTRING(ST.text, (req.statement_start_offset / 2)+1, ((CASE
statement_end_offset WHEN -1 THEN DATALENGTH(ST.text)ELSE req.statement_end_offset END-
req.statement_start_offset)/ 2)+1), CHAR(10), ' '), CHAR(13), ' '), 1, 512) AS statement_text
FROM sys.dm_exec_requests AS req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS ST
ORDER BY cpu_time DESC;
GO

The CPU issue occurred in the past


If the issue occurred in the past and you want to do root cause analysis, use Query Store. Users with database
access can use T-SQL to query Query Store data. Query Store default configurations use a granularity of 1 hour.
Use the following query to look at activity for high CPU consuming queries. This query returns the top 15 CPU
consuming queries. Adjust the time window as needed by changing rsi.start_time >= DATEADD(hour, -2, GETUTCDATE()):
-- Top 15 CPU consuming queries by query hash
-- note that a query hash can have many query id if not parameterized or not parameterized properly
-- it grabs a sample query text by min
WITH AggregatedCPU AS (SELECT q.query_hash, SUM(count_executions * avg_cpu_time / 1000.0) AS
total_cpu_millisec, SUM(count_executions * avg_cpu_time / 1000.0)/ SUM(count_executions) AS
avg_cpu_millisec, MAX(rs.max_cpu_time / 1000.00) AS max_cpu_millisec, MAX(max_logical_io_reads)
max_logical_reads, COUNT(DISTINCT p.plan_id) AS number_of_distinct_plans, COUNT(DISTINCT p.query_id) AS
number_of_distinct_query_ids, SUM(CASE WHEN rs.execution_type_desc='Aborted' THEN count_executions ELSE 0
END) AS Aborted_Execution_Count, SUM(CASE WHEN rs.execution_type_desc='Regular' THEN count_executions ELSE 0
END) AS Regular_Execution_Count, SUM(CASE WHEN rs.execution_type_desc='Exception' THEN count_executions ELSE
0 END) AS Exception_Execution_Count, SUM(count_executions) AS total_executions, MIN(qt.query_sql_text) AS
sampled_query_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id=q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id=p.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id=p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi ON
rsi.runtime_stats_interval_id=rs.runtime_stats_interval_id
WHERE rs.execution_type_desc IN ('Regular', 'Aborted', 'Exception')AND
rsi.start_time>=DATEADD(HOUR, -2, GETUTCDATE())
GROUP BY q.query_hash), OrderedCPU AS (SELECT query_hash, total_cpu_millisec,
avg_cpu_millisec, max_cpu_millisec, max_logical_reads, number_of_distinct_plans,
number_of_distinct_query_ids, total_executions, Aborted_Execution_Count, Regular_Execution_Count,
Exception_Execution_Count, sampled_query_text, ROW_NUMBER() OVER (ORDER BY total_cpu_millisec DESC,
query_hash ASC) AS RN
FROM AggregatedCPU)
SELECT OD.query_hash, OD.total_cpu_millisec, OD.avg_cpu_millisec, OD.max_cpu_millisec, OD.max_logical_reads,
OD.number_of_distinct_plans, OD.number_of_distinct_query_ids, OD.total_executions,
OD.Aborted_Execution_Count, OD.Regular_Execution_Count, OD.Exception_Execution_Count, OD.sampled_query_text,
OD.RN
FROM OrderedCPU AS OD
WHERE OD.RN<=15
ORDER BY total_cpu_millisec DESC;

Once you identify the problematic queries, it's time to tune those queries to reduce CPU utilization. If you don't
have time to tune the queries, you may also choose to upgrade the SLO of the database to work around the
issue.
For Azure SQL Database users, learn more about handling CPU performance problems in Diagnose and
troubleshoot high CPU on Azure SQL Database

Identify IO performance issues


When identifying IO performance issues, the top wait types associated with IO issues are:
PAGEIOLATCH_*

For data file IO issues (including PAGEIOLATCH_SH , PAGEIOLATCH_EX , PAGEIOLATCH_UP ). If the wait type name
has IO in it, it points to an IO issue. If there is no IO in the page latch wait name, it points to a different
type of problem (for example, tempdb contention).
WRITE_LOG

For transaction log IO issues.


If the IO issue is occurring right now
Use the sys.dm_exec_requests or sys.dm_os_waiting_tasks views to see the wait_type and wait_time.
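For example, a minimal sketch against sys.dm_exec_requests that lists the waits for currently executing requests:

SELECT req.session_id, req.status, req.command, req.wait_type, req.wait_time, req.last_wait_type
FROM sys.dm_exec_requests AS req
WHERE req.session_id <> @@SPID
ORDER BY req.wait_time DESC;
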
Identify data and log IO usage
Use the following query to identify data and log IO usage. If the data or log IO is above 80%, it means users
have used the available IO for the SQL Database service tier.
SELECT end_time, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

If the IO limit has been reached, you have two options:


Option 1: Upgrade the compute size or service tier
Option 2: Identify and tune the queries consuming the most IO.
View buffer-related IO using the Query Store
For option 2, you can use the following query against Query Store for buffer-related IO to view the last two
hours of tracked activity:

-- top queries that waited on buffer


-- note these are finished queries
WITH Aggregated AS (SELECT q.query_hash, SUM(total_query_wait_time_ms) total_wait_time_ms,
SUM(total_query_wait_time_ms / avg_query_wait_time_ms) AS total_executions, MIN(qt.query_sql_text) AS
sampled_query_text, MIN(wait_category_desc) AS wait_category_desc
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id=q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id=p.query_id
JOIN sys.query_store_wait_stats AS waits ON waits.plan_id=p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi ON
rsi.runtime_stats_interval_id=waits.runtime_stats_interval_id
WHERE wait_category_desc='Buffer IO' AND rsi.start_time>=DATEADD(HOUR, -2, GETUTCDATE())
GROUP BY q.query_hash), Ordered AS (SELECT query_hash, total_executions,
total_wait_time_ms, sampled_query_text, wait_category_desc, ROW_NUMBER() OVER (ORDER BY total_wait_time_ms
DESC, query_hash ASC) AS RN
FROM Aggregated)
SELECT OD.query_hash, OD.total_executions, OD.total_wait_time_ms, OD.sampled_query_text,
OD.wait_category_desc, OD.RN
FROM Ordered AS OD
WHERE OD.RN<=15
ORDER BY total_wait_time_ms DESC;
GO

View total log IO for WRITELOG waits


If the wait type is WRITELOG , use the following query to view total log IO by statement:

-- Top transaction log consumers


-- Adjust the time window by changing
-- rsi.start_time >= DATEADD(hour, -2, GETUTCDATE())
WITH AggregatedLogUsed
AS (SELECT q.query_hash,
SUM(count_executions * avg_cpu_time / 1000.0) AS total_cpu_millisec,
SUM(count_executions * avg_cpu_time / 1000.0) / SUM(count_executions) AS avg_cpu_millisec,
SUM(count_executions * avg_log_bytes_used) AS total_log_bytes_used,
MAX(rs.max_cpu_time / 1000.00) AS max_cpu_millisec,
MAX(max_logical_io_reads) max_logical_reads,
COUNT(DISTINCT p.plan_id) AS number_of_distinct_plans,
COUNT(DISTINCT p.query_id) AS number_of_distinct_query_ids,
SUM( CASE
WHEN rs.execution_type_desc = 'Aborted' THEN
count_executions
ELSE
0
END
) AS Aborted_Execution_Count,
SUM( CASE
WHEN rs.execution_type_desc = 'Regular' THEN
count_executions
ELSE
0
END
) AS Regular_Execution_Count,
SUM( CASE
WHEN rs.execution_type_desc = 'Exception' THEN
count_executions
ELSE
0
END
) AS Exception_Execution_Count,
SUM(count_executions) AS total_executions,
MIN(qt.query_sql_text) AS sampled_query_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi
ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE rs.execution_type_desc IN ( 'Regular', 'Aborted', 'Exception' )
AND rsi.start_time >= DATEADD(HOUR, -2, GETUTCDATE())
GROUP BY q.query_hash),
OrderedLogUsed
AS (SELECT query_hash,
total_log_bytes_used,
number_of_distinct_plans,
number_of_distinct_query_ids,
total_executions,
Aborted_Execution_Count,
Regular_Execution_Count,
Exception_Execution_Count,
sampled_query_text,
ROW_NUMBER() OVER (ORDER BY total_log_bytes_used DESC, query_hash ASC) AS RN
FROM AggregatedLogUsed)
SELECT OD.total_log_bytes_used,
OD.number_of_distinct_plans,
OD.number_of_distinct_query_ids,
OD.total_executions,
OD.Aborted_Execution_Count,
OD.Regular_Execution_Count,
OD.Exception_Execution_Count,
OD.sampled_query_text,
OD.RN
FROM OrderedLogUsed AS OD
WHERE OD.RN <= 15
ORDER BY total_log_bytes_used DESC;
GO

Identify tempdb performance issues


When identifying tempdb performance issues, the top wait type to look for is PAGELATCH_* (not
PAGEIOLATCH_* ). However, PAGELATCH_* waits do not always mean you have tempdb contention. This wait may
also mean that you have user-object data page contention due to concurrent requests targeting the same data
page. To further confirm tempdb contention, use sys.dm_exec_requests to confirm that the wait_resource value
begins with 2:x:y, where 2 is the database ID of tempdb, x is the file ID, and y is the page ID.
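For example, the following minimal sketch lists current requests whose page latch waits point at tempdb pages:

SELECT req.session_id, req.wait_type, req.wait_resource, req.wait_time
FROM sys.dm_exec_requests AS req
WHERE req.wait_type LIKE 'PAGELATCH%'
    AND req.wait_resource LIKE '2:%';
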
For tempdb contention, a common method is to reduce or rewrite application code that relies on tempdb .
Common tempdb usage areas include:
Temp tables
Table variables
Table-valued parameters
Version store usage (associated with long running transactions)
Queries that have query plans that use sorts, hash joins, and spools
Top queries that use table variables and temporary tables
Use the following query to identify top queries that use table variables and temporary tables:

SELECT plan_handle, execution_count, query_plan


INTO #tmpPlan
FROM sys.dm_exec_query_stats
CROSS APPLY sys.dm_exec_query_plan(plan_handle);
GO

WITH XMLNAMESPACES('http://schemas.microsoft.com/sqlserver/2004/07/showplan' AS sp)


SELECT plan_handle, stmt.stmt_details.value('@Database', 'varchar(max)') 'Database',
stmt.stmt_details.value('@Schema', 'varchar(max)') 'Schema', stmt.stmt_details.value('@Table',
'varchar(max)') 'table'
INTO #tmp2
FROM(SELECT CAST(query_plan AS XML) sqlplan, plan_handle FROM #tmpPlan) AS p
CROSS APPLY sqlplan.nodes('//sp:Object') AS stmt(stmt_details);
GO

SELECT t.plan_handle, [Database], [Schema], [table], execution_count


FROM(SELECT DISTINCT plan_handle, [Database], [Schema], [table]
FROM #tmp2
WHERE [table] LIKE '%@%' OR [table] LIKE '%#%') AS t
JOIN #tmpPlan AS t2 ON t.plan_handle=t2.plan_handle;

Identify long running transactions


Use the following query to identify long running transactions. Long running transactions prevent version store
cleanup.
SELECT DB_NAME(dtr.database_id) 'database_name',
sess.session_id,
atr.name AS 'tran_name',
atr.transaction_id,
transaction_type,
transaction_begin_time,
database_transaction_begin_time,
transaction_state,
is_user_transaction,
sess.open_transaction_count,
LTRIM(RTRIM(REPLACE(
REPLACE(
SUBSTRING(
SUBSTRING(
txt.text,
(req.statement_start_offset / 2) + 1,
((CASE req.statement_end_offset
WHEN -1 THEN
DATALENGTH(txt.text)
ELSE
req.statement_end_offset
END - req.statement_start_offset
) / 2
) + 1
),
1,
1000
),
CHAR(10),
' '
),
CHAR(13),
' '
)
)
) Running_stmt_text,
recenttxt.text 'MostRecentSQLText'
FROM sys.dm_tran_active_transactions AS atr
INNER JOIN sys.dm_tran_database_transactions AS dtr
ON dtr.transaction_id = atr.transaction_id
LEFT JOIN sys.dm_tran_session_transactions AS sess
ON sess.transaction_id = atr.transaction_id
LEFT JOIN sys.dm_exec_requests AS req
ON req.session_id = sess.session_id
AND req.transaction_id = sess.transaction_id
LEFT JOIN sys.dm_exec_connections AS conn
ON sess.session_id = conn.session_id
OUTER APPLY sys.dm_exec_sql_text(req.sql_handle) AS txt
OUTER APPLY sys.dm_exec_sql_text(conn.most_recent_sql_handle) AS recenttxt
WHERE atr.transaction_type != 2
AND sess.session_id != @@spid
ORDER BY start_time ASC;

Identify memory grant wait performance issues


If your top wait type is RESOURCE_SEMAPHORE and you don't have a high CPU usage issue, you may have a
memory grant waiting issue.
Determine if a RESOURCE_SEMAPHORE wait is a top wait
Use the following query to determine if a RESOURCE_SEMAPHORE wait is a top wait:
SELECT wait_type,
SUM(wait_time) AS total_wait_time_ms
FROM sys.dm_exec_requests AS req
JOIN sys.dm_exec_sessions AS sess
ON req.session_id = sess.session_id
WHERE is_user_process = 1
GROUP BY wait_type
ORDER BY SUM(wait_time) DESC;

Identify high memory-consuming statements


Use the following query to identify high memory-consuming statements:

SELECT IDENTITY(INT, 1, 1) rowId,


CAST(query_plan AS XML) query_plan,
p.query_id
INTO #tmp
FROM sys.query_store_plan AS p
JOIN sys.query_store_runtime_stats AS r
ON p.plan_id = r.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
ON r.runtime_stats_interval_id = i.runtime_stats_interval_id
WHERE start_time > '2018-10-11 14:00:00.0000000'
AND end_time < '2018-10-17 20:00:00.0000000';
GO
;WITH cte
AS (SELECT query_id,
query_plan,
m.c.value('@SerialDesiredMemory', 'INT') AS SerialDesiredMemory
FROM #tmp AS t
CROSS APPLY t.query_plan.nodes('//*:MemoryGrantInfo[@SerialDesiredMemory[. > 0]]') AS m(c) )
SELECT TOP 50
cte.query_id,
t.query_sql_text,
cte.query_plan,
CAST(SerialDesiredMemory / 1024. AS DECIMAL(10, 2)) SerialDesiredMemory_MB
FROM cte
JOIN sys.query_store_query AS q
ON cte.query_id = q.query_id
JOIN sys.query_store_query_text AS t
ON q.query_text_id = t.query_text_id
ORDER BY SerialDesiredMemory DESC;

If you encounter out of memory errors in Azure SQL Database, review sys.dm_os_out_of_memory_events.
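For example, a minimal sketch that lists recent out-of-memory events, most recent first (assuming you order by the event_time column):

SELECT *
FROM sys.dm_os_out_of_memory_events
ORDER BY event_time DESC;
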
Identify the top 10 active memory grants
Use the following query to identify the top 10 active memory grants:

SELECT TOP 10
CONVERT(VARCHAR(30), GETDATE(), 121) AS runtime,
r.session_id,
r.blocking_session_id,
r.cpu_time,
r.total_elapsed_time,
r.reads,
r.writes,
r.logical_reads,
r.row_count,
wait_time,
wait_type,
r.command,
OBJECT_NAME(txt.objectid, txt.dbid) 'Object_Name',
LTRIM(RTRIM(REPLACE(
REPLACE(
SUBSTRING(
SUBSTRING(
text,
(r.statement_start_offset / 2) + 1,
((CASE r.statement_end_offset
WHEN -1 THEN
DATALENGTH(text)
ELSE
r.statement_end_offset
END - r.statement_start_offset
) / 2
) + 1
),
1,
1000
),
CHAR(10),
' '
),
CHAR(13),
' '
)
)
) stmt_text,
mg.dop, --Degree of parallelism
mg.request_time, --Date and time when this query requested the memory grant
mg.grant_time, --NULL means memory has not been granted
mg.requested_memory_kb / 1024.0 requested_memory_mb, --Total requested amount of memory in megabytes
mg.granted_memory_kb / 1024.0 AS granted_memory_mb, --Total amount of memory actually granted in megabytes; NULL if not granted
mg.required_memory_kb / 1024.0 AS required_memory_mb, --Minimum memory required to run this query in megabytes
max_used_memory_kb / 1024.0 AS max_used_memory_mb,
mg.query_cost, --Estimated query cost
mg.timeout_sec, --Time-out in seconds before this query gives up the memory grant request
mg.resource_semaphore_id, --Non-unique ID of the resource semaphore on which this query is waiting
mg.wait_time_ms, --Wait time in milliseconds; NULL if the memory is already granted
CASE mg.is_next_candidate --Is this process the next candidate for a memory grant
WHEN 1 THEN
'Yes'
WHEN 0 THEN
'No'
ELSE
'Memory has been granted'
END AS 'Next Candidate for Memory Grant',
qp.query_plan
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_query_memory_grants AS mg
ON r.session_id = mg.session_id
AND r.request_id = mg.request_id
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS txt
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
ORDER BY mg.granted_memory_kb DESC;

Calculating database and objects sizes


The following query returns the size of your database (in megabytes):
-- Calculates the size of the database.
SELECT SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8192.) / 1024 / 1024 AS DatabaseSizeInMB
FROM sys.database_files
WHERE type_desc = 'ROWS';
GO

The following query returns the size of individual objects (in megabytes) in your database:

-- Calculates the size of individual database objects.


SELECT sys.objects.name, SUM(reserved_page_count) * 8.0 / 1024
FROM sys.dm_db_partition_stats, sys.objects
WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id
GROUP BY sys.objects.name;
GO

Monitoring connections
You can use the sys.dm_exec_connections view to retrieve information about the connections established to a
specific server and managed instance and the details of each connection. In addition, the sys.dm_exec_sessions
view is helpful when retrieving information about all active user connections and internal tasks.
The following query retrieves information on the current connection:

SELECT
c.session_id, c.net_transport, c.encrypt_option,
c.auth_scheme, s.host_name, s.program_name,
s.client_interface_name, s.login_name, s.nt_domain,
s.nt_user_name, s.original_login_name, c.connect_time,
s.login_time
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
ON c.session_id = s.session_id
WHERE c.session_id = @@SPID;

NOTE
When executing the sys.dm_exec_requests and sys.dm_exec_sessions views, if you have VIEW DATABASE STATE
permission on the database, you see all executing sessions on the database; otherwise, you see only the current session.

Monitor resource use


You can monitor Azure SQL Database resource usage using SQL Database Query Performance Insight. For Azure
SQL Database and Azure SQL Managed Instance, you can monitor using Query Store.
You can also monitor usage using these views:
Azure SQL Database: sys.dm_db_resource_stats
Azure SQL Managed Instance: sys.server_resource_stats
Both Azure SQL Database and Azure SQL Managed Instance: sys.resource_stats
sys.dm_db_resource_stats
You can use the sys.dm_db_resource_stats view in every database. The sys.dm_db_resource_stats view shows
recent resource use data relative to the service tier. Average percentages for CPU, data IO, log writes, and
memory are recorded every 15 seconds and are maintained for 1 hour.
Because this view provides a more granular look at resource use, use sys.dm_db_resource_stats first for any
current-state analysis or troubleshooting. For example, this query shows the average and maximum resource
use for the current database over the past hour:

SELECT
AVG(avg_cpu_percent) AS 'Average CPU use in percent',
MAX(avg_cpu_percent) AS 'Maximum CPU use in percent',
AVG(avg_data_io_percent) AS 'Average data IO in percent',
MAX(avg_data_io_percent) AS 'Maximum data IO in percent',
AVG(avg_log_write_percent) AS 'Average log write use in percent',
MAX(avg_log_write_percent) AS 'Maximum log write use in percent',
AVG(avg_memory_usage_percent) AS 'Average memory use in percent',
MAX(avg_memory_usage_percent) AS 'Maximum memory use in percent'
FROM sys.dm_db_resource_stats;

For other queries, see the examples in sys.dm_db_resource_stats.


sys.server_resource_stats
You can use sys.server_resource_stats to return CPU usage, IO, and storage data for an Azure SQL Managed
Instance. The data is collected and aggregated within five-minute intervals, and there is one row for every 15
seconds of reporting. The data returned includes CPU usage, storage size, IO utilization, and managed instance
SKU. Historical data is retained for approximately 14 days.

DECLARE @s datetime;
DECLARE @e datetime;
SET @s= DateAdd(d,-7,GetUTCDate());
SET @e= GETUTCDATE();
SELECT resource_name, AVG(avg_cpu_percent) AS Average_Compute_Utilization
FROM sys.server_resource_stats
WHERE start_time BETWEEN @s AND @e
GROUP BY resource_name
HAVING AVG(avg_cpu_percent) >= 80;

sys.resource_stats
The sys.resource_stats view in the master database has additional information that can help you monitor the
performance of your database at its specific service tier and compute size. The data is collected every 5 minutes
and is maintained for approximately 14 days. This view is useful for a longer-term historical analysis of how
your database uses resources.
The following graph shows the CPU resource use for a Premium database with the P2 compute size for each
hour in a week. This graph starts on a Monday, shows five work days, and then shows a weekend, when much
less happens on the application.

From the data, this database currently has a peak CPU load of just over 50 percent CPU use relative to the P2
compute size (midday on Tuesday). If CPU is the dominant factor in the application's resource profile, then you
might decide that P2 is the right compute size to guarantee that the workload always fits. If you expect an
application to grow over time, it's a good idea to have an extra resource buffer so that the application doesn't
ever reach the performance-level limit. If you increase the compute size, you can help avoid customer-visible
errors that might occur when a database doesn't have enough power to process requests effectively, especially
in latency-sensitive environments. An example is a database that supports an application that paints webpages
based on the results of database calls.
Other application types might interpret the same graph differently. For example, if an application tries to process
payroll data each day and has the same chart, this kind of "batch job" model might do fine at a P1 compute size.
The P1 compute size has 100 DTUs compared to 200 DTUs at the P2 compute size. The P1 compute size
provides half the performance of the P2 compute size. So, 50 percent of CPU use in P2 equals 100 percent CPU
use in P1. If the application does not have timeouts, it might not matter if a job takes 2 hours or 2.5 hours to
finish, if it gets done today. An application in this category probably can use a P1 compute size. You can take
advantage of the fact that there are periods of time during the day when resource use is lower, so that any "big
peak" might spill over into one of the troughs later in the day. The P1 compute size might be good for that kind
of application (and save money), as long as the jobs can finish on time each day.
The database engine exposes consumed resource information for each active database in the
sys.resource_stats view of the master database in each server. The data in the table is aggregated for 5-
minute intervals. With the Basic, Standard, and Premium service tiers, the data can take more than 5 minutes to
appear in the table, so this data is more useful for historical analysis rather than near-real-time analysis. Query
the sys.resource_stats view to see the recent history of a database and to validate whether the reservation you
chose delivered the performance you want when needed.

NOTE
On Azure SQL Database, you must be connected to the master database to query sys.resource_stats in the
following examples.

This example shows you how the data in this view is exposed:

SELECT TOP 10 *
FROM sys.resource_stats
WHERE database_name = 'resource1'
ORDER BY start_time DESC;

The next example shows you different ways that you can use the sys.resource_stats catalog view to get
information about how your database uses resources:
1. To look at the past week's resource use for the database userdb1, you can run this query:

SELECT *
FROM sys.resource_stats
WHERE database_name = 'userdb1' AND
start_time > DATEADD(day, -7, GETDATE())
ORDER BY start_time DESC;

2. To evaluate how well your workload fits the compute size, you need to drill down into each aspect of the
resource metrics: CPU, reads, writes, number of workers, and number of sessions. Here's a revised query
using sys.resource_stats to report the average and maximum values of these resource metrics:

SELECT
avg(avg_cpu_percent) AS 'Average CPU use in percent',
max(avg_cpu_percent) AS 'Maximum CPU use in percent',
avg(avg_data_io_percent) AS 'Average physical data IO use in percent',
max(avg_data_io_percent) AS 'Maximum physical data IO use in percent',
avg(avg_log_write_percent) AS 'Average log write use in percent',
max(avg_log_write_percent) AS 'Maximum log write use in percent',
avg(max_session_percent) AS 'Average % of sessions',
max(max_session_percent) AS 'Maximum % of sessions',
avg(max_worker_percent) AS 'Average % of workers',
max(max_worker_percent) AS 'Maximum % of workers'
FROM sys.resource_stats
WHERE database_name = 'userdb1' AND start_time > DATEADD(day, -7, GETDATE());

3. With this information about the average and maximum values of each resource metric, you can assess
how well your workload fits into the compute size you chose. Usually, average values from
sys.resource_stats give you a good baseline to use against the target size. It should be your primary
measurement stick. For example, you might be using the Standard service tier with the S2 compute size.
The average use percentages for CPU and IO reads and writes are below 40 percent, the average number
of workers is below 50, and the average number of sessions is below 200. Your workload might fit into
the S1 compute size. It's easy to see whether your database fits in the worker and session limits. To see
whether a database fits into a lower compute size with regard to CPU, reads, and writes, divide the DTU
number of the lower compute size by the DTU number of your current compute size, and then multiply
the result by 100:
S1 DTU / S2 DTU * 100 = 20 / 50 * 100 = 40

The result is the relative performance difference between the two compute sizes in percentage. If your
resource use doesn't exceed this amount, your workload might fit into the lower compute size. However,
you need to look at all ranges of resource use values, and determine, by percentage, how often your
database workload would fit into the lower compute size. The following query outputs the fit percentage
per resource dimension, based on the threshold of 40 percent that we calculated in this example:

SELECT
100*((COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 40 THEN 1 ELSE 0 END) * 1.0) /
COUNT(database_name)) AS 'CPU Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 40 THEN 1 ELSE 0 END) * 1.0)
/ COUNT(database_name)) AS 'Log Write Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 40 THEN 1 ELSE 0 END) * 1.0) /
COUNT(database_name)) AS 'Physical Data IO Fit Percent'
FROM sys.resource_stats
WHERE database_name = 'sample' AND start_time > DATEADD(day, -7, GETDATE());

Based on your database service tier, you can decide whether your workload fits into the lower compute
size. If your database workload objective is 99.9 percent and the preceding query returns values greater
than 99.9 percent for all three resource dimensions, your workload likely fits into the lower compute size.
Looking at the fit percentage also gives you insight into whether you should move to the next higher
compute size to meet your objective. For example, userdb1 shows the following CPU use for the past
week:

Average CPU percent: 24.5
Maximum CPU percent: 100.00
The average CPU is about a quarter of the limit of the compute size, which would fit well into the
compute size of the database. But, the maximum value shows that the database reaches the limit of the
compute size. Do you need to move to the next higher compute size? Look at how many times your
workload reaches 100 percent, and then compare it to your database workload objective.

SELECT
100*((COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 100 THEN 1 ELSE 0 END) * 1.0) /
COUNT(database_name)) AS 'CPU Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 100 THEN 1 ELSE 0 END) *
1.0) / COUNT(database_name)) AS 'Log Write Fit Percent',
100*((COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 100 THEN 1 ELSE 0 END) * 1.0)
/ COUNT(database_name)) AS 'Physical Data IO Fit Percent'
FROM sys.resource_stats
WHERE database_name = 'sample' AND start_time > DATEADD(day, -7, GETDATE());

If this query returns a value less than 99.9 percent for any of the three resource dimensions, consider
either moving to the next higher compute size or use application-tuning techniques to reduce the load on
the database.
4. This exercise also considers your projected workload increase in the future.
For elastic pools, you can monitor individual databases in the pool with the techniques described in this section.
But you can also monitor the pool as a whole. For information, see Monitor and manage an elastic pool.
Maximum concurrent requests
To see the number of concurrent requests, run this Transact-SQL query on your database:

SELECT COUNT(*) AS [Concurrent_Requests]


FROM sys.dm_exec_requests R;

To analyze the workload of a SQL Server database, modify this query to filter on the specific database you want
to analyze. For example, if you have an on-premises database named MyDatabase, this Transact-SQL query
returns the count of concurrent requests in that database:

SELECT COUNT(*) AS [Concurrent_Requests]


FROM sys.dm_exec_requests R
INNER JOIN sys.databases D ON D.database_id = R.database_id
AND D.name = 'MyDatabase';

This is just a snapshot at a single point in time. To get a better understanding of your workload and concurrent
request requirements, you'll need to collect many samples over time.
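One lightweight approach, sketched below with a hypothetical dbo.RequestSamples table, is to record the snapshot count on a schedule (for example, from a client-side scheduler or an automation job) and analyze the samples afterward:

-- Hypothetical sampling table, for illustration only
CREATE TABLE dbo.RequestSamples
(
    sample_time datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    concurrent_requests int NOT NULL
);
GO

-- Capture one sample; run this periodically
INSERT INTO dbo.RequestSamples (concurrent_requests)
SELECT COUNT(*) FROM sys.dm_exec_requests;
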
Maximum concurrent logins
You can analyze your user and application patterns to get an idea of the frequency of logins. You also can run
real-world loads in a test environment to make sure that you're not hitting this or other limits we discuss in this
article. There isn't a single query or dynamic management view (DMV) that can show you concurrent login
counts or history.
If multiple clients use the same connection string, the service authenticates each login. If 10 users
simultaneously connect to a database by using the same username and password, there would be 10 concurrent
logins. This limit applies only to the duration of the login and authentication. If the same 10 users connect to the
database sequentially, the number of concurrent logins would never be greater than 1.
NOTE
Currently, this limit does not apply to databases in elastic pools.

Maximum sessions
To see the number of current active sessions, run this Transact-SQL query on your database:

SELECT COUNT(*) AS [Sessions]


FROM sys.dm_exec_connections;

If you're analyzing a SQL Server workload, modify the query to focus on a specific database. This query helps
you determine possible session needs for the database if you are considering moving it to Azure.

SELECT COUNT(*) AS [Sessions]


FROM sys.dm_exec_connections C
INNER JOIN sys.dm_exec_sessions S ON (S.session_id = C.session_id)
INNER JOIN sys.databases D ON (D.database_id = S.database_id)
WHERE D.name = 'MyDatabase';

Again, these queries return a point-in-time count. If you collect multiple samples over time, you'll have the best
understanding of your session use.
You can get historical statistics on sessions by querying the sys.resource_stats view and reviewing the
active_session_count column.
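For example, the following minimal sketch (run while connected to the master database, with 'MyDatabase' as a placeholder) returns the recorded session counts for the past week:

SELECT start_time, end_time, active_session_count
FROM sys.resource_stats
WHERE database_name = 'MyDatabase'
    AND start_time > DATEADD(day, -7, GETDATE())
ORDER BY start_time DESC;
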

Monitoring query performance


Slow or long running queries can consume significant system resources. This section demonstrates how to use
dynamic management views to detect a few common query performance problems.
Finding top N queries
The following example returns information about the top five queries ranked by average CPU time. This example
aggregates the queries according to their query hash, so that logically equivalent queries are grouped by their
cumulative resource consumption.

SELECT TOP 5 query_stats.query_hash AS "Query Hash",


SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
MIN(query_stats.statement_text) AS "Statement Text"
FROM
(SELECT QS.*,
SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
((CASE statement_end_offset
WHEN -1 THEN DATALENGTH(ST.text)
ELSE QS.statement_end_offset END
- QS.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) as ST) as query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;

Monitoring blocked queries


Slow or long-running queries can contribute to excessive resource consumption and be the consequence of
blocked queries. The cause of the blocking can be poor application design, bad query plans, the lack of useful
indexes, and so on. You can use the sys.dm_tran_locks view to get information about the current locking activity
in database. For example code, see sys.dm_tran_locks (Transact-SQL). For more information on troubleshooting
blocking, see Understand and resolve Azure SQL blocking problems.
Monitoring deadlocks
In some cases, two or more queries may mutually block one another, resulting in a deadlock.
You can create an Extended Events session on a database in Azure SQL Database to capture deadlock events (see
the sketch below), then find related queries and their execution plans in Query Store. Learn more in Analyze and
prevent deadlocks in Azure SQL Database.
For Azure SQL Managed Instance, refer to the Deadlocks section of the Transaction locking and row versioning guide.
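For Azure SQL Database, a database-scoped Extended Events session that captures deadlock reports to a ring buffer might look like the following minimal sketch (the session name is arbitrary):

CREATE EVENT SESSION deadlock_capture ON DATABASE
ADD EVENT sqlserver.database_xml_deadlock_report
ADD TARGET package0.ring_buffer;
GO

ALTER EVENT SESSION deadlock_capture ON DATABASE STATE = START;
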
Monitoring query plans
An inefficient query plan also may increase CPU consumption. The following example uses the
sys.dm_exec_query_stats view to determine which query uses the most cumulative CPU.

SELECT
highest_cpu_queries.plan_handle,
highest_cpu_queries.total_worker_time,
q.dbid,
q.objectid,
q.number,
q.encrypted,
q.[text]
FROM
(SELECT TOP 50
qs.plan_handle,
qs.total_worker_time
FROM
sys.dm_exec_query_stats qs
ORDER BY qs.total_worker_time desc) AS highest_cpu_queries
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS q
ORDER BY highest_cpu_queries.total_worker_time DESC;

Next steps
Introduction to Azure SQL Database and Azure SQL Managed Instance
Diagnose and troubleshoot high CPU on Azure SQL Database
Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance
Understand and resolve Azure SQL Database blocking problems
Analyze and prevent deadlocks in Azure SQL Database
Configure streaming export of Azure SQL Database
and SQL Managed Instance diagnostic telemetry

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In this article, you will learn about the performance metrics and resource logs for Azure SQL Database that you
can export to one of several destinations for analysis. You will learn how to configure the streaming export of
this diagnostic telemetry through the Azure portal, PowerShell, Azure CLI, the REST API, and Azure Resource
Manager templates.
You will also learn about the destinations to which you can stream this diagnostic telemetry and how to choose
among these choices. Your destination options include:
Log Analytics and SQL Analytics
Event Hubs
Azure Storage

Diagnostic telemetry for export


Most important among the diagnostic telemetry that you can export is the Intelligent Insights (SQLInsights) log
(unrelated to Azure Monitor SQL Insights (preview)). Intelligent Insights uses built-in intelligence to continuously
monitor database usage through artificial intelligence and detect disruptive events that cause poor performance.
Once detected, a detailed analysis is performed that generates an Intelligent Insights log with an intelligent
assessment of the issue. This assessment consists of a root cause analysis of the database performance issue
and, where possible, recommendations for performance improvements. You need to configure the streaming
export of this log to view its contents.
In addition to streaming the export of the Intelligent Insights log, you can also export a variety of performance
metrics and additional database logs. The following table describes the performance metrics and resources logs
that you can configure for streaming export to one of several destinations. This diagnostic telemetry can be
configured for single databases, elastic pools and pooled databases, and managed instances and instance
databases.

Diagnostic telemetry for databases (Azure SQL Database support / Azure SQL Managed Instance support):

Basic metrics (Azure SQL Database: Yes; Azure SQL Managed Instance: No): Contains DTU/CPU percentage, DTU/CPU limit, physical data read percentage, log write percentage, Successful/Failed/Blocked by firewall connections, sessions percentage, workers percentage, storage, storage percentage, and XTP storage percentage.

Instance and App Advanced (Azure SQL Database: Yes; Azure SQL Managed Instance: No): Contains tempdb system database data and log file size and tempdb percent log file used.

QueryStoreRuntimeStatistics (Azure SQL Database: Yes; Azure SQL Managed Instance: Yes): Contains information about the query runtime statistics such as CPU usage and query duration statistics.

QueryStoreWaitStatistics (Azure SQL Database: Yes; Azure SQL Managed Instance: Yes): Contains information about the query wait statistics (what your queries waited on) such as CPU, LOG, and LOCKING.

Errors (Azure SQL Database: Yes; Azure SQL Managed Instance: Yes): Contains information about SQL errors on a database.

DatabaseWaitStatistics (Azure SQL Database: Yes; Azure SQL Managed Instance: No): Contains information about how much time a database spent waiting on different wait types.

Timeouts (Azure SQL Database: Yes; Azure SQL Managed Instance: No): Contains information about timeouts on a database.

Blocks (Azure SQL Database: Yes; Azure SQL Managed Instance: No): Contains information about blocking events on a database.

Deadlocks (Azure SQL Database: Yes; Azure SQL Managed Instance: No): Contains information about deadlock events on a database.

AutomaticTuning (Azure SQL Database: Yes; Azure SQL Managed Instance: No): Contains information about automatic tuning recommendations for a database.

SQLInsights (Azure SQL Database: Yes; Azure SQL Managed Instance: Yes): Contains Intelligent Insights into performance for a database. To learn more, see Intelligent Insights.

Workload Management (Azure SQL Database: No; Azure SQL Managed Instance: No): Available for Azure Synapse only. For more information, see Azure Synapse Analytics – Workload Management Portal Monitoring.

NOTE
Diagnostic settings cannot be configured for the system databases , such as master , msdb , model , resource and
tempdb databases.

Streaming export destinations


This diagnostic telemetry can be streamed to one of the following Azure resources for analysis.
Log Analytics workspace :
Data streamed to a Log Analytics workspace can be consumed by SQL Analytics. SQL Analytics is a cloud
only monitoring solution that provides intelligent monitoring of your databases that includes
performance reports, alerts, and mitigation recommendations. Data streamed to a Log Analytics
workspace can be analyzed with other monitoring data collected and also enables you to leverage other
Azure Monitor features such as alerts and visualizations
Azure Event Hubs :
Data streamed to Azure Event Hubs provides the following functionality:
Stream logs to third-party logging and telemetry systems: Stream all of your metrics and
resource logs to a single event hub to pipe log data to a third-party SIEM or log analytics tool.
Build a custom telemetr y and logging platform : The highly scalable publish-subscribe nature of
Azure Event Hubs allows you to flexibly ingest metrics and resource logs into a custom telemetry
platform. See Designing and Sizing a Global Scale Telemetry Platform on Azure Event Hubs for details.
View ser vice health by streaming data to Power BI : Use Event Hubs, Stream Analytics, and
Power BI to transform your diagnostics data into near real-time insights on your Azure services. See
Stream Analytics and Power BI: A real-time analytics dashboard for streaming data for details on this
solution.
Azure Storage :
Data streamed to Azure Storage enables you to archive vast amounts of diagnostic telemetry for a
fraction of the cost of the previous two streaming options.
This diagnostic telemetry streamed to one of these destinations can be used to gauge resource utilization and
query execution statistics for easier performance monitoring.

Enable and configure the streaming export of diagnostic telemetry


You can enable and manage metrics and diagnostic telemetry logging by using one of the following methods:
Azure portal
PowerShell
Azure CLI
Azure Monitor REST API
Azure Resource Manager template

NOTE
To enable audit log streaming of security telemetry, see Set up auditing for your database and auditing logs in Azure
Monitor logs and Azure Event Hubs.

Configure the streaming export of diagnostic telemetry


You can use the Diagnostics settings menu in the Azure portal to enable and configure streaming of
diagnostic telemetry. Additionally, you can use PowerShell, the Azure CLI, the REST API, and Resource Manager
templates to configure streaming of diagnostic telemetry. You can set the following destinations to stream the
diagnostic telemetry: Azure Storage, Azure Event Hubs, and Azure Monitor logs.

IMPORTANT
The streaming export of diagnostic telemetry is not enabled by default.

Select one of the following tabs for step-by-step guidance for configuring the streaming export of diagnostic
telemetry in the Azure portal and for scripts for accomplishing the same with PowerShell and the Azure CLI.

Azure portal
PowerShell
Azure CLI
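
If you prefer scripting, the Azure CLI can show which diagnostic log and metric categories a resource exposes before you create a setting. The following is a minimal sketch only; the resource group, server, and database names (myResourceGroup, myserver, mydb) are placeholders, not values from this article.

# Look up the resource ID of the database (placeholder names).
dbid=$(az sql db show --resource-group myResourceGroup --server myserver --name mydb --query id -o tsv)

# List the diagnostic log and metric categories that the database supports.
az monitor diagnostic-settings categories list --resource "$dbid" --output table

The same pattern works for elastic pools, managed instances, and instance databases; only the command that returns the resource ID changes.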

Elastic pools in Azure SQL Database


You can set up an elastic pool resource to collect the following diagnostic telemetry:

| Resource | Monitoring telemetry |
| --- | --- |
| Elastic pool | Basic metrics contains eDTU/CPU percentage, eDTU/CPU limit, physical data read percentage, log write percentage, sessions percentage, workers percentage, storage, storage percentage, storage limit, and XTP storage percentage. |

To configure streaming of diagnostic telemetry for elastic pools and pooled databases, you need to configure
each separately:
Enable streaming of diagnostic telemetry for an elastic pool
Enable streaming of diagnostic telemetry for each database in the elastic pool
The elastic pool container has its own telemetry separate from each individual pooled database's telemetry.
To enable streaming of diagnostic telemetry for an elastic pool resource, follow these steps:
1. Go to the elastic pool resource in the Azure portal.
2. Select Diagnostics settings.
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
4. Enter a setting name for your own reference.
5. Select a destination resource for the streaming diagnostics data: Archive to storage account, Stream
to an event hub, or Send to Log Analytics.
6. For Log Analytics, select Configure and create a new workspace by selecting +Create New Workspace,
or select an existing workspace.
7. Select the check box for elastic pool diagnostic telemetry: Basic metrics.
8. Select Save.
9. In addition, configure streaming of diagnostic telemetry for each database within the elastic pool you
want to monitor by following the steps described in the next section.
IMPORTANT
In addition to configuring diagnostic telemetry for an elastic pool, you also need to configure diagnostic telemetry for
each database in the elastic pool.
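
For reference, a hedged Azure CLI sketch of the same elastic pool configuration follows; the resource group, server, pool, and storage account names are placeholders, and Basic is the metric category shown in the table above.

# Resource IDs for the elastic pool and the destination storage account (placeholder names).
poolid=$(az sql elastic-pool show --resource-group myResourceGroup --server myserver --name mypool --query id -o tsv)
storageid=$(az storage account show --resource-group myResourceGroup --name mystorageacct --query id -o tsv)

# Archive the elastic pool's Basic metrics to the storage account.
az monitor diagnostic-settings create \
  --name sqlpool-diag \
  --resource "$poolid" \
  --storage-account "$storageid" \
  --metrics '[{"category": "Basic", "enabled": true}]'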

Databases in Azure SQL Database


You can set up a database resource to collect the following diagnostic telemetry:

| Resource | Monitoring telemetry |
| --- | --- |
| Single or pooled database | Basic metrics contains DTU percentage, DTU used, DTU limit, CPU percentage, physical data read percentage, log write percentage, Successful/Failed/Blocked by firewall connections, sessions percentage, workers percentage, storage, storage percentage, XTP storage percentage, and deadlocks. |

To enable streaming of diagnostic telemetry for a single or a pooled database, follow these steps:
1. Go to the Azure SQL database resource.
2. Select Diagnostics settings.
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
You can create up to three parallel connections to stream diagnostic telemetry.
4. Select Add diagnostic setting to configure parallel streaming of diagnostics data to multiple resources.
5. Enter a setting name for your own reference.
6. Select a destination resource for the streaming diagnostics data: Archive to storage account, Stream
to an event hub, or Send to Log Analytics.
7. For the standard, event-based monitoring experience, select the following check boxes for database
diagnostics log telemetry: SQLInsights, AutomaticTuning, QueryStoreRuntimeStatistics,
QueryStoreWaitStatistics, Errors, DatabaseWaitStatistics, Timeouts, Blocks, and Deadlocks.
8. For an advanced, one-minute-based monitoring experience, select the check box for Basic metrics.
9. Select Save.
10. Repeat these steps for each database you want to monitor.

TIP
Repeat these steps for each single and pooled database you want to monitor.
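
The equivalent configuration can be scripted. The following Azure CLI sketch sends the event-based log categories and Basic metrics to a Log Analytics workspace; all names are placeholders, and the category list mirrors the check boxes described above.

# Resource IDs for the database and the Log Analytics workspace (placeholder names).
dbid=$(az sql db show --resource-group myResourceGroup --server myserver --name mydb --query id -o tsv)
wsid=$(az monitor log-analytics workspace show --resource-group myResourceGroup --workspace-name myworkspace --query id -o tsv)

# Stream database resource logs and Basic metrics to the workspace.
az monitor diagnostic-settings create \
  --name sqldb-diag \
  --resource "$dbid" \
  --workspace "$wsid" \
  --logs '[{"category": "SQLInsights", "enabled": true},
           {"category": "AutomaticTuning", "enabled": true},
           {"category": "QueryStoreRuntimeStatistics", "enabled": true},
           {"category": "QueryStoreWaitStatistics", "enabled": true},
           {"category": "Errors", "enabled": true},
           {"category": "DatabaseWaitStatistics", "enabled": true},
           {"category": "Timeouts", "enabled": true},
           {"category": "Blocks", "enabled": true},
           {"category": "Deadlocks", "enabled": true}]' \
  --metrics '[{"category": "Basic", "enabled": true}]'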

Instances in Azure SQL Managed Instance


You can set up a managed instance resource to collect the following diagnostic telemetry:
| Resource | Monitoring telemetry |
| --- | --- |
| Managed instance | ResourceUsageStats contains vCores count, average CPU percentage, IO requests, bytes read/written, reserved storage space, and used storage space. |

To configure streaming of diagnostic telemetry for managed instance and instance databases, you will need to
separately configure each:
Enable streaming of diagnostic telemetry for managed instance
Enable streaming of diagnostic telemetry for each instance database
The managed instance container has its own telemetry separate from each instance database's telemetry.
To enable streaming of diagnostic telemetry for a managed instance resource, follow these steps:
1. Go to the managed instance resource in the Azure portal.
2. Select Diagnostics settings.
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
4. Enter a setting name for your own reference.
5. Select a destination resource for the streaming diagnostics data: Archive to storage account, Stream
to an event hub, or Send to Log Analytics.
6. For Log Analytics, select Configure and create a new workspace by selecting +Create New Workspace,
or use an existing workspace.
7. Select the check box for instance diagnostic telemetry: ResourceUsageStats.
8. Select Save.
9. In addition, configure streaming of diagnostic telemetry for each instance database within the managed
instance you want to monitor by following the steps described in the next section.

IMPORTANT
In addition to configuring diagnostic telemetry for a managed instance, you also need to configure diagnostic telemetry
for each instance database.
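
A hedged Azure CLI sketch of the same managed instance configuration follows; the names are placeholders, and ResourceUsageStats is the only instance-level log category listed above.

# Resource IDs for the managed instance and the workspace (placeholder names).
miid=$(az sql mi show --resource-group myResourceGroup --name mymanagedinstance --query id -o tsv)
wsid=$(az monitor log-analytics workspace show --resource-group myResourceGroup --workspace-name myworkspace --query id -o tsv)

# Stream instance-level resource usage statistics to the workspace.
az monitor diagnostic-settings create \
  --name sqlmi-diag \
  --resource "$miid" \
  --workspace "$wsid" \
  --logs '[{"category": "ResourceUsageStats", "enabled": true}]'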

Databases in Azure SQL Managed Instance


You can set up an instance database resource to collect the following diagnostic telemetry:

| Resource | Monitoring telemetry |
| --- | --- |
| Instance database | ResourceUsageStats contains vCores count, average CPU percentage, IO requests, bytes read/written, reserved storage space, and used storage space. |

To enable streaming of diagnostic telemetry for an instance database, follow these steps:
1. Go to the instance database resource within the managed instance.
2. Select Diagnostics settings.
3. Select Turn on diagnostics if no previous settings exist, or select Edit setting to edit a previous setting.
You can create up to three (3) parallel connections to stream diagnostic telemetry.
Select +Add diagnostic setting to configure parallel streaming of diagnostics data to multiple
resources.
4. Enter a setting name for your own reference.
5. Select a destination resource for the streaming diagnostics data: Archive to storage account, Stream
to an event hub, or Send to Log Analytics.
6. Select the check boxes for database diagnostic telemetry: SQLInsights, QueryStoreRuntimeStatistics,
QueryStoreWaitStatistics, and Errors.
7. Select Save.
8. Repeat these steps for each instance database you want to monitor.
TIP
Repeat these steps for each instance database you want to monitor.
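
The same configuration can be scripted. The following Azure CLI sketch streams the instance database categories listed above to an event hub; the resource names and the authorization rule path are placeholders, not fixed values.

# Resource ID for the instance database (placeholder names).
dbid=$(az sql midb show --resource-group myResourceGroup --managed-instance mymanagedinstance --name mydb --query id -o tsv)

# Stream instance database resource logs to an event hub.
az monitor diagnostic-settings create \
  --name sqlmidb-diag \
  --resource "$dbid" \
  --event-hub myeventhub \
  --event-hub-rule "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myehnamespace/authorizationRules/RootManageSharedAccessKey" \
  --logs '[{"category": "SQLInsights", "enabled": true},
           {"category": "QueryStoreRuntimeStatistics", "enabled": true},
           {"category": "QueryStoreWaitStatistics", "enabled": true},
           {"category": "Errors", "enabled": true}]'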

Stream into SQL Analytics


Azure SQL Database and Azure SQL Managed Instance metrics and resource logs that are streamed into a Log
Analytics workspace can be consumed by Azure SQL Analytics. Azure SQL Analytics is a cloud solution that
monitors the performance of single databases, elastic pools and pooled databases, and managed instances and
instance databases at scale and across multiple subscriptions. It can help you collect and visualize performance
metrics, and it has built-in intelligence for performance troubleshooting.

Installation overview
You can monitor a collection of databases and database collections with Azure SQL Analytics by performing the
following steps:
1. Create an Azure SQL Analytics solution from the Azure Marketplace.
2. Create a Log Analytics workspace in the solution.
3. Configure databases to stream diagnostic telemetry into the workspace.
You can configure the streaming export of this diagnostic telemetry by using the built-in Send to Log
Analytics option in the diagnostics settings tab in the Azure portal. You can also enable streaming into a Log
Analytics workspace by using diagnostics settings via PowerShell cmdlets, the Azure CLI, the Azure Monitor
REST API, or Resource Manager templates.
Create an Azure SQL Analytics resource
1. Search for Azure SQL Analytics in Azure Marketplace and select it.
2. Select Create on the solution's overview screen.
3. Fill in the Azure SQL Analytics form with the additional information that is required: workspace name,
subscription, resource group, location, and pricing tier.

4. Select OK to confirm, and then select Create .


Configure the resource to record metrics and resource logs
You need to separately configure diagnostic telemetry streaming for single and pooled databases, elastic pools,
managed instances, and instance databases. The easiest way to configure where a resource records its metrics is
by using the Azure portal. For detailed steps, see Configure the streaming export of diagnostic telemetry.
Use Azure SQL Analytics for monitoring and alerting
You can use SQL Analytics as a hierarchical dashboard to view your database resources.
To learn how to use Azure SQL Analytics, see Monitor by using SQL Analytics.
To learn how to set up alerts in SQL Analytics, see Creating alerts for database, elastic pools, and
managed instances.

Stream into Event Hubs


You can stream Azure SQL Database and Azure SQL Managed Instance metrics and resource logs into Event
Hubs by using the built-in Stream to an event hub option in the Azure portal. You also can enable the Service
Bus rule ID by using diagnostics settings via PowerShell cmdlets, the Azure CLI, or the Azure Monitor REST API.
Be sure that the event hub is in the same region as your database and server.
What to do with metrics and resource logs in Event Hubs
After the selected data is streamed into Event Hubs, you're one step closer to enabling advanced monitoring
scenarios. Event Hubs acts as the front door for an event pipeline. After data is collected into an event hub, it can
be transformed and stored by using a real-time analytics provider or a storage adapter. Event Hubs decouples
the production of a stream of events from the consumption of those events. In this way, event consumers can
access the events on their own schedule. For more information on Event Hubs, see:
What are Azure Event Hubs?
Get started with Event Hubs
You can use streamed metrics in Event Hubs to:
View service health by streaming hot-path data to Power BI
By using Event Hubs, Stream Analytics, and Power BI, you can easily transform your metrics and
diagnostics data into near real-time insights on your Azure services. For an overview of how to set up an
event hub, process data with Stream Analytics, and use Power BI as an output, see Stream Analytics and
Power BI.
Stream logs to third-party logging and telemetry streams
By using Event Hubs streaming, you can get your metrics and resource logs into various third-party
monitoring and log analytics solutions.
Build a custom telemetry and logging platform
Do you already have a custom-built telemetry platform, or are you considering building one? The highly
scalable publish-subscribe nature of Event Hubs allows you to flexibly ingest metrics and resource logs.
See Dan Rosanova's guide to using Event Hubs in a global-scale telemetry platform.

Stream into Azure Storage


You can store metrics and resource logs in Azure Storage by using the built-in Archive to a storage account
option in the Azure portal. You can also enable Storage by using diagnostics settings via PowerShell cmdlets, the
Azure CLI, or the Azure Monitor REST API.
Schema of metrics and resource logs in the storage account
After you set up metrics and resource logs collection, a storage container is created in the storage account you
selected when the first rows of data are available. The structure of the blobs is:
insights-{metrics|logs}-{category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/
RESOURCEGROUPS/{resource group name}/PROVIDERS/Microsoft.SQL/servers/{resource_server}/
databases/{database_name}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit numeric
day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json

Or, more simply:

insights-{metrics|logs}-{category name}/resourceId=/{resource Id}/y={four-digit numeric year}/m={two-digit
numeric month}/d={two-digit numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json

For example, a blob name for Basic metrics might be:

insights-metrics-minute/resourceId=/SUBSCRIPTIONS/s1id1234-5679-0123-4567-
890123456789/RESOURCEGROUPS/TESTRESOURCEGROUP/PROVIDERS/MICROSOFT.SQL/
servers/Server1/databases/database1/y=2016/m=08/d=22/h=18/m=00/PT1H.json

A blob name for storing data from an elastic pool looks like:

insights-{metrics|logs}-{category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/
RESOURCEGROUPS/{resource group name}/PROVIDERS/Microsoft.SQL/servers/{resource_server}/
elasticPools/{elastic_pool_name}/y={four-digit numeric year}/m={two-digit numeric month}/d={two-digit
numeric day}/h={two-digit 24-hour clock hour}/m=00/PT1H.json
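
If you need to verify that data is arriving, you can list the archived blobs with the Azure CLI. This is a sketch only; the storage account name is a placeholder, the container name follows the insights-{metrics|logs}-{category name} pattern shown above, and --auth-mode login assumes your signed-in identity has data access to the account.

# List blob names in the Basic metrics archive container (placeholder account name).
az storage blob list \
  --account-name mystorageacct \
  --container-name insights-metrics-minute \
  --auth-mode login \
  --query "[].name" \
  --output tsv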

Data retention policy and pricing


If you select Event Hubs or a Storage account, you can specify a retention policy. This policy deletes data that is
older than a selected time period. If you specify Log Analytics, the retention policy depends on the selected
pricing tier. In this case, the provided free units of data ingestion can enable free monitoring of several databases
each month. Any consumption of diagnostic telemetry in excess of the free units might incur costs.
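
When the destination is a storage account, the per-category retention has historically been expressed as a retentionPolicy element in the diagnostic setting. The following Azure CLI fragment is a sketch under that assumption; the resource IDs are placeholders, and newer platform guidance may favor storage lifecycle management policies instead.

# Archive the Errors log category and keep it for 30 days (placeholder resource IDs).
az monitor diagnostic-settings create \
  --name sqldb-archive \
  --resource "$dbid" \
  --storage-account "$storageid" \
  --logs '[{"category": "Errors", "enabled": true,
            "retentionPolicy": {"enabled": true, "days": 30}}]'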

IMPORTANT
Active databases with heavier workloads ingest more data than idle databases. For more information, see Log analytics
pricing.

If you are using Azure SQL Analytics, you can monitor your data ingestion consumption by selecting OMS
Workspace on the navigation menu of Azure SQL Analytics, and then selecting Usage and Estimated Costs .

Metrics and logs available


The monitoring telemetry available for single databases, pooled databases, elastic pools, managed instances, and
instance databases is documented in this section of the article. Collected monitoring telemetry inside SQL
Analytics can be used for your own custom analysis and application development using the Azure Monitor log
query language.
Basic metrics
Refer to the following tables for details about Basic metrics by resource.
NOTE
The Basic metrics option was formerly known as All metrics. The change was to the naming only; there was no
change to the metrics monitored. The change was made to allow for the introduction of additional metric categories in the
future.

Basic metrics for elastic pools

| Resource | Metrics |
| --- | --- |
| Elastic pool | eDTU percentage, eDTU used, eDTU limit, CPU percentage, physical data read percentage, log write percentage, sessions percentage, workers percentage, storage, storage percentage, storage limit, XTP storage percentage |

Basic metrics for single and pooled databases

| Resource | Metrics |
| --- | --- |
| Single and pooled database | DTU percentage, DTU used, DTU limit, CPU percentage, physical data read percentage, log write percentage, Successful/Failed/Blocked by firewall connections, sessions percentage, workers percentage, storage, storage percentage, XTP storage percentage, and deadlocks |

Advanced metrics
Refer to the following table for details about advanced metrics.

| Metric | Metric display name | Description |
| --- | --- | --- |
| sqlserver_process_core_percent ¹ | SQL process core percent | CPU usage percentage for the SQL process, as measured by the operating system. |
| sqlserver_process_memory_percent ¹ | SQL process memory percent | Memory usage percentage for the SQL process, as measured by the operating system. |
| tempdb_data_size ² | Tempdb Data File Size Kilobytes | Tempdb data file size in kilobytes. |
| tempdb_log_size ² | Tempdb Log File Size Kilobytes | Tempdb log file size in kilobytes. |
| tempdb_log_used_percent ² | Tempdb Percent Log Used | Tempdb percent of log file used. |

¹ This metric is available for databases using the vCore purchasing model with 2 vCores and higher, or 200 DTU
and higher for DTU-based purchasing models.
² This metric is available for databases using the vCore purchasing model with 2 vCores and higher, or 200 DTU
and higher for DTU-based purchasing models. This metric is not currently available for Synapse Analytics SQL
pools.

NOTE
Both Basic and Advanced metrics may be unavailable for databases that have been inactive for 7 days or longer.

Basic logs
Details of telemetry available for all logs are documented in the following tables. For more information, see
supported diagnostic telemetry.
Resource usage stats for managed instances

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure

TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: ResourceUsageStats

Resource Name of the resource

ResourceType Name of the resource type. Always: MANAGEDINSTANCES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the managed instance

ResourceId Resource URI

SKU_s SQL Managed Instance product SKU

virtual_core_count_s Number of vCores available

avg_cpu_percent_s Average CPU percentage

reserved_storage_mb_s Reserved storage capacity on the managed instance

storage_space_used_mb_s Used storage on the managed instance

io_requests_s IOPS count

io_bytes_read_s IOPS bytes read

io_bytes_written_s IOPS bytes written

Query Store runtime statistics

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure



TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: QueryStoreRuntimeStatistics

OperationName Name of the operation. Always: QueryStoreRuntimeStatisticsEvent

Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI

query_hash_s Query hash

query_plan_hash_s Query plan hash

statement_sql_handle_s Statement sql handle

interval_start_time_d Start datetimeoffset of the interval in number of ticks from 1900-1-1

interval_end_time_d End datetimeoffset of the interval in number of ticks from 1900-1-1

logical_io_writes_d Total number of logical IO writes

max_logical_io_writes_d Max number of logical IO writes per execution

physical_io_reads_d Total number of physical IO reads

max_physical_io_reads_d Max number of physical IO reads per execution

logical_io_reads_d Total number of logical IO reads

max_logical_io_reads_d Max number of logical IO reads per execution



execution_type_d Execution type

count_executions_d Number of executions of the query

cpu_time_d Total CPU time consumed by the query in microseconds

max_cpu_time_d Max CPU time consumed by a single execution in microseconds

dop_d Sum of degrees of parallelism

max_dop_d Max degree of parallelism used for single execution

rowcount_d Total number of rows returned

max_rowcount_d Max number of rows returned in single execution

query_max_used_memory_d Total amount of memory used in KB

max_query_max_used_memory_d Max amount of memory used by a single execution in KB

duration_d Total execution time in microseconds

max_duration_d Max execution time of a single execution

num_physical_io_reads_d Total number of physical reads

max_num_physical_io_reads_d Max number of physical reads per execution

log_bytes_used_d Total amount of log bytes used

max_log_bytes_used_d Max amount of log bytes used per execution

query_id_d ID of the query in Query Store

plan_id_d ID of the plan in Query Store

Learn more about Query Store runtime statistics data.
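
Once this category is flowing into a Log Analytics workspace, you can also query it from the command line. The following is a hedged Azure CLI sketch; the workspace GUID is a placeholder, and the command may require the log-analytics CLI extension.

# Top 10 query hashes by total CPU over the last hour, from the QueryStoreRuntimeStatistics category.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'QueryStoreRuntimeStatistics'
    | where TimeGenerated >= ago(1h)
    | summarize total_cpu_us = sum(cpu_time_d), executions = sum(count_executions_d) by query_hash_s
    | top 10 by total_cpu_us desc" \
  --output table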


Query Store wait statistics

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure

TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics



ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: QueryStoreWaitStatistics

OperationName Name of the operation. Always: QueryStoreWaitStatisticsEvent

Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI

wait_category_s Category of the wait

is_parameterizable_s Is the query parameterizable

statement_type_s Type of the statement

statement_key_hash_s Statement key hash

exec_type_d Type of execution

total_query_wait_time_ms_d Total wait time of the query on the specific wait category

max_query_wait_time_ms_d Max wait time of the query in individual execution on the specific wait category

query_param_type_d 0

query_hash_s Query hash in Query Store

query_plan_hash_s Query plan hash in Query Store

statement_sql_handle_s Statement handle in Query Store

interval_start_time_d Start datetimeoffset of the interval in number of ticks from 1900-1-1

interval_end_time_d End datetimeoffset of the interval in number of ticks from 1900-1-1

count_executions_d Count of executions of the query

query_id_d ID of the query in Query Store

plan_id_d ID of the plan in Query Store

Learn more about Query Store wait statistics data.


Errors dataset

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure

TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: Errors

OperationName Name of the operation. Always: ErrorEvent

Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI

Message Error message in plain text

user_defined_b Is the error user defined bit

error_number_d Error code

Severity Severity of the error

state_d State of the error



query_hash_s Query hash of the failed query, if available

query_plan_hash_s Query plan hash of the failed query, if available

Learn more about SQL error messages.


Database wait statistics dataset

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure

TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: DatabaseWaitStatistics

OperationName Name of the operation. Always: DatabaseWaitStatisticsEvent

Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI

wait_type_s Name of the wait type

start_utc_date_t [UTC] Measured period start time

end_utc_date_t [UTC] Measured period end time

delta_max_wait_time_ms_d Max waited time per execution

delta_signal_wait_time_ms_d Total signals wait time

delta_wait_time_ms_d Total wait time in the period



delta_waiting_tasks_count_d Number of waiting tasks

Learn more about database wait statistics.


Time-outs dataset

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure

TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: Timeouts

OperationName Name of the operation. Always: TimeoutEvent

Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI

error_state_d A numeric state value associated with the query timeout (an
attention event)

query_hash_s Query hash, if available

query_plan_hash_s Query plan hash, if available

Blockings dataset

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure



TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: Blocks

OperationName Name of the operation. Always: BlockEvent

Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI

lock_mode_s Lock mode used by the query

resource_owner_type_s Owner of the lock

blocked_process_filtered_s Blocked process report XML

duration_d Duration of the lock in microseconds

Deadlocks dataset

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure

TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: Deadlocks

OperationName Name of the operation. Always: DeadlockEvent



Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI

deadlock_xml_s Deadlock report XML
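
As with the other categories, this data can be queried once it reaches a Log Analytics workspace. A minimal sketch follows; the workspace GUID is a placeholder, and the same az monitor log-analytics query caveats as earlier apply.

# Recent deadlock events with the deadlock report XML.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'Deadlocks'
    | where TimeGenerated >= ago(1d)
    | project TimeGenerated, DatabaseName_s, deadlock_xml_s" \
  --output table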

Automatic tuning dataset

Property Description

TenantId Your tenant ID

SourceSystem Always: Azure

TimeGenerated [UTC] Time stamp when the log was recorded

Type Always: AzureDiagnostics

ResourceProvider Name of the resource provider. Always: MICROSOFT.SQL

Category Name of the category. Always: AutomaticTuning

Resource Name of the resource

ResourceType Name of the resource type. Always: SERVERS/DATABASES

SubscriptionId Subscription GUID for the database

ResourceGroup Name of the resource group for the database

LogicalServerName_s Name of the server for the database

LogicalDatabaseName_s Name of the database

ElasticPoolName_s Name of the elastic pool for the database, if any

DatabaseName_s Name of the database

ResourceId Resource URI



RecommendationHash_s Unique hash of Automatic tuning recommendation

OptionName_s Automatic tuning operation

Schema_s Database schema

Table_s Table affected

IndexName_s Index name

IndexColumns_s Column name

IncludedColumns_s Columns included

EstimatedImpact_s Estimated impact of Automatic tuning recommendation, JSON

Event_s Type of Automatic tuning event

Timestamp_t Last updated timestamp

Intelligent Insights dataset


Learn more about the Intelligent Insights log format.

Next steps
To learn how to enable logging and to understand the metrics and log categories supported by the various
Azure services, see:
Overview of metrics in Microsoft Azure
Overview of Azure platform logs
To learn about Event Hubs, read:
What is Azure Event Hubs?
Get started with Event Hubs
To learn how to set up alerts based on telemetry from log analytics see:
Creating alerts for Azure SQL Database and Azure SQL Managed Instance
Monitor Azure SQL Database with Azure Monitor
7/12/2022 • 6 minutes to read

When you have critical applications and business processes relying on Azure resources, you want to monitor
those resources for their availability, performance, and operation.
This article describes the monitoring data generated by Azure SQL Database. Azure SQL Database can be
monitored by Azure Monitor. If you are unfamiliar with the features of Azure Monitor common to all Azure
services that use it, read Monitoring Azure resources with Azure Monitor.

Monitoring overview page in Azure portal


View your Azure Monitor metrics for all connected resources by going to the Azure Monitor page directly in the
Azure portal. Or, on the Overview page of an Azure SQL database, select Metrics under the Monitoring heading to
reach Azure Monitor.

Azure Monitor SQL Insights (preview)


Some services in Azure have a focused, pre-built monitoring dashboard in the Azure portal that can be enabled
to provide a starting point for monitoring your service. These special dashboards are called "insights" and are
not enabled by default. For more on using Azure Monitor SQL Insights for all products in the Azure SQL family,
see Monitor your SQL deployments with SQL Insights (preview).
After creating a monitoring profile, you can configure your Azure Monitor SQL Insights for SQL-specific metrics
for Azure SQL Database, SQL Managed Instance, and Azure VMs running SQL Server.

NOTE
Azure SQL Analytics (preview) is an integration with Azure Monitor, where many monitoring solutions are no longer in
active development. For more monitoring options, see Monitoring and performance tuning in Azure SQL Database and
Azure SQL Managed Instance.

Monitoring data
Azure SQL Database collects the same kinds of monitoring data as other Azure resources that are described in
Monitoring data from Azure resources.
See Monitoring Azure SQL Database with Azure Monitor reference for detailed information on the metrics and
logs metrics created by Azure SQL Database.

Collection and routing


Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations
by using a diagnostic setting.
Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more
locations. Resource logs were previously referred to as diagnostic logs.
Diagnostic settings available include:
log: SQLInsights, AutomaticTuning, QueryStoreRuntimeStatistics, QueryStoreWaitStatistics, Errors,
DatabaseWaitStatistics, Timeouts, Blocks, Deadlocks
metric: All Azure Monitor metrics in the Basic and InstanceAndAppAdvanced categories
destination details: Send to Log Analytics workspace, Archive to a storage account, Stream to an event hub,
Send to partner solution
For more information on these options, see Create diagnostic settings in Azure portal.
For more information on the resource logs and diagnostics available, see Diagnostic telemetry for export.
See Create diagnostic setting to collect platform logs and metrics in Azure for the detailed process for creating a
diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify
which categories of logs to collect. The categories for Azure SQL Database are listed in Azure SQL Database
monitoring data reference.

Analyzing metrics
You can analyze metrics for Azure SQL Database with metrics from other Azure services using metrics explorer
by opening Metrics from the Azure Monitor menu. See Getting started with Azure Metrics Explorer for details
on using this tool.
For a list of the platform metrics collected for Azure SQL Database, see Monitoring Azure SQL Database data
reference metrics
For reference, you can see a list of all resource metrics supported in Azure Monitor.
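
Platform metrics can also be pulled programmatically. A minimal Azure CLI sketch follows; the names are placeholders, and cpu_percent is one of the platform metrics listed in the reference.

# Average CPU percentage for the database, at 5-minute grain (placeholder names).
dbid=$(az sql db show --resource-group myResourceGroup --server myserver --name mydb --query id -o tsv)

az monitor metrics list \
  --resource "$dbid" \
  --metric cpu_percent \
  --interval PT5M \
  --aggregation Average \
  --output table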

Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. This data is
optionally collected via Diagnostic settings.
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema
is outlined in Azure Monitor resource log schema.
The Activity log is a type of platform log in Azure that provides insight into subscription-level events. You can
view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using
Log Analytics.
For a list of the types of resource logs collected for Azure SQL Database, see Monitoring Azure SQL Database
data reference.
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see Monitoring Azure SQL
Database data reference.
Sample Kusto queries

IMPORTANT
When you select Logs from the Monitoring menu of an Azure SQL database, Log Analytics is opened with the query
scope set to the current database. This means that log queries will only include data from that resource. If you want to
run a query that includes data from other databases or data from other Azure services, select Logs from the Azure
Monitor menu. See Log query scope and time range in Azure Monitor Log Analytics for details.

Following are queries that you can use to help you monitor your database. You may see different options
available depending on your purchase model.
Example A: Log_write_percent from the past hour
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(60min)
| where MetricName in ('log_write_percent')
| parse _ResourceId with * "/microsoft.sql/servers/" Resource
| summarize Log_Maximum_last60mins = max(Maximum), Log_Minimum_last60mins = min(Minimum),
Log_Average_last60mins = avg(Average) by Resource, MetricName

Example B: SQL Server wait types from the past 15 minutes

AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(15min)
| parse _ResourceId with * "/microsoft.sql/servers/" LogicalServerName "/databases/" DatabaseName
| summarize Total_count_15mins = sum(delta_waiting_tasks_count_d) by LogicalServerName, DatabaseName,
wait_type_s

Example C: SQL Server deadlocks from the past 60 minutes

AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(60min)
| where MetricName in ('deadlock')
| parse _ResourceId with * "/microsoft.sql/servers/" Resource
| summarize Deadlock_max_60Mins = max(Maximum) by Resource, MetricName

Example D: Avg CPU usage from the past hour

AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where TimeGenerated >= ago(60min)
| where MetricName in ('cpu_percent')
| parse _ResourceId with * "/microsoft.sql/servers/" Resource
| summarize CPU_Maximum_last60mins = max(Maximum), CPU_Minimum_last60mins = min(Minimum),
CPU_Average_last60mins = avg(Average) by Resource, MetricName

Alerts
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data.
These metrics in Azure Monitor are always collected. They allow you to identify and address issues in your
system before your customers notice them. You can set alerts on metrics, logs, and the activity log.
If you are creating or running an application in Azure, Azure Monitor Application Insights may offer additional
types of alerts.
The following table lists common and recommended alert rules for Azure SQL Database. You may see different
options available depending on your purchase model.

| Signal name | Operator | Aggregation type | Threshold value | Description |
| --- | --- | --- | --- | --- |
| DTU Percentage | Greater than | Average | 80 | Whenever the average DTU percentage is greater than 80% |
| Log IO percentage | Greater than | Average | 80 | Whenever the average log IO percentage is greater than 80% |
| Deadlocks* | Greater than | Count | 1 | Whenever the count of deadlocks is greater than 1 |
| CPU percentage | Greater than | Average | 80 | Whenever the average CPU percentage is greater than 80% |

* Alerting on deadlocks may be unnecessary and noisy in some applications where deadlocks are expected and
properly handled.
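
As an illustration of the CPU percentage rule from the table, the following Azure CLI sketch creates a metric alert. The resource names are placeholders, and you would normally also attach an action group with --action.

# Alert when average cpu_percent exceeds 80 over a 5-minute window, evaluated every minute.
dbid=$(az sql db show --resource-group myResourceGroup --server myserver --name mydb --query id -o tsv)

az monitor metrics alert create \
  --name sqldb-high-cpu \
  --resource-group myResourceGroup \
  --scopes "$dbid" \
  --condition "avg cpu_percent > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Average CPU percentage greater than 80%"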

Next steps
See Monitoring Azure SQL Database data reference for a reference of the metrics, logs, and other important
values created by Azure SQL Database.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Monitoring Azure SQL Database data reference
7/12/2022 • 2 minutes to read

This article contains reference for monitoring Azure SQL Database with Azure Monitor. See Monitoring Azure
SQL Database for details on collecting and analyzing monitoring data for Azure SQL Database with Azure
Monitor SQL Insights (preview).

Metrics
For more on using Azure Monitor SQL Insights (preview) for all products in the Azure SQL family, see Monitor
your SQL deployments with SQL Insights (preview).
For data specific to Azure SQL Database, see Data for Azure SQL Database.
For a complete list of metrics, see:
Microsoft.Sql/servers/databases
Microsoft.Sql/managedInstances
Microsoft.Sql/servers/elasticPools

Resource logs
This section lists the types of resource logs you can collect for Azure SQL Database.
For reference, see a list of all resource logs category types supported in Azure Monitor.
For a reference of resource log types collected for Azure SQL Database, see Streaming export of Azure SQL
Database Diagnostic telemetry for export

Azure Monitor Logs tables


This section refers to all of the Azure Monitor Logs tables relevant to Azure SQL Database and available for
query by Log Analytics, which can be queried with KQL.
Tables for all resources types are referenced here, for example, Azure Monitor tables for SQL Databases.

| Resource type | Notes |
| --- | --- |
| AzureActivity | Entries from the Azure Activity log that provides insight into any subscription-level or management group level events that have occurred in Azure. |
| AzureDiagnostics | Azure Diagnostics reveals diagnostic data of specific resources and features for numerous Azure products including SQL databases, SQL elastic pools, and SQL managed instances. For more information, see Diagnostics metrics. |
| AzureMetrics | Metric data emitted by Azure services that measure their health and performance. Activity from Azure products including SQL databases, SQL elastic pools, and SQL managed instances. |

Activity log
The Activity log contains records of management operations performed on your Azure SQL Database resources.
All maintenance operations related to Azure SQL Database that have been implemented here may appear in the
Activity log.
For more information on the schema of Activity Log entries, see Activity Log schema.

Next steps
See Monitoring Azure SQL Database with Azure Monitor for a description of monitoring Azure SQL
Database.
See Monitoring Azure resources with Azure Monitor for details on monitoring Azure resources.
Use In-Memory OLTP to improve your application
performance in Azure SQL Database and Azure
SQL Managed Instance
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In-Memory OLTP can be used to improve the performance of transaction processing, data ingestion, and
transient data scenarios, in Premium and Business Critical tier databases without increasing the pricing tier.

NOTE
Learn how Quorum doubles key database's workload while lowering DTU by 70% with Azure SQL Database

Follow these steps to adopt In-Memory OLTP in your existing database.

Step 1: Ensure you are using a Premium or Business Critical tier
database
In-Memory OLTP is supported only in Premium and Business Critical tier databases. In-Memory is supported if
the returned result is 1 (not 0):

SELECT DatabasePropertyEx(Db_Name(), 'IsXTPSupported');

XTP stands for Extreme Transaction Processing

Step 2: Identify objects to migrate to In-Memory OLTP


SSMS includes a Transaction Performance Analysis Overview report that you can run against a database
with an active workload. The report identifies tables and stored procedures that are candidates for migration to
In-Memory OLTP.
In SSMS, to generate the report:
In the Object Explorer, right-click your database node.
Click Reports > Standard Reports > Transaction Performance Analysis Overview.
For more information, see Determining if a Table or Stored Procedure Should Be Ported to In-Memory OLTP.

Step 3: Create a comparable test database


Suppose the report indicates your database has a table that would benefit from being converted to a memory-
optimized table. We recommend that you first confirm the indication by testing.
You need a test copy of your production database. The test database should be at the same service tier level as
your production database.
To ease testing, tweak your test database as follows:
1. Connect to the test database by using SSMS.
2. To avoid needing the WITH (SNAPSHOT) option in queries, set the database option as shown in the
following T-SQL statement:

ALTER DATABASE CURRENT
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

Step 4: Migrate tables


You must create and populate a memory-optimized copy of the table you want to test. You can create it by using
either:
The handy Memory Optimization Wizard in SSMS.
Manual T-SQL.
Memory Optimization Wizard in SSMS
To use this migration option:
1. Connect to the test database with SSMS.
2. In the Object Explorer, right-click on the table, and then click Memory Optimization Advisor.
The Table Memory Optimizer Advisor wizard is displayed.
3. In the wizard, click Migration validation (or the Next button) to see whether the table has any features
that are unsupported in memory-optimized tables. For more information, see:
The memory optimization checklist in Memory Optimization Advisor.
Transact-SQL Constructs Not Supported by In-Memory OLTP.
Migrating to In-Memory OLTP.
4. If the table has no unsupported features, the advisor can perform the actual schema and data migration
for you.
Manual T -SQL
To use this migration option:
1. Connect to your test database by using SSMS (or a similar utility).
2. Obtain the complete T-SQL script for your table and its indexes.
In SSMS, right-click your table node.
Click Script Table As > CREATE To > New Query Window.
3. In the script window, add WITH (MEMORY_OPTIMIZED = ON) to the CREATE TABLE statement.
4. If there is a CLUSTERED index, change it to NONCLUSTERED.
5. Rename the existing table by using SP_RENAME.
6. Create the new memory-optimized copy of the table by running your edited CREATE TABLE script.
7. Copy the data to your memory-optimized table by using INSERT...SELECT * INTO:

INSERT INTO [<new_memory_optimized_table>]
SELECT * FROM [<old_disk_based_table>];
Step 5 (optional): Migrate stored procedures
The In-Memory feature can also modify a stored procedure for improved performance.
Considerations with natively compiled stored procedures
A natively compiled stored procedure must have the following options on its T-SQL WITH clause:
NATIVE_COMPILATION
SCHEMABINDING: meaning that tables accessed by the stored procedure cannot have their column definitions changed
in any way that would affect the stored procedure, unless you drop the stored procedure.
A native module must use one big ATOMIC block for transaction management. There is no role for an explicit
BEGIN TRANSACTION, or for ROLLBACK TRANSACTION. If your code detects a violation of a business rule, it can
terminate the atomic block with a THROW statement.
Typical CREATE PROCEDURE for natively compiled
Typically the T-SQL to create a natively compiled stored procedure is similar to the following template:

CREATE PROCEDURE schemaname.procedurename
    @param1 type1, …
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'your_language__see_sys.languages')

END;

For the TRANSACTION_ISOLATION_LEVEL, SNAPSHOT is the most common value for the natively
compiled stored procedure. However, a subset of the other values is also supported:
REPEATABLE READ
SERIALIZABLE
The LANGUAGE value must be present in the sys.languages view.
How to migrate a stored procedure
The migration steps are:
1. Obtain the CREATE PROCEDURE script to the regular interpreted stored procedure.
2. Rewrite its header to match the previous template.
3. Ascertain whether the stored procedure T-SQL code uses any features that are not supported for natively
compiled stored procedures. Implement workarounds if necessary.
For details see Migration Issues for Natively Compiled Stored Procedures.
4. Rename the old stored procedure by using SP_RENAME. Or simply DROP it.
5. Run your edited CREATE PROCEDURE T-SQL script.

Step 6: Run your workload in test


Run a workload in your test database that is similar to the workload that runs in your production database. This
should reveal the performance gain achieved by your use of the In-Memory feature for tables and stored
procedures.
Major attributes of the workload are:
Number of concurrent connections.
Read/write ratio.
To tailor and run the test workload, consider using the handy ostress.exe tool, which is illustrated in this in-
memory article.
To minimize network latency, run your test in the same Azure geographic region where the database exists.

Step 7: Post-implementation monitoring


Consider monitoring the performance effects of your In-Memory implementations in production:
Monitor In-Memory storage.
Monitoring using dynamic management views

Related links
In-Memory OLTP (In-Memory Optimization)
Introduction to Natively Compiled Stored Procedures
Memory Optimization Advisor
In-Memory sample
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In-Memory technologies in Azure SQL Database enable you to improve performance of your application, and
potentially reduce cost of your database. By using In-Memory technologies in Azure SQL Database, you can
achieve performance improvements with various workloads.
In this article you'll see two samples that illustrate the use of In-Memory OLTP, as well as columnstore indexes in
Azure SQL Database.
For more information, see:
In-Memory OLTP Overview and Usage Scenarios (includes references to customer case studies and
information to get started)
Documentation for In-Memory OLTP
Columnstore Indexes Guide
Hybrid transactional/analytical processing (HTAP), also known as real-time operational analytics

1. Install the In-Memory OLTP sample


You can create the AdventureWorksLT sample database with a few clicks in the Azure portal. Then, the steps in
this section explain how you can enrich your AdventureWorksLT database with In-Memory OLTP objects and
demonstrate performance benefits.
For a more simplistic, but more visually appealing performance demo for In-Memory OLTP, see:
Release: in-memory-oltp-demo-v1.0
Source code: in-memory-oltp-demo-source-code
Installation steps
1. In the Azure portal, create a Premium or Business Critical database on a server. Set the Source to the
AdventureWorksLT sample database. For detailed instructions, see Create your first database in Azure
SQL Database.
2. Connect to the database with SQL Server Management Studio (SSMS.exe).
3. Copy the In-Memory OLTP Transact-SQL script to your clipboard. The T-SQL script creates the necessary
In-Memory objects in the AdventureWorksLT sample database that you created in step 1.
4. Paste the T-SQL script into SSMS, and then execute the script. The MEMORY_OPTIMIZED = ON clause in the CREATE
TABLE statements is crucial. For example:

CREATE TABLE [SalesLT].[SalesOrderHeader_inmem](
[SalesOrderID] int IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED ...,
...
) WITH (MEMORY_OPTIMIZED = ON);

Error 40536
If you get error 40536 when you run the T-SQL script, run the following T-SQL script to verify whether the
database supports In-Memory:

SELECT DatabasePropertyEx(DB_Name(), 'IsXTPSupported');

A result of 0 means that In-Memory isn't supported, and 1 means that it is supported. To diagnose the problem,
ensure that the database is at the Premium service tier.
About the created memory-optimized items
Tables: The sample contains the following memory-optimized tables:
SalesLT.Product_inmem
SalesLT.SalesOrderHeader_inmem
SalesLT.SalesOrderDetail_inmem
Demo.DemoSalesOrderHeaderSeed
Demo.DemoSalesOrderDetailSeed
You can inspect memory-optimized tables through the Object Explorer in SSMS. Right-click Tables > Filter >
Filter Settings > Is Memory Optimized. The value equals 1.
Or you can query the catalog views, such as:

SELECT is_memory_optimized, name, type_desc, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1;

Natively compiled stored procedure: You can inspect SalesLT.usp_InsertSalesOrder_inmem through a
catalog view query:

SELECT uses_native_compilation, OBJECT_NAME(object_id), definition
FROM sys.sql_modules
WHERE uses_native_compilation = 1;

Run the sample OLTP workload


The only difference between the following two stored procedures is that the first procedure uses memory-
optimized versions of the tables, while the second procedure uses the regular on-disk tables:
SalesLT.usp_InsertSalesOrder_inmem
SalesLT.usp_InsertSalesOrder_ondisk
In this section, you see how to use the handy ostress.exe utility to execute the two stored procedures at
stressful levels. You can compare how long it takes for the two stress runs to finish.
When you run ostress.exe, we recommend that you pass parameter values designed for both of the following:
Run a large number of concurrent connections, by using -n100.
Have each connection loop hundreds of times, by using -r500.
However, you might want to start with much smaller values like -n10 and -r50 to ensure that everything is
working.
Script for ostress.exe
This section displays the T-SQL script that is embedded in our ostress.exe command line. The script uses items
that were created by the T-SQL script that you installed earlier.
The following script inserts a sample sales order with five line items into the following memory-optimized
tables:
SalesLT.SalesOrderHeader_inmem
SalesLT.SalesOrderDetail_inmem

DECLARE
@i int = 0,
@od SalesLT.SalesOrderDetailType_inmem,
@SalesOrderID int,
@DueDate datetime2 = sysdatetime(),
@CustomerID int = rand() * 8000,
@BillToAddressID int = rand() * 10000,
@ShipToAddressID int = rand() * 10000;

INSERT INTO @od
SELECT OrderQty, ProductID
FROM Demo.DemoSalesOrderDetailSeed
WHERE OrderID= cast((rand()*60) as int);

WHILE (@i < 20)
begin;
EXECUTE SalesLT.usp_InsertSalesOrder_inmem @SalesOrderID OUTPUT,
@DueDate, @CustomerID, @BillToAddressID, @ShipToAddressID, @od;
SET @i = @i + 1;
end

To make the _ondisk version of the preceding T-SQL script for ostress.exe, you would replace both occurrences
of the _inmem substring with _ondisk. These replacements affect the names of tables and stored procedures.
Install RML utilities and ostress

Ideally, you would plan to run ostress.exe on an Azure virtual machine (VM). You would create an Azure VM in
the same Azure geographic region where your AdventureWorksLT database resides. But you can run ostress.exe
on your laptop instead.
On the VM, or on whatever host you choose, install the Replay Markup Language (RML) utilities. The utilities
include ostress.exe.
For more information, see:
The ostress.exe discussion in Sample Database for In-Memory OLTP.
Sample Database for In-Memory OLTP.
The blog for installing ostress.exe.
Run the _inmem stress workload first
You can use an RML Cmd Prompt window to run our ostress.exe command line. The command-line parameters
direct ostress to:
Run 100 connections concurrently (-n100).
Have each connection run the T-SQL script 50 times (-r50).

ostress.exe -n100 -r50 -S<servername>.database.windows.net -U<login> -P<password> -d<database> -q -Q"DECLARE
@i int = 0, @od SalesLT.SalesOrderDetailType_inmem, @SalesOrderID int, @DueDate datetime2 = sysdatetime(),
@CustomerID int = rand() * 8000, @BillToAddressID int = rand() * 10000, @ShipToAddressID int = rand()*
10000; INSERT INTO @od SELECT OrderQty, ProductID FROM Demo.DemoSalesOrderDetailSeed WHERE OrderID=
cast((rand()*60) as int); WHILE (@i < 20) begin; EXECUTE SalesLT.usp_InsertSalesOrder_inmem @SalesOrderID
OUTPUT, @DueDate, @CustomerID, @BillToAddressID, @ShipToAddressID, @od; set @i += 1; end"
To run the preceding ostress.exe command line:
1. Reset the database data content by running the following command in SSMS, to delete all the data that
was inserted by any previous runs:

EXECUTE Demo.usp_DemoReset;

2. Copy the text of the preceding ostress.exe command line to your clipboard.
3. Replace the <placeholders> for the parameters -S -U -P -d with the correct real values.
4. Run your edited command line in an RML Cmd window.
Result is a duration
When ostress.exe finishes, it writes the run duration as its final line of output in the RML Cmd window. For
example, a shorter test run lasted about 1.5 minutes:
11/12/15 00:35:00.873 [0x000030A8] OSTRESS exiting normally, elapsed time: 00:01:31.867

Reset, edit for _ondisk, then rerun


After you have the result from the _inmem run, perform the following steps for the _ondisk run:
1. Reset the database by running the following command in SSMS to delete all the data that was inserted by
the previous run:

EXECUTE Demo.usp_DemoReset;

2. Edit the ostress.exe command line to replace all _inmem with _ondisk.
3. Rerun ostress.exe for the second time, and capture the duration result.
4. Again, reset the database (for responsibly deleting what can be a large amount of test data).
Expected comparison results
Our In-Memory tests have shown that performance improved by nine times for this simplistic workload, with
ostress running on an Azure VM in the same Azure region as the database.

2. Install the In-Memory Analytics sample


In this section, you compare the IO and statistics results when you're using a columnstore index versus a
traditional b-tree index.
For real-time analytics on an OLTP workload, it's often best to use a nonclustered columnstore index. For details,
see Columnstore Indexes Described.
Prepare the columnstore analytics test
1. Use the Azure portal to create a fresh AdventureWorksLT database from the sample.
Use that exact name.
Choose any Premium service tier.
2. Copy the sql_in-memory_analytics_sample to your clipboard.
The T-SQL script creates the necessary In-Memory objects in the AdventureWorksLT sample database
that you created in step 1.
The script creates the Dimension table and two fact tables. The fact tables are populated with 3.5
million rows each.
The script might take 15 minutes to complete.
3. Paste the T-SQL script into SSMS, and then execute the script. The COLUMNSTORE keyword in the
CREATE INDEX statement is crucial, as in:
CREATE NONCLUSTERED COLUMNSTORE INDEX ...;

4. Set AdventureWorksLT to compatibility level 130:


ALTER DATABASE AdventureworksLT SET compatibility_level = 130;

Level 130 is not directly related to In-Memory features. But level 130 generally provides faster query
performance than 120.
Key tables and columnstore indexes
dbo.FactResellerSalesXL_CCI is a table that has a clustered columnstore index, which has advanced
compression at the data level.
dbo.FactResellerSalesXL_PageCompressed is a table that has an equivalent regular clustered index, which
is compressed only at the page level.
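To see the effect of the two compression choices on storage, you can compare the space used by the two tables. The following is a minimal sketch using the sp_spaceused system procedure; the table names come from the sample script described earlier:

EXEC sp_spaceused N'dbo.FactResellerSalesXL_CCI';
EXEC sp_spaceused N'dbo.FactResellerSalesXL_PageCompressed';

The columnstore table typically reports a substantially smaller data size than the page-compressed table.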
Key queries to compare the columnstore index
There are several T-SQL query types that you can run to see performance improvements. In step 2 in the T-SQL
script, pay attention to this pair of queries. They differ only on one line:
FROM FactResellerSalesXL_PageCompressed a
FROM FactResellerSalesXL_CCI a

A clustered columnstore index is in the FactResellerSalesXL_CCI table.


The following T-SQL script excerpt prints statistics for IO and TIME for the query of each table.
/*********************************************************************
Step 2 -- Overview
-- Page Compressed BTree table v/s Columnstore table performance differences
-- Enable actual Query Plan in order to see Plan differences when Executing
*/
-- Ensure Database is in 130 compatibility mode
ALTER DATABASE AdventureworksLT SET compatibility_level = 130
GO

-- Execute a typical query that joins the Fact Table with dimension tables
-- Note this query will run on the Page Compressed table, Note down the time
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO

SELECT c.Year
,e.ProductCategoryKey
,FirstName + ' ' + LastName AS FullName
,count(SalesOrderNumber) AS NumSales
,sum(SalesAmount) AS TotalSalesAmt
,Avg(SalesAmount) AS AvgSalesAmt
,count(DISTINCT SalesOrderNumber) AS NumOrders
,count(DISTINCT a.CustomerKey) AS CountCustomers
FROM FactResellerSalesXL_PageCompressed a
INNER JOIN DimProduct b ON b.ProductKey = a.ProductKey
INNER JOIN DimCustomer d ON d.CustomerKey = a.CustomerKey
Inner JOIN DimProductSubCategory e on e.ProductSubcategoryKey = b.ProductSubcategoryKey
INNER JOIN DimDate c ON c.DateKey = a.OrderDateKey
GROUP BY e.ProductCategoryKey,c.Year,d.CustomerKey,d.FirstName,d.LastName
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
GO

-- This is the same Prior query on a table with a clustered columnstore index CCI
-- The comparison numbers are even more dramatic the larger the table is (this is an 11 million row table only)
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT c.Year
,e.ProductCategoryKey
,FirstName + ' ' + LastName AS FullName
,count(SalesOrderNumber) AS NumSales
,sum(SalesAmount) AS TotalSalesAmt
,Avg(SalesAmount) AS AvgSalesAmt
,count(DISTINCT SalesOrderNumber) AS NumOrders
,count(DISTINCT a.CustomerKey) AS CountCustomers
FROM FactResellerSalesXL_CCI a
INNER JOIN DimProduct b ON b.ProductKey = a.ProductKey
INNER JOIN DimCustomer d ON d.CustomerKey = a.CustomerKey
Inner JOIN DimProductSubCategory e on e.ProductSubcategoryKey = b.ProductSubcategoryKey
INNER JOIN DimDate c ON c.DateKey = a.OrderDateKey
GROUP BY e.ProductCategoryKey,c.Year,d.CustomerKey,d.FirstName,d.LastName
GO

SET STATISTICS IO OFF


SET STATISTICS TIME OFF
GO

In a database with the P2 pricing tier, you can expect about nine times the performance gain for this query by
using the clustered columnstore index compared with the traditional index. With P15, you can expect about 57
times the performance gain by using the columnstore index.
Next steps
Quickstart 1: In-Memory OLTP Technologies for faster T-SQL Performance
Use In-Memory OLTP in an existing Azure SQL application
Monitor In-Memory OLTP storage for In-Memory OLTP

Additional resources
Deeper information
Learn how Quorum doubles key database's workload while lowering DTU by 70% with In-Memory OLTP
in Azure SQL Database
In-Memory OLTP in Azure SQL Database Blog Post
Learn about In-Memory OLTP
Learn about columnstore indexes
Learn about real-time operational analytics
See Common Workload Patterns and Migration Considerations (which describes workload patterns
where In-Memory OLTP commonly provides significant performance gains)
Application design
In-Memory OLTP (In-Memory Optimization)
Use In-Memory OLTP in an existing Azure SQL application
Tools
Azure portal
SQL Server Management Studio (SSMS)
SQL Server Data Tools (SSDT)
Monitor In-Memory OLTP storage in Azure SQL
Database and Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


When using In-Memory OLTP, data in memory-optimized tables and table variables resides in In-Memory OLTP
storage.

Determine whether data fits within the In-Memory OLTP storage cap
Determine the storage caps of the different service tiers. Each Premium and Business Critical service tier has a
maximum In-Memory OLTP storage size.
DTU-based resource limits - single database
DTU-based resource limits - elastic pools
vCore-based resource limits - single databases
vCore-based resource limits - elastic pools
vCore-based resource limits - managed instance
Estimating memory requirements for a memory-optimized table works the same way for SQL Server as it does
in Azure SQL Database and Azure SQL Managed Instance. Take a few minutes to review Estimate memory
requirements.
Table and table variable rows, as well as indexes, count toward the max user data size. In addition, ALTER TABLE
needs enough room to create a new version of the entire table and its indexes.
Once this limit is exceeded, insert and update operations may start failing with error 41823 for single databases
in Azure SQL Database and databases in Azure SQL Managed Instance, and error 41840 for elastic pools in
Azure SQL Database. At that point you need to either delete data to reclaim memory, or upgrade the service tier
or compute size of your database.

Monitoring and alerting


You can monitor In-memory storage use as a percentage of the storage cap for your compute size in the Azure
portal:
1. On the Database blade, locate the Resource utilization box and click on Edit.
2. Select the metric In-Memory OLTP Storage percentage .
3. To add an alert, click on the Resource Utilization box to open the Metric blade, then click on Add alert.
Or use the following query to show the In-memory storage utilization:

SELECT xtp_storage_percent FROM sys.dm_db_resource_stats
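If you also want a per-table breakdown of memory use, the following sketch queries the sys.dm_db_xtp_table_memory_stats view (run it in the user database; the view reports memory consumed by memory-optimized tables and their indexes):

SELECT OBJECT_SCHEMA_NAME(object_id) AS [schema_name],
    OBJECT_NAME(object_id) AS [table_name],
    memory_used_by_table_kb,
    memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_used_by_table_kb DESC;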

Correct out-of-In-Memory OLTP storage situations - Errors 41823 and 41840

Hitting the In-Memory OLTP storage cap in your database results in INSERT, UPDATE, ALTER and CREATE
operations failing with error message 41823 (for single databases) or error 41840 (for elastic pools). Both errors
cause the active transaction to abort.
Error messages 41823 and 41840 indicate that the memory-optimized tables and table variables in the
database or pool have reached the maximum In-Memory OLTP storage size.
To resolve this error, either:
Delete data from the memory-optimized tables, potentially offloading the data to traditional, disk-based
tables; or,
Upgrade the service tier to one with enough in-memory storage for the data you need to keep in memory-
optimized tables.

NOTE
In rare cases, errors 41823 and 41840 can be transient, meaning there is enough available In-Memory OLTP storage and
retrying the operation succeeds. We therefore recommend that you both monitor the overall available In-Memory OLTP
storage and retry when you first encounter error 41823 or 41840. For more information about retry logic, see Conflict
Detection and Retry Logic with In-Memory OLTP.
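As a rough illustration only, the following T-SQL sketch retries an insert a few times when one of these errors is raised; the table and column names are hypothetical, and in most applications the retry logic would live in the client code instead:

DECLARE @retry int = 3;
WHILE (@retry > 0)
BEGIN
    BEGIN TRY
        -- Hypothetical insert into a memory-optimized table
        INSERT INTO dbo.MyMemoryOptimizedTable (Id, Payload) VALUES (1, N'example');
        SET @retry = 0; -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() NOT IN (41823, 41840) OR @retry <= 1
            THROW; -- a different error, or retries exhausted
        SET @retry -= 1;
        WAITFOR DELAY '00:00:05'; -- brief pause before retrying
    END CATCH
END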

Next steps
For monitoring guidance, see Monitoring using dynamic management views.
Quickstart: Import a BACPAC file to a database in
Azure SQL Database or Azure SQL Managed
Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


You can import a SQL Server database into Azure SQL Database or SQL Managed Instance using a BACPAC file.
You can import the data from a BACPAC file stored in Azure Blob storage (standard storage only) or from local
storage in an on-premises location. To maximize import speed by providing more and faster resources, scale
your database to a higher service tier and compute size during the import process. You can then scale down
after the import is successful.

NOTE
The imported database's compatibility level is based on the source database's compatibility level.

IMPORTANT
After importing your database, you can choose to operate the database at its current compatibility level (level 100 for the
AdventureWorks2008R2 database) or at a higher level. For more information on the implications and options for
operating a database at a specific compatibility level, see ALTER DATABASE Compatibility Level. See also ALTER DATABASE
SCOPED CONFIGURATION for information about additional database-level settings related to compatibility levels.
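For example, a minimal sketch for checking the imported database's compatibility level and, if you decide to, raising it (the database name is a placeholder):

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'myMigratedDatabase';

ALTER DATABASE [myMigratedDatabase] SET COMPATIBILITY_LEVEL = 150;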

NOTE
Import and Export using Private Link is in preview.

Using Azure portal


Watch this video to see how to import from a BACPAC file in the Azure portal or continue reading below:

The Azure portal only supports creating a single database in Azure SQL Database and only from a BACPAC file
stored in Azure Blob storage.
To migrate a database into an Azure SQL Managed Instance from a BACPAC file, use SQL Server Management
Studio or SQLPackage; importing by using the Azure portal or Azure PowerShell is not currently supported.
NOTE
Machines processing import/export requests submitted through the Azure portal or PowerShell need to store the
BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required
varies significantly among databases with the same size and can require disk space up to 3 times the size of the database.
Machines running the import/export request only have 450GB local disk space. As a result, some requests may fail with
the error There is not enough space on the disk . In this case, the workaround is to run sqlpackage.exe on a machine
with enough local disk space. We encourage using SqlPackage to import/export databases larger than 150GB to avoid
this issue.

1. To import from a BACPAC file into a new single database using the Azure portal, open the appropriate
server page and then, on the toolbar, select Import database.

2. Select the storage account and the container for the BACPAC file and then select the BACPAC file from
which to import.
3. Specify the new database size (usually the same as origin) and provide the destination SQL Server
credentials. For a list of possible values for a new database in Azure SQL Database, see Create Database.
4. Click OK.
5. To monitor an import's progress, open the database's server page, and, under Settings, select
Import/Export history. When successful, the import has a Completed status.
6. To verify the database is live on the server, select SQL databases and verify the new database is Online.

Using SqlPackage
To import a SQL Server database using the SqlPackage command-line utility, see import parameters and
properties. SQL Server Management Studio and SQL Server Data Tools for Visual Studio include SqlPackage.
You can also download the latest SqlPackage from the Microsoft download center.
For scale and performance, we recommend using SqlPackage in most production environments rather than
using the Azure portal. For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see
migrating from SQL Server to Azure SQL Database using BACPAC Files.
The DTU based provisioning model supports select database max size values for each tier. When importing a
database use one of these supported values.
The following SqlPackage command imports the AdventureWorks2008R2 database from local storage to a
logical SQL server named mynewserver20170403. It creates a new database called myMigratedDatabase
with a Premium service tier and a P6 Service Objective. Change these values as appropriate for your
environment.

sqlpackage.exe /a:import /tcs:"Data Source=<serverName>.database.windows.net;Initial Catalog=<migratedDatabase>;User Id=<userId>;Password=<password>" /sf:AdventureWorks2008R2.bacpac /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P6

IMPORTANT
To connect to Azure SQL Database from behind a corporate firewall, the firewall must have port 1433 open. To connect to
SQL Managed Instance, you must have a point-to-site connection or an express route connection.

This example shows how to import a database using SqlPackage with Active Directory Universal Authentication.
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.database.windows.net
/ua:True /tid:"apptest.onmicrosoft.com"

Using PowerShell
NOTE
A SQL Managed Instance does not currently support migrating a database into an instance database from a BACPAC file
using Azure PowerShell. To import into a SQL Managed Instance, use SQL Server Management Studio or SQLPackage.

NOTE
The machines processing import/export requests submitted through portal or PowerShell need to store the bacpac file as
well as temporary files generated by Data-Tier Application Framework (DacFX). The disk space required varies significantly
among DBs with same size and can take up to 3 times of the database size. Machines running the import/export request
only have 450GB local disk space. As result, some requests may fail with "There is not enough space on the disk" error. In
this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting
databases larger than 150GB, use SqlPackage to avoid this issue.


IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported, but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.

Use the New-AzSqlDatabaseImport cmdlet to submit an import database request to Azure. Depending on
database size, the import may take some time to complete. The DTU based provisioning model supports select
database max size values for each tier. When importing a database use one of these supported values.

$importRequest = New-AzSqlDatabaseImport -ResourceGroupName "<resourceGroupName>" `
    -ServerName "<serverName>" -DatabaseName "<databaseName>" `
    -DatabaseMaxSizeBytes "<databaseSizeInBytes>" -StorageKeyType "StorageAccessKey" `
    -StorageKey $(Get-AzStorageAccountKey `
        -ResourceGroupName "<resourceGroupName>" -StorageAccountName "<storageAccountName>").Value[0] `
    -StorageUri "https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
    -Edition "Premium" -ServiceObjectiveName "P6" `
    -AdministratorLogin "<userId>" `
    -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)

You can use the Get-AzSqlDatabaseImportExportStatus cmdlet to check the import's progress. Running the
cmdlet immediately after the request usually returns Status: InProgress . The import is complete when you see
Status: Succeeded .
$importStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $importRequest.OperationStatusLink

[Console]::Write("Importing")
while ($importStatus.Status -eq "InProgress") {
$importStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $importRequest.OperationStatusLink
[Console]::Write(".")
Start-Sleep -s 10
}

[Console]::WriteLine("")
$importStatus

TIP
For another script example, see Import a database from a BACPAC file.

Cancel the import request


Use the Database Operations - Cancel API or the PowerShell Stop-AzSqlDatabaseActivity command. Here is an
example PowerShell command:

Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId

Limitations
Importing to a database in elastic pool isn't supported. You can import data into a single database and then
move the database to an elastic pool.
Import Export Service does not work when Allow access to Azure services is set to OFF. However you can
work around the problem by manually running sqlpackage.exe from an Azure VM or performing the export
directly in your code by using the DACFx API.
Import does not support specifying a backup storage redundancy while creating a new database; the database
is created with the default geo-redundant backup storage redundancy. To work around this, first create an empty
database with the desired backup storage redundancy by using the Azure portal or PowerShell, and then import
the BACPAC into this empty database.
Storage behind a firewall is currently not supported.

Import using wizards


You can also use these wizards.
Import Data-tier Application Wizard in SQL Server Management Studio.
SQL Server Import and Export Wizard.

Next steps
To learn how to connect to and query a database in Azure SQL Database, see Quickstart: Azure SQL
Database: Use SQL Server Management Studio to connect to and query data.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For a discussion of the entire SQL Server database migration process, including performance
recommendations, see SQL Server database migration to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Export to a BACPAC file - Azure SQL Database and
Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


When you need to export a database for archiving or for moving to another platform, you can export the
database schema and data to a BACPAC file. A BACPAC file is a ZIP file with an extension of BACPAC containing
the metadata and data from the database. A BACPAC file can be stored in Azure Blob storage or in local storage
in an on-premises location and later imported back into Azure SQL Database, Azure SQL Managed Instance, or a
SQL Server instance.

Considerations
For an export to be transactionally consistent, you must ensure either that no write activity is occurring
during the export, or that you are exporting from a transactionally consistent copy of your database.
If you are exporting to blob storage, the maximum size of a BACPAC file is 200 GB. To archive a larger
BACPAC file, export to local storage.
Exporting a BACPAC file to Azure premium storage using the methods discussed in this article is not
supported.
Storage behind a firewall is currently not supported.
Immutable storage is currently not supported.
Storage file name or the input value for StorageURI should be fewer than 128 characters long and cannot
end with '.' and cannot contain special characters like a space character or '<,>,*,%,&,:,\,/,?'.
If the export operation exceeds 20 hours, it may be canceled. To increase performance during export, you
can:
Temporarily increase your compute size.
Cease all read and write activity during the export.
Use a clustered index with non-null values on all large tables. Without clustered indexes, an export
may fail if it takes longer than 6-12 hours. This is because the export service needs to complete a table
scan to try to export the entire table. A good way to determine if your tables are optimized for export is to
run DBCC SHOW_STATISTICS and make sure that the RANGE_HI_KEY is not null and its value has
good distribution; a brief sketch follows this list. For details, see DBCC SHOW_STATISTICS.
Azure SQL Managed Instance does not currently support exporting a database to a BACPAC file using the
Azure portal or Azure PowerShell. To export a managed instance into a BACPAC file, use SQL Server
Management Studio (SSMS) or SQLPackage.
For databases in the Hyperscale service tier, BACPAC export/import from Azure portal, from PowerShell
using New-AzSqlDatabaseExport or New-AzSqlDatabaseImport, from Azure CLI using az sql db export
and az sql db import, and from REST API is not supported. BACPAC import/export for smaller Hyperscale
databases (up to 200 GB) is supported using SSMS and SQLPackage version 18.4 and later. For larger
databases, BACPAC export/import may take a long time, and may fail for various reasons.
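The following is a minimal sketch of that statistics check; the table and statistics (index) names are placeholders, and the HISTOGRAM option limits the output to the histogram steps:

DBCC SHOW_STATISTICS ('dbo.MyLargeTable', 'PK_MyLargeTable') WITH HISTOGRAM;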
NOTE
BACPACs are not intended to be used for backup and restore operations. Azure automatically creates backups for every
user database. For details, see business continuity overview and SQL Database backups.

NOTE
Import and Export using Private Link is in preview.

The Azure portal


Exporting a BACPAC of a database from Azure SQL Managed Instance or from a database in the Hyperscale
service tier using the Azure portal is not currently supported. See Considerations.

NOTE
Machines processing import/export requests submitted through the Azure portal or PowerShell need to store the
BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required
varies significantly among databases with the same size and can require disk space up to three times the size of the
database. Machines running the import/export request only have 450GB local disk space. As a result, some requests may
fail with the error There is not enough space on the disk . In this case, the workaround is to run sqlpackage.exe on a
machine with enough local disk space. We encourage using SQLPackage to import/export databases larger than 150GB
to avoid this issue.

1. To export a database using the Azure portal, open the page for your database and select Export on the
toolbar.

2. Specify the BACPAC filename, select an existing Azure storage account and container for the export, and
then provide the appropriate credentials for access to the source database. A SQL Server admin login
is needed here even if you are the Azure admin, as being an Azure admin does not equate to having
admin permissions in Azure SQL Database or Azure SQL Managed Instance.
3. Select OK.
4. To monitor the progress of the export operation, open the page for the server containing the database
being exported. Under Data management, select Import/Export history.

SQLPackage utility
We recommend the use of the SQLPackage utility for scale and performance in most production environments.
You can run multiple sqlpackage.exe commands in parallel for subsets of tables to speed up import/export
operations.
To export a database in SQL Database using the SQLPackage command-line utility, see Export parameters and
properties. The SQLPackage utility ships with the latest versions of SQL Server Management Studio and SQL
Server Data Tools for Visual Studio, or you can download the latest version of SQLPackage directly from the
Microsoft download center.
This example shows how to export a database using sqlpackage.exe with Active Directory Universal
Authentication:

sqlpackage.exe /a:Export /tf:testExport.BACPAC /scs:"Data Source=apptestserver.database.windows.net;Initial Catalog=MyDB;" /ua:True /tid:"apptest.onmicrosoft.com"

SQL Server Management Studio (SSMS)


The newest versions of SQL Server Management Studio provide a wizard to export a database in Azure SQL
Database or a SQL Managed Instance database to a BACPAC file. See Export a Data-tier Application.
PowerShell
Exporting a BACPAC of a database from Azure SQL Managed Instance or from a database in the Hyperscale
service tier using PowerShell is not currently supported. See Considerations.
Use the New-AzSqlDatabaseExport cmdlet to submit an export database request to the Azure SQL Database
service. Depending on the size of your database, the export operation may take some time to complete.

$exportRequest = New-AzSqlDatabaseExport -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
    -DatabaseName $DatabaseName -StorageKeytype $StorageKeytype -StorageKey $StorageKey -StorageUri $BacpacUri `
    -AdministratorLogin $creds.UserName -AdministratorLoginPassword $creds.Password

To check the status of the export request, use the Get-AzSqlDatabaseImportExportStatus cmdlet. Running this
cmdlet immediately after the request usually returns Status: InProgress. When you see Status: Succeeded,
the export is complete.

$exportStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $exportRequest.OperationStatusLink


[Console]::Write("Exporting")
while ($exportStatus.Status -eq "InProgress")
{
Start-Sleep -s 10
$exportStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $exportRequest.OperationStatusLink
[Console]::Write(".")
}
[Console]::WriteLine("")
$exportStatus

Cancel the export request


Use the Database Operations - Cancel API or the PowerShell Stop-AzSqlDatabaseActivity command to cancel an
export request. Here is an example PowerShell command:

Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId

Next steps
To learn about long-term backup retention of a single database and pooled databases as an alternative to
exporting a database for archive purposes, see Long-term backup retention. You can use SQL Agent jobs to
schedule copy-only database backups as an alternative to long-term backup retention.
To learn about importing a BACPAC to a SQL Server database, see Import a BACPAC to a SQL Server
database.
To learn about exporting a BACPAC from a SQL Server database, see Export a Data-tier Application
To learn about using the Data Migration Service to migrate a database, see Migrate from SQL Server to
Azure SQL Database offline using DMS.
If you are exporting from SQL Server as a prelude to migration to Azure SQL Database, see Migrate a SQL
Server database to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Move resources to new region - Azure SQL
Database & Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This article teaches you a generic workflow for how to move your database or managed instance to a new
region.

Overview
There are various scenarios in which you'd want to move your existing database or managed instance from one
region to another. For example, you're expanding your business to a new region and want to optimize it for the
new customer base. Or you need to move the operations to a different region for compliance reasons. Or Azure
released a new region that provides a better proximity and improves the customer experience.
This article provides a general workflow for moving resources to a different region. The workflow consists of the
following steps:
1. Verify the prerequisites for the move.
2. Prepare to move the resources in scope.
3. Monitor the preparation process.
4. Test the move process.
5. Initiate the actual move.
6. Remove the resources from the source region.

NOTE
This article applies to migrations within the Azure public cloud or within the same sovereign cloud.

NOTE
To move Azure SQL databases and elastic pools to a different Azure region, you can also use Azure Resource Mover
(Recommended). Refer to this tutorial for detailed steps.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Move a database
Verify prerequisites
1. Create a target server for each source server.
2. Configure the firewall with the right exceptions by using PowerShell.
3. Configure the servers with the correct logins. If you're not the subscription administrator or SQL server
administrator, work with the administrator to assign the permissions that you need. For more
information, see How to manage Azure SQL Database security after disaster recovery.
4. If your databases are encrypted with transparent data encryption (TDE) and bring your own encryption
key (BYOK or Customer-Managed Key) in Azure Key Vault, ensure that the correct encryption material is
provisioned in the target regions.
The simplest way to do this is to add the encryption key from the existing key vault (that is being used
as TDE Protector on source server) to the target server and then set the key as the TDE Protector on
the target server

NOTE
A server or managed instance in one region can now be connected to a key vault in any other region.

As a best practice to ensure the target server has access to older encryption keys (required for
restoring database backups), run the Get-AzSqlServerKeyVaultKey cmdlet on the source server or Get-
AzSqlInstanceKeyVaultKey cmdlet on the source managed instance to return the list of available keys
and add those keys to the target server.
For more information and best practices on configuring customer-managed TDE on the target server,
see Azure SQL transparent data encryption with customer-managed keys in Azure Key Vault.
To move the key vault to the new region, see Move an Azure key vault across regions
5. If database-level audit is enabled, disable it and enable server-level auditing instead. After failover,
database-level auditing will require the cross-region traffic, which isn't desired or possible after the move.
6. For server-level audits, ensure that:
The storage container, Log Analytics, or event hub with the existing audit logs is moved to the target
region.
Auditing is configured on the target server. For more information, see Get started with SQL Database
auditing.
7. If your instance has a long-term retention policy (LTR), the existing LTR backups will remain associated
with the current server. Because the target server is different, you'll be able to access the older LTR
backups in the source region by using the source server, even if the server is deleted.

NOTE
This will be insufficient for moving between the sovereign cloud and a public region. Such a migration will require
moving the LTR backups to the target server, which is not currently supported.

Prepare resources
1. Create a failover group between the server of the source and the server of the target.
2. Add the databases you want to move to the failover group.
Replication of all added databases will be initiated automatically. For more information, see Using failover
groups with SQL Database.
Monitor the preparation process
You can periodically call Get-AzSqlDatabaseFailoverGroup to monitor replication of your databases from the
source to the target. The output object of Get-AzSqlDatabaseFailoverGroup includes a property for the
ReplicationState :
ReplicationState = 2 (CATCH_UP) indicates the database is synchronized and can be safely failed over.
ReplicationState = 0 (SEEDING) indicates that the database is not yet seeded, and an attempt to fail over
will fail.
Test synchronization
After ReplicationState is 2, connect to each database or subset of databases using the secondary endpoint
<fog-name>.secondary.database.windows.net and perform any query against the databases to ensure connectivity,
proper security configuration, and data replication.
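For example, the following sketch is one simple check you can run over the secondary endpoint; any read query works, and this one confirms the connection landed on a read-only geo-secondary:

SELECT DB_NAME() AS database_name,
    DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability; -- expect READ_ONLY on the secondary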
Initiate the move
1. Connect to the target server using the secondary endpoint <fog-name>.secondary.database.windows.net .
2. Use Switch-AzSqlDatabaseFailoverGroup to switch the secondary server to be the primary with
full synchronization. This operation will succeed or it will roll back.
3. Verify that the command has completed successfully by using
nslookup <fog-name>.secondary.database.windows.net to ascertain that the DNS CNAME entry points to the
target region IP address. If the switch command fails, the CNAME won't be updated.
Remove the source databases
Once the move completes, remove the resources in the source region to avoid unnecessary charges.
1. Delete the failover group using Remove-AzSqlDatabaseFailoverGroup.
2. Delete each source database using Remove-AzSqlDatabase for each of the databases on the source server.
This will automatically terminate geo-replication links.
3. Delete the source server using Remove-AzSqlServer.
4. Remove the key vault, audit storage containers, event hub, Azure Active Directory (Azure AD) instance, and
other dependent resources to stop being billed for them.

Move elastic pools


Verify prerequisites
1. Create a target server for each source server.
2. Configure the firewall with the right exceptions using PowerShell.
3. Configure the servers with the correct logins. If you're not the subscription administrator or server
administrator, work with the administrator to assign the permissions that you need. For more
information, see How to manage Azure SQL Database security after disaster recovery.
4. If your databases are encrypted with transparent data encryption and use your own encryption key in
Azure Key Vault, ensure that the correct encryption material is provisioned in the target region.
5. Create a target elastic pool for each source elastic pool, making sure the pool is created in the same
service tier, with the same name and the same size.
6. If a database-level audit is enabled, disable it and enable server-level auditing instead. After failover,
database-level auditing will require cross-region traffic, which isn't desired or possible after the move.
7. For server-level audits, ensure that:
The storage container, Log Analytics, or event hub with the existing audit logs is moved to the target
region.
Audit configuration is configured at the target server. For more information, see SQL Database
auditing.
8. If your instance has a long-term retention policy (LTR), the existing LTR backups will remain associated
with the current server. Because the target server is different, you'll be able to access the older LTR
backups in the source region using the source server, even if the server is deleted.

NOTE
This will be insufficient for moving between the sovereign cloud and a public region. Such a migration will require
moving the LTR backups to the target server, which is not currently supported.

Prepare to move
1. Create a separate failover group between each elastic pool on the source server and its counterpart
elastic pool on the target server.
2. Add all the databases in the pool to the failover group.
Replication of the added databases will be initiated automatically. For more information, see Using
failover groups with SQL Database.

NOTE
While it is possible to create a failover group that includes multiple elastic pools, we strongly recommend that you
create a separate failover group for each pool. If you have a large number of databases across multiple elastic
pools that you need to move, you can run the preparation steps in parallel and then initiate the move step in
parallel. This process will scale better and will take less time compared to having multiple elastic pools in the same
failover group.

Monitor the preparation process


You can periodically call Get-AzSqlDatabaseFailoverGroup to monitor replication of your databases from the
source to the target. The output object of Get-AzSqlDatabaseFailoverGroup includes a property for the
ReplicationState :
ReplicationState = 2 (CATCH_UP) indicates the database is synchronized and can be safely failed over.
ReplicationState = 0 (SEEDING) indicates that the database is not yet seeded, and an attempt to fail over
will fail.
Test synchronization
Once ReplicationState is 2, connect to each database or subset of databases using the secondary endpoint
<fog-name>.secondary.database.windows.net and perform any query against the databases to ensure connectivity,
proper security configuration, and data replication.
Initiate the move
1. Connect to the target server using the secondary endpoint <fog-name>.secondary.database.windows.net .
2. Use Switch-AzSqlDatabaseFailoverGroup to switch the secondary server to be the primary with
full synchronization. This operation will either succeed, or it will roll back.
3. Verify that the command has completed successfully by using
nslookup <fog-name>.secondary.database.windows.net to ascertain that the DNS CNAME entry points to the
target region IP address. If the switch command fails, the CNAME won't be updated.
Remove the source elastic pools
Once the move completes, remove the resources in the source region to avoid unnecessary charges.
1. Delete the failover group using Remove-AzSqlDatabaseFailoverGroup.
2. Delete each source elastic pool on the source server using Remove-AzSqlElasticPool.
3. Delete the source server using Remove-AzSqlServer.
4. Remove the key vault, audit storage containers, event hub, Azure AD instance, and other dependent
resources to stop being billed for them.

Move a managed instance


Verify prerequisites
1. For each source managed instance, create a target instance of SQL Managed Instance of the same size in the
target region.
2. Configure the network for a managed instance. For more information, see network configuration.
3. Configure the target master database with the correct logins. If you're not the subscription or SQL Managed
Instance administrator, work with the administrator to assign the permissions that you need.
4. If your databases are encrypted with transparent data encryption and use your own encryption key in Azure
Key Vault, ensure that the Azure Key Vault with identical encryption keys exists in both source and target
regions. For more information, see Transparent data encryption with customer-managed keys in Azure Key
Vault.
5. If audit is enabled for the managed instance, ensure that:
The storage container or event hub with the existing logs is moved to the target region.
Audit is configured on the target instance. For more information, see Auditing with SQL Managed
Instance.
6. If your instance has a long-term retention policy (LTR), the existing LTR backups will remain associated with
the current instance. Because the target instance is different, you'll be able to access the older LTR backups in
the source region using the source instance, even if the instance is deleted.

NOTE
This will be insufficient for moving between the sovereign cloud and a public region. Such a migration will require moving
the LTR backups to the target instance, which is not currently supported.

Prepare resources
Create a failover group between each source managed instance and the corresponding target instance of SQL
Managed Instance.
Replication of all databases on each instance will be initiated automatically. For more information, see Auto-
failover groups.
Monitor the preparation process
You can periodically call Get-AzSqlDatabaseFailoverGroup to monitor replication of your databases from the
source to the target. The output object of Get-AzSqlDatabaseFailoverGroup includes a property for the
ReplicationState :
ReplicationState = 2 (CATCH_UP) indicates the database is synchronized and can be safely failed over.
ReplicationState = 0 (SEEDING) indicates that the database isn't yet seeded, and an attempt to fail over will
fail.
Test synchronization
Once ReplicationState is 2, connect to each database or subset of databases using the secondary endpoint
<fog-name>.secondary.database.windows.net and perform any query against the databases to ensure connectivity,
proper security configuration, and data replication.
Initiate the move
1. Connect to the target managed instance by using the secondary endpoint
<fog-name>.secondary.database.windows.net .
2. Use Switch-AzSqlDatabaseFailoverGroup to switch the secondary managed instance to be the primary with
full synchronization. This operation will succeed, or it will roll back.
3. Verify that the command has completed successfully by using
nslookup <fog-name>.secondary.database.windows.net to ascertain that the DNS CNAME entry points to the
target region IP address. If the switch command fails, the CNAME won't be updated.
Remove the source managed instances
Once the move finishes, remove the resources in the source region to avoid unnecessary charges.
1. Delete the failover group using Remove-AzSqlDatabaseFailoverGroup. This will drop the failover group
configuration and terminate geo-replication links between the two instances.
2. Delete the source managed instance using Remove-AzSqlInstance.
3. Remove any additional resources in the resource group, such as the virtual cluster, virtual network, and
security group.

Next steps
Manage your database after it has been migrated.
Application development overview - SQL Database
& SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This article walks through the basic considerations that a developer should be aware of when writing code to
connect to your database in Azure. This article applies to Azure SQL Database, and Azure SQL Managed
Instance.

Language and platform


You can use various programming languages and platforms to connect and query Azure SQL Database. You can
find sample applications that you can use to connect to the database.
You can leverage open-source tools like cheetah, sql-cli, VS Code. Additionally, Azure SQL Database works with
Microsoft tools like Visual Studio and SQL Server Management Studio. You can also use the Azure portal,
PowerShell, and REST APIs to gain additional productivity.

Authentication
Access to Azure SQL Database is protected with logins and firewalls. Azure SQL Database supports both SQL
Server and Azure Active Directory authentication users and logins. Azure Active Directory logins are available
only in SQL Managed Instance.
Learn more about managing database access and login.

Connections
In your client connection logic, override the default timeout to be 30 seconds. The default of 15 seconds is too
short for connections that depend on the internet.
If you are using a connection pool, be sure to close the connection the instant your program is not actively using
it, and is not preparing to reuse it.
Avoid long-running transactions because any infrastructure or connection failure might roll back the
transaction. If possible, split the transaction into multiple smaller transactions and use batching to improve
performance.

Resiliency
Azure SQL Database is a cloud service where you might expect transient errors that happen in the underlying
infrastructure or in the communication between cloud entities. Although Azure SQL Database is resilient to
transient infrastructure failures, these failures might affect your connectivity.
while connecting to SQL Database, your code should retry the call. We recommend that retry logic use backoff
logic, so that it does not overwhelm the service with multiple clients retrying simultaneously. Retry logic
depends on the error messages for SQL Database client programs.
For more information about how to prepare for planned maintenance events on your Azure SQL Database, see
planning for Azure maintenance events in Azure SQL Database.
Network considerations
On the computer that hosts your client program, ensure the firewall allows outgoing TCP communication on
port 1433. More information: Configure an Azure SQL Database firewall.
If your client program connects to SQL Database while your client runs on an Azure virtual machine (VM),
you must open certain port ranges on the VM. More information: Ports beyond 1433 for ADO.NET 4.5 and
SQL Database.
Client connections to Azure SQL Database sometimes bypass the proxy and interact directly with the
database. Ports other than 1433 become important. For more information, see Azure SQL Database connectivity
architecture and Ports beyond 1433 for ADO.NET 4.5 and SQL Database.
For networking configuration for an instance of SQL Managed Instance, see network configuration for SQL
Managed Instance.

Next steps
Explore all the capabilities of SQL Database and SQL Managed Instance.
To get started, see the guides for Azure SQL Database and Azure SQL Managed Instances.
Getting started with JSON features in Azure SQL
Database and Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and Azure SQL Managed Instance let you parse and query data represented in JavaScript
Object Notation (JSON) format, and export your relational data as JSON text. The following JSON scenarios are
available:
Formatting relational data in JSON format using FOR JSON clause.
Working with JSON data
Querying JSON data using JSON scalar functions.
Transforming JSON into tabular format using OPENJSON function.

Formatting relational data in JSON format


If you have a web service that takes data from the database layer and provides a response in JSON format, or
client-side JavaScript frameworks or libraries that accept data formatted as JSON, you can format your database
content as JSON directly in a SQL query. You no longer have to write application code that formats results from
Azure SQL Database or Azure SQL Managed Instance as JSON, or include some JSON serialization library to
convert tabular query results and then serialize objects to JSON format. Instead, you can use the FOR JSON
clause to format SQL query results as JSON and use it directly in your application.
In the following example, rows from the Sales.Customer table are formatted as JSON by using the FOR JSON
clause:

select CustomerName, PhoneNumber, FaxNumber
from Sales.Customers
FOR JSON PATH

The FOR JSON PATH clause formats the results of the query as JSON text. Column names are used as keys, while
the cell values are generated as JSON values:

[
{"CustomerName":"Eric Torres","PhoneNumber":"(307) 555-0100","FaxNumber":"(307) 555-0101"},
{"CustomerName":"Cosmina Vlad","PhoneNumber":"(505) 555-0100","FaxNumber":"(505) 555-0101"},
{"CustomerName":"Bala Dixit","PhoneNumber":"(209) 555-0100","FaxNumber":"(209) 555-0101"}
]

The result set is formatted as a JSON array where each row is formatted as a separate JSON object.
PATH indicates that you can customize the output format of your JSON result by using dot notation in column
aliases. The following query changes the name of the "CustomerName" key in the output JSON format, and puts
phone and fax numbers in the "Contact" sub-object:
select CustomerName as Name, PhoneNumber as [Contact.Phone], FaxNumber as [Contact.Fax]
from Sales.Customers
where CustomerID = 931
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER

The output of this query looks like this:

{
"Name":"Nada Jovanovic",
"Contact":{
"Phone":"(215) 555-0100",
"Fax":"(215) 555-0101"
}
}

In this example, we returned a single JSON object instead of an array by specifying the
WITHOUT_ARRAY_WRAPPER option. You can use this option if you know that you are returning a single object
as a result of query.
The main value of the FOR JSON clause is that it lets you return complex hierarchical data from your database
formatted as nested JSON objects or arrays. The following example shows how to include the rows from the
Orders table that belong to the Customer as a nested array of Orders :

select CustomerName as Name, PhoneNumber as Phone, FaxNumber as Fax,
    Orders.OrderID, Orders.OrderDate, Orders.ExpectedDeliveryDate
from Sales.Customers Customer
join Sales.Orders Orders
on Customer.CustomerID = Orders.CustomerID
where Customer.CustomerID = 931
FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER

Instead of sending separate queries to get Customer data and then to fetch a list of related Orders, you can get
all the necessary data with a single query, as shown in the following sample output:

{
"Name":"Nada Jovanovic",
"Phone":"(215) 555-0100",
"Fax":"(215) 555-0101",
"Orders":[
{"OrderID":382,"OrderDate":"2013-01-07","ExpectedDeliveryDate":"2013-01-08"},
{"OrderID":395,"OrderDate":"2013-01-07","ExpectedDeliveryDate":"2013-01-08"},
{"OrderID":1657,"OrderDate":"2013-01-31","ExpectedDeliveryDate":"2013-02-01"}
]
}

Working with JSON data


If you don't have strictly structured data, if you have complex sub-objects, arrays, or hierarchical data, or if your
data structures evolve over time, the JSON format can help you to represent any complex data structure.
JSON is a textual format that can be used like any other string type in Azure SQL Database and Azure SQL
Managed Instance. You can send or store JSON data as a standard NVARCHAR:
CREATE TABLE Products (
Id int identity primary key,
Title nvarchar(200),
Data nvarchar(max)
)
go
CREATE PROCEDURE InsertProduct(@title nvarchar(200), @json nvarchar(max))
AS BEGIN
insert into Products(Title, Data)
values(@title, @json)
END

The JSON data used in this example is represented by using the NVARCHAR(MAX) type. JSON can be inserted
into this table or provided as an argument of the stored procedure using standard Transact-SQL syntax as
shown in the following example:

EXEC InsertProduct 'Toy car', '{"Price":50,"Color":"White","tags":["toy","children","games"]}'

Any client-side language or library that works with string data in Azure SQL Database and Azure SQL Managed
Instance will also work with JSON data. JSON can be stored in any table that supports the NVARCHAR type,
such as a Memory-optimized table or a System-versioned table. JSON does not introduce any constraint either
in the client-side code or in the database layer.

Querying JSON data


If you have data formatted as JSON stored in Azure SQL tables, JSON functions let you use this data in any SQL
query.
JSON functions that are available in Azure SQL Database and Azure SQL Managed Instance let you treat data
formatted as JSON as any other SQL data type. You can easily extract values from the JSON text, and use JSON
data in any query:

select Id, Title, JSON_VALUE(Data, '$.Color'), JSON_QUERY(Data, '$.tags')
from Products
where JSON_VALUE(Data, '$.Color') = 'White'

update Products
set Data = JSON_MODIFY(Data, '$.Price', 60)
where Id = 1

The JSON_VALUE function extracts a value from JSON text stored in the Data column. This function uses a
JavaScript-like path to reference a value in JSON text to extract. The extracted value can be used in any part of
SQL query.
The JSON_QUERY function is similar to JSON_VALUE. Unlike JSON_VALUE, this function extracts complex sub-
object such as arrays or objects that are placed in JSON text.
The JSON_MODIFY function lets you specify the path of the value in the JSON text that should be updated, as
well as a new value that will overwrite the old one. This way you can easily update JSON text without reparsing
the entire structure.
Since JSON is stored as standard text, there are no guarantees that the values stored in text columns are
properly formatted. You can verify that the text stored in a JSON column is properly formatted by using standard
Azure SQL Database check constraints and the ISJSON function:
ALTER TABLE Products
ADD CONSTRAINT [Data should be formatted as JSON]
CHECK (ISJSON(Data) > 0)

If the input text is properly formatted JSON, the ISJSON function returns the value 1. On every insert or update
of the JSON column, this constraint verifies that the new text value is not malformed JSON.
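For example, before adding the constraint to a table that already contains data, you might check for rows with malformed values; a minimal sketch against the Products table used above:

SELECT Id, Title
FROM Products
WHERE ISJSON(Data) = 0;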

Transforming JSON into tabular format


Azure SQL Database and Azure SQL Managed Instance also let you transform JSON collections into tabular
format and load or query JSON data.
OPENJSON is a table-value function that parses JSON text, locates an array of JSON objects, iterates through the
elements of the array, and returns one row in the output result for each element of the array.
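For example, the following sketch (the @orders document is hypothetical sample data) opens the array at the $.Orders path and returns one row per array element:

DECLARE @orders nvarchar(max) = N'{"Orders":[
    {"Number":"SO1001","Date":"2022-07-01","Customer":"Contoso","Quantity":3},
    {"Number":"SO1002","Date":"2022-07-02","Customer":"Fabrikam","Quantity":7}]}';

SELECT Number, Date, Customer, Quantity
FROM OPENJSON(@orders, '$.Orders')
WITH (
    Number varchar(200),
    Date datetime,
    Customer varchar(200),
    Quantity int
);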

In the example above, we can specify where to locate the JSON array that should be opened (in the $.Orders
path), what columns should be returned as result, and where to find the JSON values that will be returned as
cells.
We can transform a JSON array in the @orders variable into a set of rows, analyze this result set, or insert rows
into a standard table:

CREATE PROCEDURE InsertOrders(@orders nvarchar(max))
AS BEGIN
    insert into Orders(Number, Date, Customer, Quantity)
    select Number, Date, Customer, Quantity
    FROM OPENJSON (@orders)
    WITH (
        Number varchar(200),
        Date datetime,
        Customer varchar(200),
        Quantity int
    )
END

The collection of orders formatted as a JSON array and provided as a parameter to the stored procedure can be
parsed and inserted into the Orders table.
Accelerate real-time big data analytics using the
Spark connector

APPLIES TO: Azure SQL Database Azure SQL Managed Instance

NOTE
As of Sep 2020, this connector is not actively maintained. However, Apache Spark Connector for SQL Server and Azure
SQL is now available, with support for Python and R bindings, an easier-to-use interface to bulk insert data, and many
other improvements. We strongly encourage you to evaluate and use the new connector instead of this one. The
information about the old connector (this page) is only retained for archival purposes.

The Spark connector enables databases in Azure SQL Database, Azure SQL Managed Instance, and SQL Server
to act as the input data source or output data sink for Spark jobs. It allows you to utilize real-time transactional
data in big data analytics and persist results for ad hoc queries or reporting. Compared to the built-in JDBC
connector, this connector provides the ability to bulk insert data into your database. It can outperform row-by-
row insertion with 10x to 20x faster performance. The Spark connector supports Azure Active Directory (Azure
AD) authentication to connect to Azure SQL Database and Azure SQL Managed Instance, allowing you to
connect your database from Azure Databricks using your Azure AD account. It provides similar interfaces with
the built-in JDBC connector. It is easy to migrate your existing Spark jobs to use this new connector.

Download and build a Spark connector


The GitHub repo for the old connector previously linked to from this page is not actively maintained. Instead, we
strongly encourage you to evaluate and use the new connector.
Official supported versions

COMPONENT                               VERSION
Apache Spark                            2.0.2 or later
Scala                                   2.10 or later
Microsoft JDBC Driver for SQL Server    6.2 or later
Microsoft SQL Server                    SQL Server 2008 or later
Azure SQL Database                      Supported
Azure SQL Managed Instance              Supported

The Spark connector utilizes the Microsoft JDBC Driver for SQL Server to move data between Spark worker
nodes and databases:
The dataflow is as follows:
1. The Spark master node connects to databases in SQL Database or SQL Server and loads data from a specific
table or using a specific SQL query.
2. The Spark master node distributes data to worker nodes for transformation.
3. The Spark worker nodes connect to databases in SQL Database or SQL Server and write data to the
database. Users can choose to use row-by-row insertion or bulk insert.
The following diagram illustrates the data flow.

Build the Spark connector


Currently, the connector project uses maven. To build the connector without dependencies, you can run:
mvn clean package
Download the latest versions of the JAR from the release folder
Include the SQL Database Spark JAR

Connect and read data using the Spark connector


You can connect to databases in SQL Database and SQL Server from a Spark job to read or write data. You can
also run a DML or DDL query in databases in SQL Database and SQL Server.
Read data from Azure SQL and SQL Server

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(


"url" -> "mysqlserver.database.windows.net",
"databaseName" -> "MyDatabase",
"dbTable" -> "dbo.Clients",
"user" -> "username",
"password" -> "*********",
"connectTimeout" -> "5", //seconds
"queryTimeout" -> "5" //seconds
))

val collection = sqlContext.read.sqlDB(config)


collection.show()

Read data from Azure SQL and SQL Server with specified SQL query
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(


"url" -> "mysqlserver.database.windows.net",
"databaseName" -> "MyDatabase",
"queryCustom" -> "SELECT TOP 100 * FROM dbo.Clients WHERE PostalCode = 98074" //Sql query
"user" -> "username",
"password" -> "*********",
))

//Read all data in table dbo.Clients


val collection = sqlContext.read.sqlDB(config)
collection.show()

Write data to Azure SQL and SQL Server

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Acquire a DataFrame collection (val collection)

val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients",
  "user"         -> "username",
  "password"     -> "*********"
))

import org.apache.spark.sql.SaveMode
collection.write.mode(SaveMode.Append).sqlDB(config)

Run DML or DDL query in Azure SQL and SQL Server

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.query._

val query = """
  |UPDATE Customers
  |SET ContactName = 'Alfred Schmidt', City = 'Frankfurt'
  |WHERE CustomerID = 1;
""".stripMargin

val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user"         -> "username",
  "password"     -> "*********",
  "queryCustom"  -> query
))

sqlContext.sqlDBQuery(config)

Connect from Spark using Azure AD authentication


You can connect to Azure SQL Database and SQL Managed Instance using Azure AD authentication. Use Azure
AD authentication to centrally manage identities of database users and as an alternative to SQL Server
authentication.
Connecting using ActiveDirectoryPassword Authentication Mode
Setup requirement
If you are using the ActiveDirectoryPassword authentication mode, you need to download
azure-activedirectory-library-for-java and its dependencies, and include them in the Java build path.

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"            -> "mysqlserver.database.windows.net",
  "databaseName"   -> "MyDatabase",
  "user"           -> "username",
  "password"       -> "*********",
  "authentication" -> "ActiveDirectoryPassword",
  "encrypt"        -> "true"
))

val collection = sqlContext.read.sqlDB(config)
collection.show()

Connecting using an access token


Setup requirement
If you are using the access token-based authentication mode, you need to download
azure-activedirectory-library-for-java and its dependencies, and include them in the Java build path.
See Use Azure Active Directory Authentication to learn how to get an access token to your database in Azure
SQL Database or Azure SQL Managed Instance.

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"                   -> "mysqlserver.database.windows.net",
  "databaseName"          -> "MyDatabase",
  "accessToken"           -> "access_token",
  "hostNameInCertificate" -> "*.database.windows.net",
  "encrypt"               -> "true"
))

val collection = sqlContext.read.sqlDB(config)
collection.show()

Write data using bulk insert


The traditional JDBC connector writes data into your database using row-by-row insertion. You can use the Spark
connector to write data to Azure SQL and SQL Server using bulk insert. Bulk insert significantly improves write
performance when loading large data sets or loading data into tables where a columnstore index is used.
import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

/**
  Add column metadata.
  If not specified, metadata is automatically added
  from the destination table, which may impact performance.
*/
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)

val bulkCopyConfig = Config(Map(
  "url"               -> "mysqlserver.database.windows.net",
  "databaseName"      -> "MyDatabase",
  "user"              -> "username",
  "password"          -> "*********",
  "dbTable"           -> "dbo.Clients",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))

df.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)
//df.bulkCopyToSqlDB(bulkCopyConfig) if no metadata is specified.

Next steps
If you haven't already, download the Spark connector from azure-sqldb-spark GitHub repository and explore the
additional resources in the repo:
Sample Azure Databricks notebooks
Sample scripts (Scala)
You might also want to review the Apache Spark SQL, DataFrames, and Datasets Guide and the Azure Databricks
documentation.
Use Java and JDBC with Azure SQL Database

This topic demonstrates creating a sample application that uses Java and JDBC to store and retrieve information
in Azure SQL Database.
JDBC is the standard Java API to connect to traditional relational databases.

Prerequisites
An Azure account. If you don't have one, get a free trial.
Azure Cloud Shell or Azure CLI. We recommend Azure Cloud Shell so you'll be logged in automatically and
have access to all the tools you'll need.
A supported Java Development Kit, version 8 (included in Azure Cloud Shell).
The Apache Maven build tool.

Prepare the working environment


We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize
the following configuration for your specific needs.
Set up those environment variables by using the following commands:

AZ_RESOURCE_GROUP=database-workshop
AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
AZ_LOCATION=<YOUR_AZURE_REGION>
AZ_SQL_SERVER_USERNAME=demo
AZ_SQL_SERVER_PASSWORD=<YOUR_AZURE_SQL_PASSWORD>
AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>

Replace the placeholders with the following values, which are used throughout this article:
<YOUR_DATABASE_NAME> : The name of your Azure SQL Database server. It should be unique across Azure.
<YOUR_AZURE_REGION> : The Azure region you'll use. You can use eastus by default, but we recommend that
you configure a region closer to where you live. You can get the full list of available regions by running
az account list-locations .
<YOUR_AZURE_SQL_PASSWORD> : The password of your Azure SQL Database server. That password should have a
minimum of eight characters. The characters should be from three of the following categories: English
uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and
so on).
<YOUR_LOCAL_IP_ADDRESS> : The IP address of your local computer, from which you'll run your Java application.
One convenient way to find it is to point your browser to whatismyip.akamai.com.
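If you're working in a Bash shell, one convenient way to capture that value directly into the environment variable is shown below. This is a convenience sketch; the Akamai service simply echoes your public IP address as plain text.

# Capture your public IP address into the variable used later by the firewall rule
AZ_LOCAL_IP_ADDRESS=$(curl -s http://whatismyip.akamai.com)
echo $AZ_LOCAL_IP_ADDRESS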
Next, create a resource group using the following command:

az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
| jq
NOTE
We use the jq utility to display JSON data and make it more readable. This utility is installed by default on Azure Cloud
Shell. If you don't like that utility, you can safely remove the | jq part of all the commands we'll use.

Create an Azure SQL Database instance


The first thing we'll create is a managed Azure SQL Database server.

NOTE
You can read more detailed information about creating Azure SQL Database servers in Quickstart: Create an Azure SQL
Database single database.

In Azure Cloud Shell, run the following command:

az sql server create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME \
    --location $AZ_LOCATION \
    --admin-user $AZ_SQL_SERVER_USERNAME \
    --admin-password $AZ_SQL_SERVER_PASSWORD \
    | jq

This command creates an Azure SQL Database server.


Configure a firewall rule for your Azure SQL Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't allow any incoming
connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to
access the database server.
Because you configured your local IP address at the beginning of this article, you can open the server's firewall by
running the following command:

az sql server firewall-rule create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME-database-allow-local-ip \
    --server $AZ_DATABASE_NAME \
    --start-ip-address $AZ_LOCAL_IP_ADDRESS \
    --end-ip-address $AZ_LOCAL_IP_ADDRESS \
    | jq

Configure an Azure SQL database


The Azure SQL Database server that you created earlier is empty. It doesn't have any database that you can use
with the Java application. Create a new database called demo by running the following command:

az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
| jq

Create a new Java project


Using your favorite IDE, create a new Java project, and add a pom.xml file in its root directory:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>demo</name>

<properties>
<java.version>1.8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>

<dependencies>
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>7.4.1.jre8</version>
</dependency>
</dependencies>
</project>

This file is an Apache Maven pom.xml file that configures our project to use:


Java 8
A recent SQL Server driver for Java
Prepare a configuration file to connect to Azure SQL database
Create a src/main/resources/application.properties file, and add:

url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
user=demo@$AZ_DATABASE_NAME
password=$AZ_SQL_SERVER_PASSWORD

Replace the two $AZ_DATABASE_NAME variables with the value that you configured at the beginning of this
article.
Replace the $AZ_SQL_SERVER_PASSWORD variable with the value that you configured at the beginning of this
article.
Create an SQL file to generate the database schema
We will use a src/main/resources/schema.sql file in order to create a database schema. Create that file, with the
following content:

DROP TABLE IF EXISTS todo;


CREATE TABLE todo (id INT PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BIT);

Code the application


Connect to the database
Next, add the Java code that will use JDBC to store and retrieve data from your Azure SQL database.
Create a src/main/java/DemoApplication.java file that contains:
package com.example.demo;

import java.sql.*;
import java.util.*;
import java.util.logging.Logger;

public class DemoApplication {

    private static final Logger log;

    static {
        System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
        log = Logger.getLogger(DemoApplication.class.getName());
    }

    public static void main(String[] args) throws Exception {
        log.info("Loading application properties");
        Properties properties = new Properties();
        properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));

        log.info("Connecting to the database");
        Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
        log.info("Database connection test: " + connection.getCatalog());

        log.info("Create database schema");
        Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
        Statement statement = connection.createStatement();
        while (scanner.hasNextLine()) {
            statement.execute(scanner.nextLine());
        }

        /*
        Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
        insertData(todo, connection);
        todo = readData(connection);
        todo.setDetails("congratulations, you have updated data!");
        updateData(todo, connection);
        deleteData(todo, connection);
        */

        log.info("Closing database connection");
        connection.close();
    }
}

This Java code uses the application.properties and schema.sql files that we created earlier to connect to the
SQL Server database and create a schema that will store our data.
In this file, you can see that we commented out the calls to the methods that insert, read, update, and delete data.
We will code those methods in the rest of this article, and you will be able to uncomment them one after the other.

NOTE
The database credentials are stored in the user and password properties of the application.properties file. Those
credentials are used when executing DriverManager.getConnection(properties.getProperty("url"), properties),
as the properties file is passed as an argument.

You can now execute this main class with your favorite tool:
Using your IDE, you should be able to right-click on the DemoApplication class and execute it.
Using Maven, you can run the application by executing:
mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"

The application should connect to the Azure SQL Database, create a database schema, and then close the
connection, as you should see in the console logs:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Closing database connection

Create a domain class


Create a new Todo Java class, next to the DemoApplication class, and add the following code:
package com.example.demo;

public class Todo {

    private Long id;
    private String description;
    private String details;
    private boolean done;

    public Todo() {
    }

    public Todo(Long id, String description, String details, boolean done) {
        this.id = id;
        this.description = description;
        this.details = details;
        this.done = done;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    public String getDetails() {
        return details;
    }

    public void setDetails(String details) {
        this.details = details;
    }

    public boolean isDone() {
        return done;
    }

    public void setDone(boolean done) {
        this.done = done;
    }

    @Override
    public String toString() {
        return "Todo{" +
                "id=" + id +
                ", description='" + description + '\'' +
                ", details='" + details + '\'' +
                ", done=" + done +
                '}';
    }
}

This class is a domain model mapped to the todo table that you created when executing the schema.sql script.
Insert data into Azure SQL database
In the src/main/java/DemoApplication.java file, after the main method, add the following method to insert data
into the database:

private static void insertData(Todo todo, Connection connection) throws SQLException {
    log.info("Insert data");
    PreparedStatement insertStatement = connection
            .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");

    insertStatement.setLong(1, todo.getId());
    insertStatement.setString(2, todo.getDescription());
    insertStatement.setString(3, todo.getDetails());
    insertStatement.setBoolean(4, todo.isDone());
    insertStatement.executeUpdate();
}

You can now uncomment the two following lines in the main method:

Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection);

Executing the main class should now produce the following output:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Closing database connection

Reading data from Azure SQL database


Let's read the data previously inserted, to validate that our code works correctly.
In the src/main/java/DemoApplication.java file, after the insertData method, add the following method to read
data from the database:

private static Todo readData(Connection connection) throws SQLException {
    log.info("Read data");
    PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
    ResultSet resultSet = readStatement.executeQuery();
    if (!resultSet.next()) {
        log.info("There is no data in the database!");
        return null;
    }
    Todo todo = new Todo();
    todo.setId(resultSet.getLong("id"));
    todo.setDescription(resultSet.getString("description"));
    todo.setDetails(resultSet.getString("details"));
    todo.setDone(resultSet.getBoolean("done"));
    log.info("Data read from the database: " + todo.toString());
    return todo;
}

You can now uncomment the following line in the main method:

todo = readData(connection);

Executing the main class should now produce the following output:
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Closing database connection

Updating data in Azure SQL Database


Let's update the data we previously inserted.
Still in the src/main/java/DemoApplication.java file, after the readData method, add the following method to
update data inside the database:

private static void updateData(Todo todo, Connection connection) throws SQLException {
    log.info("Update data");
    PreparedStatement updateStatement = connection
            .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");

    updateStatement.setString(1, todo.getDescription());
    updateStatement.setString(2, todo.getDetails());
    updateStatement.setBoolean(3, todo.isDone());
    updateStatement.setLong(4, todo.getId());
    updateStatement.executeUpdate();
    readData(connection);
}

You can now uncomment the two following lines in the main method:

todo.setDetails("congratulations, you have updated data!");
updateData(todo, connection);

Executing the main class should now produce the following output:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Update data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have updated data!', done=true}
[INFO ] Closing database connection

Deleting data in Azure SQL database


Finally, let's delete the data we previously inserted.
Still in the src/main/java/DemoApplication.java file, after the updateData method, add the following method to
delete data inside the database:
private static void deleteData(Todo todo, Connection connection) throws SQLException {
log.info("Delete data");
PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
deleteStatement.setLong(1, todo.getId());
deleteStatement.executeUpdate();
readData(connection);
}

You can now uncomment the following line in the main method:

deleteData(todo, connection);

Executing the main class should now produce the following output:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Update data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have updated data!', done=true}
[INFO ] Delete data
[INFO ] Read data
[INFO ] There is no data in the database!
[INFO ] Closing database connection

Conclusion and resources clean up


Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure SQL
database.
To clean up all resources used during this quickstart, delete the resource group using the following command:

az group delete \
--name $AZ_RESOURCE_GROUP \
--yes

Next steps
Design your first database in Azure SQL Database
Microsoft JDBC Driver for SQL Server
Report issues/ask questions
What is Azure SQL Database?

APPLIES TO: Azure SQL Database


Azure SQL Database is a fully managed platform as a service (PaaS) database engine that handles most of the
database management functions such as upgrading, patching, backups, and monitoring without user
involvement. Azure SQL Database is always running on the latest stable version of the SQL Server database
engine and patched OS with 99.99% availability. PaaS capabilities built into Azure SQL Database enable you to
focus on the domain-specific database administration and optimization activities that are critical for your
business.
With Azure SQL Database, you can create a highly available and high-performance data storage layer for the
applications and solutions in Azure. SQL Database can be the right choice for a variety of modern cloud
applications because it enables you to process both relational data and non-relational structures, such as graphs,
JSON, spatial, and XML.
Azure SQL Database is based on the latest stable version of the Microsoft SQL Server database engine. You can
use advanced query processing features, such as high-performance in-memory technologies and intelligent
query processing. In fact, the newest capabilities of SQL Server are released first to SQL Database, and then to
SQL Server itself. You get the newest SQL Server capabilities with no overhead for patching or upgrading, tested
across millions of databases.
SQL Database enables you to easily define and scale performance within two different purchasing models: a
vCore-based purchasing model and a DTU-based purchasing model. SQL Database is a fully managed service
that has built-in high availability, backups, and other common maintenance operations. Microsoft handles all
patching and updating of the SQL and operating system code. You don't have to manage the underlying
infrastructure.
If you're new to Azure SQL Database, check out the Azure SQL Database Overview video from our in-depth
Azure SQL video series:

Deployment models
Azure SQL Database provides the following deployment options for a database:
Single database represents a fully managed, isolated database. You might use this option if you have modern
cloud applications and microservices that need a single reliable data source. A single database is similar to a
contained database in the SQL Server database engine.
Elastic pool is a collection of single databases with a shared set of resources, such as CPU or memory. Single
databases can be moved into and out of an elastic pool.

IMPORTANT
To understand the feature differences between SQL Database, SQL Server, and Azure SQL Managed Instance, as well as
the differences among different Azure SQL Database options, see SQL Database features.

SQL Database delivers predictable performance with multiple resource types, service tiers, and compute sizes. It
provides dynamic scalability with no downtime, built-in intelligent optimization, global scalability and
availability, and advanced security options. These capabilities allow you to focus on rapid app development and
accelerating your time-to-market, rather than on managing virtual machines and infrastructure. SQL Database is
currently in 38 datacenters around the world, so you can run your database in a datacenter near you.

Scalable performance and pools


You can define the amount of resources assigned.
With single databases, each database is isolated from others and is portable. Each has its own guaranteed
amount of compute, memory, and storage resources. The amount of the resources assigned to the database
is dedicated to that database, and isn't shared with other databases in Azure. You can dynamically scale single
database resources up and down. The single database option provides different compute, memory, and
storage resources for different needs. For example, you can get 1 to 128 vCores, or 32 GB to 4 TB. The
Hyperscale service tier for single databases enables you to scale to 100 TB, with fast backup and restore
capabilities.
With elastic pools, you can assign resources that are shared by all databases in the pool. You can create a new
database, or move the existing single databases into a resource pool to maximize the use of resources and
save money. This option also gives you the ability to dynamically scale elastic pool resources up and down.
You can build your first app on a small, single database at a low cost per month in the General Purpose service
tier. You can then change its service tier manually or programmatically at any time to the Business Critical or
Hyperscale service tier, to meet the needs of your solution. You can adjust performance without downtime to
your app or to your customers. Dynamic scalability enables your database to transparently respond to rapidly
changing resource requirements. You pay for only the resources that you need when you need them.
Dynamic scalability is different from autoscale. Autoscale is when a service scales automatically based on
criteria, whereas dynamic scalability allows for manual scaling without downtime. The single database option
supports manual dynamic scalability, but not autoscale. For a more automatic experience, consider using elastic
pools, which allow databases to share resources in a pool based on individual database needs. Another option is
to use scripts that can help automate scalability for a single database. For an example, see Use PowerShell to
monitor and scale a single database.
Purchasing models
SQL Database offers the following purchasing models:
The vCore-based purchasing model lets you choose the number of vCores, the amount of memory, and
the amount and speed of storage. The vCore-based purchasing model also allows you to use Azure
Hybrid Benefit for SQL Server to gain cost savings. For more information about the Azure Hybrid Benefit,
see the Frequently asked questions section later in this article.
The DTU-based purchasing model offers a blend of compute, memory, and I/O resources in three service
tiers, to support light to heavy database workloads. Compute sizes within each tier provide a different
mix of these resources, to which you can add additional storage resources.
Service tiers
Azure SQL Database offers three service tiers:
The General Purpose/Standard service tier is designed for common workloads. It offers budget-oriented
balanced compute and storage options.
The Business Critical/Premium service tier is designed for OLTP applications with high transaction rates and
low latency I/O requirements. It offers the highest resilience to failures by using several isolated replicas.
The Hyperscale service tier is designed for most business workloads. Hyperscale provides great flexibility and
high performance with independently scalable compute and storage resources. It offers higher resilience to
failures by allowing configuration of more than one isolated database replica.
Serverless compute
The serverless compute tier is available within the vCore-based purchasing model when you select the General
Purpose service tier.
The serverless compute tier automatically scales compute based on workload demand, and bills for the amount
of compute used per second. The serverless compute tier automatically pauses databases during inactive
periods when only storage is billed, and automatically resumes databases when activity returns.
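For example, a serverless General Purpose database can be created with the Azure CLI. This is a sketch only; the resource group, server, database name, and sizing values are placeholders to adapt to your environment.

# Create a serverless database that scales between 0.5 and 2 vCores
# and pauses automatically after 60 minutes of inactivity
az sql db create \
    --resource-group myResourceGroup \
    --server mysqlserver \
    --name mydb \
    --edition GeneralPurpose \
    --family Gen5 \
    --compute-model Serverless \
    --min-capacity 0.5 \
    --capacity 2 \
    --auto-pause-delay 60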
Elastic pools to maximize resource utilization
For many businesses and applications, being able to create single databases and dial performance up or down
on demand is enough, especially if usage patterns are relatively predictable. Unpredictable usage patterns can
make it hard to manage costs and your business model. Elastic pools are designed to solve this problem. You
allocate performance resources to a pool rather than an individual database. You pay for the collective
performance resources of the pool rather than for single database performance.

With elastic pools, you don't need to focus on dialing database performance up and down as demand for
resources fluctuates. The pooled databases consume the performance resources of the elastic pool as needed.
Pooled databases consume but don't exceed the limits of the pool, so your cost remains predictable even if
individual database usage doesn't.
You can add and remove databases to the pool, scaling your app from a handful of databases to thousands, all
within a budget that you control. You can also control the minimum and maximum resources available to
databases in the pool, to ensure that no database in the pool uses all the pool resources, and that every pooled
database has a guaranteed minimum amount of resources. To learn more about design patterns for software as
a service (SaaS) applications that use elastic pools, see Design patterns for multi-tenant SaaS applications with
SQL Database.
Scripts can help with monitoring and scaling elastic pools. For an example, see Use PowerShell to monitor and
scale an elastic pool in Azure SQL Database.
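As an illustration, an elastic pool can be created and an existing single database moved into it with the Azure CLI. The names and sizing below are placeholders, and the commands are a sketch rather than a complete walkthrough.

# Create a General Purpose pool with 4 vCores shared by all databases in the pool
az sql elastic-pool create \
    --resource-group myResourceGroup \
    --server mysqlserver \
    --name mypool \
    --edition GeneralPurpose \
    --family Gen5 \
    --capacity 4

# Move an existing single database into the pool
az sql db update \
    --resource-group myResourceGroup \
    --server mysqlserver \
    --name mydb \
    --elastic-pool mypool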
Blend single databases with pooled databases
You can blend single databases with elastic pools, and change the service tiers of single databases and elastic
pools to adapt to your situation. You can also mix and match other Azure services with SQL Database to meet
your unique app design needs, drive cost and resource efficiencies, and unlock new business opportunities.

Extensive monitoring and alerting capabilities


Azure SQL Database provides advanced monitoring and troubleshooting features that help you get deeper
insights into workload characteristics. These features and tools include:
The built-in monitoring capabilities provided by the latest version of the SQL Server database engine. They
enable you to find real-time performance insights.
PaaS monitoring capabilities provided by Azure that enable you to monitor and troubleshoot a large number
of database instances.
Query Store, a built-in SQL Server monitoring feature, records the performance of your queries in real time, and
enables you to identify the potential performance issues and the top resource consumers. Automatic tuning and
recommendations provide advice regarding the queries with the regressed performance and missing or
duplicated indexes. Automatic tuning in SQL Database enables you to either manually apply the scripts that can
fix the issues, or let SQL Database apply the fix. SQL Database can also test and verify that the fix provides some
benefit, and retain or revert the change depending on the outcome. In addition to Query Store and automatic
tuning capabilities, you can use standard DMVs and XEvents to monitor the workload performance.
Azure provides built-in performance monitoring and alerting tools, combined with performance ratings, that
enable you to monitor the status of thousands of databases. Using these tools, you can quickly assess the impact
of scaling up or down, based on your current or projected performance needs. Additionally, SQL Database can
emit metrics and resource logs for easier monitoring. You can configure SQL Database to store resource usage,
workers and sessions, and connectivity into one of these Azure resources:
Azure Storage : For archiving vast amounts of telemetry for a small price.
Azure Event Hubs : For integrating SQL Database telemetry with your custom monitoring solution or hot
pipelines.
Azure Monitor logs : For a built-in monitoring solution with reporting, alerting, and mitigating capabilities.
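For example, a diagnostic setting can route a database's resource logs and metrics to a Log Analytics workspace. This is a sketch; the resource IDs are placeholders, and the log and metric categories you enable depend on what you want to monitor.

az monitor diagnostic-settings create \
    --name sqldb-diagnostics \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/mysqlserver/databases/mydb" \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
    --logs '[{"category": "QueryStoreRuntimeStatistics", "enabled": true}]' \
    --metrics '[{"category": "Basic", "enabled": true}]'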

Availability capabilities
Azure SQL Database enables your business to continue operating during disruptions. In a traditional SQL Server
environment, you generally have at least two machines locally set up. These machines have exact, synchronously
maintained, copies of the data to protect against a failure of a single machine or component. This environment
provides high availability, but it doesn't protect against a natural disaster destroying your datacenter.
Disaster recovery assumes that a catastrophic event is geographically localized enough to have another
machine or set of machines with a copy of your data far away. In SQL Server, you can use Always On Availability
Groups running in async mode to get this capability. People often don't want to wait for replication to happen
that far away before committing a transaction, so there's potential for data loss when you do unplanned
failovers.
Databases in the Premium and Business Critical service tiers already do something similar to the
synchronization of an availability group. Databases in lower service tiers provide redundancy through storage
by using a different but equivalent mechanism. Built-in logic helps protect against a single machine failure. The
active geo-replication feature gives you the ability to protect against disaster where a whole region is destroyed.
Azure Availability Zones help protect against the outage of a single datacenter building within a single region,
such as the loss of power or network to a building. In SQL Database, you place the different replicas in different
availability zones (different buildings, effectively).
In fact, the service level agreement (SLA) of Azure, powered by a global network of Microsoft-managed
datacenters, helps keep your app running 24/7. The Azure platform fully manages every database, and it
guarantees no data loss and a high percentage of data availability. Azure automatically handles patching,
backups, replication, failure detection, underlying potential hardware, software or network failures, deploying
bug fixes, failovers, database upgrades, and other maintenance tasks. Standard availability is achieved by a
separation of compute and storage layers. Premium availability is achieved by integrating compute and storage
on a single node for performance, and then implementing technology similar to Always On Availability Groups.
For a full discussion of the high availability capabilities of Azure SQL Database, see SQL Database availability.
In addition, SQL Database provides built-in business continuity and global scalability features. These include:
Automatic backups:
SQL Database automatically performs full, differential, and transaction log backups of databases to
enable you to restore to any point in time. For single databases and pooled databases, you can configure
SQL Database to store full database backups to Azure Storage for long-term backup retention. For
managed instances, you can also perform copy-only backups for long-term backup retention.
Point-in-time restores:
All SQL Database deployment options support recovery to any point in time within the automatic backup
retention period for any database.
Active geo-replication:
The single database and pooled databases options allow you to configure up to four readable secondary
databases in either the same or globally distributed Azure datacenters. For example, if you have a SaaS
application with a catalog database that has a high volume of concurrent read-only transactions, use
active geo-replication to enable global read scale. This removes bottlenecks on the primary that are due
to read workloads. For managed instances, use auto-failover groups.
Auto-failover groups:
All SQL Database deployment options allow you to use failover groups to enable high availability and
load balancing at global scale. This includes transparent geo-replication and failover of large sets of
databases, elastic pools, and managed instances. Failover groups enable the creation of globally
distributed SaaS applications, with minimal administration overhead. This leaves all the complex
monitoring, routing, and failover orchestration to SQL Database (see the sketch after this list).
Zone-redundant databases:
SQL Database allows you to provision Premium or Business Critical databases or elastic pools across
multiple availability zones. Because these databases and elastic pools have multiple redundant replicas
for high availability, placing these replicas into multiple availability zones provides higher resilience. This
includes the ability to recover automatically from the datacenter scale failures, without data loss.
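To illustrate the auto-failover group capability listed above, a failover group spanning two logical servers can be created with the Azure CLI. This is a sketch; the group, server, and database names are placeholders, and both servers must already exist.

az sql failover-group create \
    --name myfailovergroup \
    --resource-group myResourceGroup \
    --server my-primary-server \
    --partner-server my-secondary-server \
    --add-db mydb \
    --failover-policy Automatic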

Built-in intelligence
With SQL Database, you get built-in intelligence that helps you dramatically reduce the costs of running and
managing databases, and that maximizes both performance and security of your application. Running millions
of customer workloads around the clock, SQL Database collects and processes a massive amount of telemetry
data, while also fully respecting customer privacy. Various algorithms continuously evaluate the telemetry data
so that the service can learn and adapt with your application.
Automatic performance monitoring and tuning
SQL Database provides detailed insight into the queries that you need to monitor. SQL Database learns about
your database patterns, and enables you to adapt your database schema to your workload. SQL Database
provides performance tuning recommendations, where you can review tuning actions and apply them.
However, constantly monitoring a database is a hard and tedious task, especially when you're dealing with many
databases. Intelligent Insights does this job for you by automatically monitoring SQL Database performance at
scale. It informs you of performance degradation issues, it identifies the root cause of each issue, and it provides
performance improvement recommendations when possible.
Managing a huge number of databases might be impossible to do efficiently even with all available tools and
reports that SQL Database and Azure provide. Instead of monitoring and tuning your database manually, you
might consider delegating some of the monitoring and tuning actions to SQL Database by using automatic
tuning. SQL Database automatically applies recommendations, tests, and verifies each of its tuning actions to
ensure the performance keeps improving. This way, SQL Database automatically adapts to your workload in a
controlled and safe way. Automatic tuning means that the performance of your database is carefully monitored
and compared before and after every tuning action. If the performance doesn't improve, the tuning action is
reverted.
Many of our partners that run SaaS multi-tenant apps on top of SQL Database are relying on automatic
performance tuning to make sure their applications always have stable and predictable performance. For them,
this feature tremendously reduces the risk of having a performance incident in the middle of the night. In
addition, because part of their customer base also uses SQL Server, they're using the same indexing
recommendations provided by SQL Database to help their SQL Server customers.
Two automatic tuning aspects are available in SQL Database:
Automatic index management : Identifies indexes that should be added in your database, and indexes that
should be removed.
Automatic plan correction : Identifies problematic plans and fixes SQL plan performance problems.
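For example, both aspects can be turned on for a database with T-SQL, here issued through sqlcmd. This is a sketch; the server, database, and credentials are placeholders, and the options assume the documented ALTER DATABASE ... SET AUTOMATIC_TUNING syntax.

# Enable automatic plan correction
sqlcmd -S mysqlserver.database.windows.net -d mydb -U myadmin -P '<password>' \
    -Q "ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);"

# Enable automatic index management
sqlcmd -S mysqlserver.database.windows.net -d mydb -U myadmin -P '<password>' \
    -Q "ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON);"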
Adaptive query processing
You can use adaptive query processing, including interleaved execution for multi-statement table-valued
functions, batch mode memory grant feedback, and batch mode adaptive joins. Each of these adaptive query
processing features applies similar "learn and adapt" techniques, helping further address performance issues
related to historically intractable query optimization problems.

Advanced security and compliance


SQL Database provides a range of built-in security and compliance features to help your application meet
various security and compliance requirements.

IMPORTANT
Microsoft has certified Azure SQL Database (all deployment options) against a number of compliance standards. For more
information, see the Microsoft Azure Trust Center, where you can find the most current list of SQL Database compliance
certifications.

Advanced threat protection


Microsoft Defender for SQL is a unified package for advanced SQL security capabilities. It includes functionality
for managing your database vulnerabilities, and detecting anomalous activities that might indicate a threat to
your database. It provides a single location for enabling and managing these capabilities.
Vulnerability assessment:
This service can discover, track, and help you remediate potential database vulnerabilities. It provides
visibility into your security state, and includes actionable steps to resolve security issues, and enhance
your database fortifications.
Threat detection:
This feature detects anomalous activities that indicate unusual and potentially harmful attempts to access
or exploit your database. It continuously monitors your database for suspicious activities, and provides
immediate security alerts on potential vulnerabilities, SQL injection attacks, and anomalous database
access patterns. Threat detection alerts provide details of the suspicious activity, and recommend action
on how to investigate and mitigate the threat.
Auditing for compliance and security
Auditing tracks database events and writes them to an audit log in your Azure storage account. Auditing can
help you maintain regulatory compliance, understand database activity, and gain insight into discrepancies and
anomalies that might indicate business concerns or suspected security violations.
Data encryption
SQL Database helps secure your data by providing encryption. For data in motion, it uses transport layer
security. For data at rest, it uses transparent data encryption. For data in use, it uses Always Encrypted.
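As an example, the transparent data encryption state of a database can be inspected, and explicitly enabled, with the Azure CLI. This is a sketch with placeholder names; TDE is already on by default for newly created databases.

# Show whether transparent data encryption is enabled
az sql db tde show \
    --resource-group myResourceGroup \
    --server mysqlserver \
    --database mydb

# Explicitly enable it (the default for new databases)
az sql db tde set \
    --resource-group myResourceGroup \
    --server mysqlserver \
    --database mydb \
    --status Enabled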
Data discovery and classification
Data discovery and classification provides capabilities built into Azure SQL Database for discovering, classifying,
labeling, and protecting the sensitive data in your databases. It provides visibility into your database
classification state, and tracks the access to sensitive data within the database and beyond its borders.
Azure Active Directory integration and multi-factor authentication
SQL Database enables you to centrally manage identities of database users and other Microsoft services with
Azure Active Directory integration. This capability simplifies permission management and enhances security.
Azure Active Directory supports multi-factor authentication to increase data and application security, while
supporting a single sign-in process.

Easy-to-use tools
SQL Database makes building and maintaining applications easier and more productive. SQL Database allows
you to focus on what you do best: building great apps. You can manage and develop in SQL Database by using
tools and skills you already have.

TOOL DESCRIPTION

The Azure portal A web-based application for managing all Azure services.

Azure Data Studio A cross-platform database tool that runs on Windows, macOS, and Linux.

SQL Server Management Studio A free, downloadable client application for managing any
SQL infrastructure, from SQL Server to SQL Database.

SQL Server Data Tools in Visual Studio A free, downloadable client application for developing SQL
Server relational databases, databases in Azure SQL
Database, Integration Services packages, Analysis Services
data models, and Reporting Services reports.

Visual Studio Code A free, downloadable, open-source code editor for Windows,
macOS, and Linux. It supports extensions, including the
mssql extension for querying Microsoft SQL Server, Azure
SQL Database, and Azure Synapse Analytics.

SQL Database supports building applications with Python, Java, Node.js, PHP, Ruby, and .NET on macOS, Linux,
and Windows. SQL Database supports the same connection libraries as SQL Server.

Create and manage Azure SQL resources with the Azure portal
The Azure portal provides a single page where you can manage all of your Azure SQL resources including your
SQL Server on Azure virtual machines (VMs).
To access the Azure SQL page, from the Azure portal menu, select Azure SQL or search for and select Azure
SQL in any page.

NOTE
Azure SQL provides a quick and easy way to access all of your SQL resources in the Azure portal, including single and
pooled databases in Azure SQL Database as well as the logical server hosting them, SQL Managed Instances, and SQL
Server on Azure VMs. Azure SQL is not a service or resource, but rather a family of SQL-related services.

To manage existing resources, select the desired item in the list. To create new Azure SQL resources, select +
Create .

After selecting + Create , view additional information about the different options by selecting Show details on
any tile.
For details, see:
Create a single database
Create an elastic pool
Create a managed instance
Create a SQL virtual machine

SQL Database frequently asked questions


Can I control when patching downtime occurs?
The maintenance window feature allows you to configure predictable maintenance window schedules for
eligible databases in Azure SQL Database. Maintenance window advance notifications are available for
databases configured to use a non-default maintenance window.
How do I plan for maintenance events?
Patching is generally not noticeable if you employ retry logic in your app. For more information, see Planning
for Azure maintenance events in Azure SQL Database.

Engage with the SQL Server engineering team


DBA Stack Exchange: Ask database administration questions.
Stack Overflow: Ask development questions.
Microsoft Q&A question page: Ask technical questions.
Feedback: Report bugs and request features.
Reddit: Discuss SQL Server.

Next steps
See the pricing page for cost comparisons and calculators regarding single databases and elastic pools.
See these quickstarts to get started:
Create a database in the Azure portal
Create a database with the Azure CLI
Create a database using PowerShell
For a set of Azure CLI and PowerShell samples, see:
Azure CLI samples for SQL Database
Azure PowerShell samples for SQL Database
For information about new capabilities as they're announced, see Azure Roadmap for SQL Database.
See the Azure SQL Database blog, where SQL Server product team members blog about SQL Database
news and features.
What's new in Azure SQL Database?

APPLIES TO: Azure SQL Database


This article summarizes the documentation changes associated with new features and improvements in the
recent releases of Azure SQL Database. To learn more about Azure SQL Database, see the overview.

Preview
The following table lists the features of Azure SQL Database that are currently in preview:

FEATURE DETAILS

Azure Synapse Link for Azure SQL Database Azure Synapse Link for SQL enables near real time analytics
over operational data in Azure SQL Database or SQL Server
2022.

Elastic jobs The elastic jobs feature is the SQL Server Agent replacement
for Azure SQL Database as a PaaS offering.

Elastic queries The elastic queries feature allows for cross-database queries
in Azure SQL Database.

Elastic transactions Elastic transactions allow you to execute transactions distributed among cloud databases in
Azure SQL Database.

Hyperscale short-term retention Retain backups up to 35 days for Hyperscale databases, and
perform a point-in-time restore within the configured
duration.

JavaScript & Python bindings Use JavaScript or Python SQL bindings with Azure Functions.

Ledger The Azure SQL Database ledger feature allows you to cryptographically attest to other parties, such as
auditors or other business parties, that your data hasn't been tampered with.

Maintenance window advance notifications Advance notifications are available for databases configured
to use a non-default maintenance window. Advance
notifications for maintenance windows are in public preview
for Azure SQL Database.

Query editor in the Azure portal The query editor in the portal allows you to run queries
against your Azure SQL Database directly from the Azure
portal.

Query Store hints Use query hints to optimize your query execution via the
OPTION clause.

Reverse migrate from Hyperscale Reverse migration to the General Purpose service tier allows
customers who have recently migrated an existing database
in Azure SQL Database to the Hyperscale service tier to
move back in an emergency, should Hyperscale not meet
their needs. While reverse migration is initiated by a service
tier change, it's essentially a size-of-data move between
different architectures.

SQL Analytics Azure SQL Analytics is an advanced cloud monitoring solution for monitoring performance of all of
your Azure SQL databases at scale and across multiple subscriptions in a single view. Azure SQL Analytics collects
and visualizes key performance metrics with built-in intelligence for performance troubleshooting.

SQL Database emulator The Azure SQL Database emulator provides the ability to
locally validate database and query design together with
client application code in a simple and frictionless model as
part of the application development process.

SQL Database Projects extension An extension to develop databases for Azure SQL Database
with Azure Data Studio and VS Code. A SQL project is a local
representation of SQL objects that comprise the schema for
a single database, such as tables, stored procedures, or
functions.

SQL Insights SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL
family. SQL Insights (preview) uses dynamic management views to expose the data you need to monitor health,
diagnose problems, and tune performance.

Zone redundant configuration for Hyperscale databases The zone redundant configuration feature utilizes Azure
Availability Zones to replicate databases across multiple
physical locations within an Azure region. By selecting zone
redundancy, you can make your Hyperscale databases
resilient to a much larger set of failures, including
catastrophic datacenter outages, without any changes to the
application logic.

General availability (GA)


The following table lists the features of Azure SQL Database that have transitioned from preview to general
availability (GA) within the last 12 months:

FEATURE GA MONTH DETAILS

Named Replicas for Hyperscale databases June 2022 Named Replicas enable a broad variety of read scale-out
scenarios, and easily implement near-real time hybrid transactional and analytical processing (HTAP) solutions.

Active geo-replication and Auto-failover groups for Hyperscale databases June 2022 Active geo-replication and
Auto-failover groups provide a turnkey business continuity solution for Azure SQL Hyperscale Database that lets
you perform quick disaster recovery of databases in case of a regional disaster or a large-scale outage.

Ledger May 2022 The ledger feature in Azure SQL Database allows you to cryptographically attest to other parties,
such as auditors or other business parties, that your data hasn't been tampered with.

Change data capture April 2022 Change data capture (CDC) lets you
track all the changes that occur on a
database. Though this feature has
been available for SQL Server for quite
some time, using it with Azure SQL
Database is now generally available.

Zone redundant configuration for General Purpose tier April 2022 The zone redundant configuration feature
utilizes Azure Availability Zones to replicate databases across multiple physical locations within an Azure region.
By selecting zone redundancy, you can make your provisioned and serverless General Purpose databases and elastic
pools resilient to a much larger set of failures, including catastrophic datacenter outages, without any changes to
the application logic.

Maintenance window March 2022 The maintenance window feature allows you to configure a maintenance schedule
for your Azure SQL Database. Maintenance window advance notifications, however, are in preview.

Storage redundancy for Hyperscale databases March 2022 When creating a Hyperscale database, you can choose
your preferred storage type: read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS), or locally
redundant storage (LRS) Azure standard storage. The selected storage redundancy option will be used for the
lifetime of the database for both data storage redundancy and backup storage redundancy.

Azure Active Directory-only authentication November 2021 It's possible to configure your Azure SQL Database to
allow authentication only from Azure Active Directory.

Azure AD service principal September 2021 Azure Active Directory (Azure AD)
supports user creation in Azure SQL
Database on behalf of Azure AD
applications (service principals).

Audit management operations March 2021 Azure SQL audit capabilities enable
you to audit operations done by
Microsoft support engineers when
they need to access your SQL assets
during a support request, enabling
more transparency in your workforce.

Documentation changes
Learn about significant changes to the Azure SQL Database documentation.
June 2022
CHANGES DETAILS

Named Replicas for Hyperscale databases GA Named Replicas enable a broad variety of read scale-out
scenarios, and easily implement near-real time hybrid
transactional and analytical processing (HTAP) solutions. This
feature is now generally available. See named replicas to
learn more.

Active geo-replication and Auto-failover groups for Hyperscale databases Active geo-replication and Auto-failover
groups provide a turnkey business continuity solution for Azure SQL Hyperscale Database that lets you perform
quick disaster recovery of databases in case of a regional disaster or a large-scale outage.

May 2022
CHANGES DETAILS

Ledger GA The ledger feature in SQL Database is now generally available. Use the ledger feature to
cryptographically attest to other parties, such as auditors or other business parties, that your data hasn't been
tampered with. See Ledger to learn more.

JavaScript & Python bindings Support for JavaScript and Python SQL bindings for Azure
Functions is currently in preview. See Azure SQL bindings for
Azure Functions to learn more.

Local development experience The Azure SQL Database local development experience is a
combination of tools and procedures that empowers
application developers and database professionals to design,
edit, build/validate, publish, and run database schemas for
databases directly on their workstation using an Azure SQL
Database containerized environment. To learn more, see
Local development experience for Azure SQL Database.

SQL Database emulator The Azure SQL Database emulator provides the ability to
locally validate database and query design together with
client application code in a simple and frictionless model as
part of the application development process. The SQL
Database emulator is currently in preview. Review SQL
Database emulator to learn more.

SDK-style SQL projects Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL
Database Projects extension in Azure Data Studio or Visual
Studio Code. This feature is currently in preview. To learn
more, see SDK-style SQL projects.

Azure Synapse Link for Azure SQL Database Azure Synapse Link enables near real-time analytics over
operational data in SQL Server 2022 and Azure SQL
Database. With a seamless integration between operational
stores and Azure Synapse Analytics dedicated SQL pools,
Azure Synapse Link enables you to run analytics, business
intelligence and machine learning scenarios on your
operational data with minimum impact on source databases
with a new change feed technology. For more information,
see What is Synapse Link for SQL? (Preview).

April 2022
CHANGES DETAILS

General Purpose tier Zone redundancy GA Enabling zone redundancy for your provisioned and
serverless General Purpose databases and elastic pools is
now generally available in select regions. To learn more,
including region availability see General Purpose zone
redundancy.

Change data capture GA Change data capture (CDC) lets you track all the changes
that occur on a database. Though this feature has been
available for SQL Server for quite some time, using it with
Azure SQL Database is now generally available. To learn
more, see Change data capture.

March 2022
CHANGES DETAILS

GA for maintenance window The maintenance window feature allows you to configure a
maintenance schedule for your Azure SQL Database and
receive advance notifications of maintenance windows.
Maintenance window advance notifications are in public
preview for databases configured to use a non-default
maintenance window.

Hyperscale zone redundant configuration preview It's now possible to create new Hyperscale databases with
zone redundancy to make your databases resilient to a
much larger set of failures. This feature is currently in
preview for the Hyperscale service tier. To learn more, see
Hyperscale zone redundancy.

Hyperscale storage redundancy GA Choosing your storage redundancy for your databases in
the Hyperscale service tier is now generally available. See
Configure backup storage redundancy to learn more.

February 2022
Changes | Details

Hyperscale reverse migration Reverse migration is now in preview. Reverse migration to
the General Purpose service tier allows customers who have
recently migrated an existing database in Azure SQL
Database to the Hyperscale service tier to move back in an
emergency, should Hyperscale not meet their needs. While
reverse migration is initiated by a service tier change, it's
essentially a size-of-data move between different
architectures. Learn about reverse migration from
Hyperscale.

New Hyperscale articles We have reorganized some existing content into new articles
and added new content for Hyperscale. Learn about
Hyperscale distributed functions architecture, how to
manage a Hyperscale database, and how to create a
Hyperscale database.

Free Azure SQL Database Try Azure SQL Database for free using the Azure free
account. To learn more, review Try SQL Database for free.

2021
Changes | Details

Azure AD-only authentication Restricting authentication to your Azure SQL Database only
to Azure Active Directory users is now generally available. To
learn more, see Azure AD-only authentication.

Split what's new The previously combined What's new article has been split
by product - What's new in SQL Database and What's new in
SQL Managed Instance, making it easier to identify what
features are currently in preview, generally available, and
significant documentation changes. Additionally, the Known
Issues in SQL Managed Instance content has moved to its
own page.

Maintenance window support for availability zones You can now use the maintenance window feature if your
Azure SQL Database is deployed to an availability zone. This
feature is currently in preview.

Azure AD-only authentication It's now possible to restrict authentication to your Azure SQL
Database to Azure Active Directory users only. This feature is
currently in preview. To learn more, see Azure AD-only
authentication.

Query store hints It's now possible to use query hints to optimize your query
execution via the OPTION clause. This feature is currently in
preview. To learn more, see Query store hints.

Change data capture Using change data capture (CDC) with Azure SQL Database
is now in preview. To learn more, see Change data capture.

SQL Database ledger SQL Database ledger is in preview, and introduces the ability
to cryptographically attest to other parties, such as auditors
or other business parties, that your data hasn't been
tampered with. To learn more, see Ledger.

Maintenance window The maintenance window feature allows you to configure a
maintenance schedule for your Azure SQL Database,
currently in preview. To learn more, see maintenance window.

SQL insights SQL insights is a comprehensive solution for monitoring any
product in the Azure SQL family. SQL insights uses dynamic
management views to expose the data you need to monitor
health, diagnose problems, and tune performance. To learn
more, see SQL insights.

Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.
Try Azure SQL Database free with Azure free
account

Azure SQL Database is an intelligent, scalable, relational database service built for the cloud. SQL Database is a
fully managed platform as a service (PaaS) database engine that handles most database management functions
such as upgrading, patching, backups, and monitoring without user involvement.
Using an Azure free account, you can try Azure SQL Database for free for 12 months with the following
monthly limit :
1 S0 database with 10 database transaction units and 250 GB storage
This article shows you how to create and use an Azure SQL Database for free using an Azure free account.

Prerequisites
To try Azure SQL Database for free, you need:
An Azure free account. If you don't have one, create a free account before you begin.

Create a database
This article uses the Azure portal to create a SQL Database with public access. Alternatively, you can create a
SQL Database using PowerShell, the Azure CLI, or an ARM template, as sketched below.
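For example, the following Azure CLI sketch creates the S0 database described by the free-account monthly limit above. It is illustrative only: the resource group, server name, region, and password are placeholders you would replace with your own values.

# Create a resource group and a logical server (placeholder names and region)
az group create --name myResourceGroup --location eastus
az sql server create --name mysqlserver12345 --resource-group myResourceGroup \
  --location eastus --admin-user azureuser --admin-password "<complex-password>"

# Create the S0 database with a 250 GB maximum size, matching the free-account monthly limit
az sql db create --name mySampleDatabase --server mysqlserver12345 \
  --resource-group myResourceGroup --service-objective S0 --max-size 250GB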
To create your database, follow these steps:
1. Sign in to the Azure portal with your Azure free account.
2. Search for and select SQL databases :

Alternatively, you can search for and navigate to Free Services, and then select the Azure SQL
Database tile from the list:
3. Select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the free trial Azure
Subscription .
5. For Resource group , select Create new , enter myResourceGroup, and select OK .
6. For Database name , enter mySampleDatabase.
7. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. So enter something like mysqlserver12345, and the portal lets you
know if it's available or not.
Server admin login: Enter azureuser.
Password : Enter a password that meets complexity requirements, and enter it again in the Confirm
password field.
Location : Select a location from the dropdown list.
Select OK .
8. Leave Want to use SQL elastic pool set to No .
9. Under Compute + storage , select Configure database .
10. For the free trial, under Service tier select Standard (For workloads with typical performance
requirements) . Set DTUs to 10 and Data max size (GB) to 250 , and then select Apply .
11. Leave Backup storage redundancy set to Geo-redundant backup storage
12. Select Next: Networking at the bottom of the page.
13. On the Networking tab, for Connectivity method , select Public endpoint .
14. For Firewall rules, set Allow Azure services and resources to access this server set to Yes and
set Add current client IP address to Yes .
15. Leave Connection policy set to Default .
16. For Encrypted Connections, leave Minimum TLS version set to TLS 1.2.
17. Select Next: Security at the bottom of the page.
18. Leave the values unchanged on Security tab.
19. Select Next: Additional settings at the bottom of the page.
20. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
This creates an AdventureWorksLT sample database so there are some tables and data to query and
experiment with, as opposed to an empty blank database.
21. Select Review + create at the bottom of the page.
22. On the Review + create page, after reviewing, select Create .

IMPORTANT
While creating the SQL Database from your Azure free account, you will still see an Estimated cost per month
in the Compute + Storage: Cost Summary blade and Review + create tab. But as long as you are using
your Azure free account and your free service usage is within monthly limits, you won't be charged for the
service. To view usage information, review Monitor and track free services usage later in this article.

Query the database


Once your database is created, you can use the Query editor (preview) in the Azure portal to connect to the
database and query data.
1. In the portal, search for and select SQL databases, and then select your database from the list.
2. On the page for your database, select Query editor (preview) in the navigation menu.
3. Enter your server admin login information, and select OK.
4. Enter the following query in the Query editor pane.

SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM SalesLT.ProductCategory pc
JOIN SalesLT.Product p
ON pc.productcategoryid = p.productcategoryid;

5. Select Run , and then review the query results in the Results pane.

6. Close the Query editor page, and select OK when prompted to discard your unsaved edits.

Monitor and track service usage


You are not charged for the Azure SQL Database included with your Azure free account unless you exceed the
free service limit. To remain within the limit, use the Azure portal to track and monitor your free services usage.
To track usage, follow these steps:
1. In the Azure portal, search for Subscriptions and select the free trial subscription.
2. On the Overview page, scroll down to see the tile Top free services by usage, and then select View
all free services.

3. Locate the meters related to Azure SQL Database to track usage.

The following table describes the values on the track usage page:

Value | Description

Meter Identifies the unit of measure for the service being
consumed. For example, the meter for Azure SQL Database
is SQL Database, Single Standard, S0 DTUs, which tracks the
number of S0 databases used per day, and has a monthly
limit of 1.

Usage/limit The usage of the meter for the current month, and the limit
for the meter.

Status The current status of your usage of the service defined by
the meter. The possible values for status are:
Not in use : You haven't used the meter or the usage for
the meter hasn't reached the billing system.
Exceeded on <Date> : You've exceeded the limit for the
meter on <Date>.
Unlikely to Exceed : You're unlikely to exceed the limit for
the meter.
Exceeds on <Date> : You're likely to exceed the limit for
the meter on <Date>.

IMPORTANT
With an Azure free account, you also get $200 in credit to use in 30 days. During this time, any usage of the service
beyond the free monthly amount is deducted from this credit.
At the end of your first 30 days or after you spend your $200 credit (whichever comes first), you'll only pay for what
you use beyond the free monthly amount of services. To keep getting free services after 30 days, move to pay-as-
you-go pricing. If you don't move to pay as you go, you can't purchase Azure services beyond your $200 credit and
eventually your account and services will be disabled.
For more information, see Azure free account FAQ .

Clean up resources
When you're finished using these resources, you can delete the resource group you created, which will also
delete the server and single database within it.
To delete myResourceGroup and all its resources using the Azure portal:
1. In the portal, search for and select Resource groups , and then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup, and then select Delete .

Next steps
Connect and query your database using different tools and languages:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Getting started with single databases in Azure SQL
Database

APPLIES TO: Azure SQL Database


A single database is a fully managed platform as a service (PaaS) database as a service (DbaaS) that is an ideal
storage engine for modern cloud-born applications. In this section, you'll learn how to quickly configure and
create a single database in Azure SQL Database.

Quickstart overview
In this section, you'll see an overview of available articles that can help you to quickly get started with single
databases. The following quickstarts enable you to quickly create a single database, configure a server-level
firewall rule, and then import a database into the new single database using a .bacpac file:
Create a single database using the Azure portal.
After creating the database, secure it by configuring firewall rules.
If you have an existing database on SQL Server that you want to migrate to Azure SQL Database, install the
Data Migration Assistant (DMA), which analyzes your databases on SQL Server and finds any issue that
could block migration. If no issues are found, you can export your database as a .bacpac file and import it
using the Azure portal or SqlPackage, as sketched below.
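As one hedged illustration, importing a .bacpac that you have uploaded to Azure Blob Storage can also be scripted with the Azure CLI. The storage URI, storage key, server, and database names below are placeholders, and the target database must already exist and be empty:

# Import a .bacpac from Blob Storage into an existing, empty database (placeholder values)
az sql db import --resource-group myResourceGroup --server mysqlserver12345 --name myImportedDb \
  --storage-key-type StorageAccessKey --storage-key "<storage-account-key>" \
  --storage-uri "https://mystorageaccount.blob.core.windows.net/bacpacs/mydatabase.bacpac" \
  --admin-user azureuser --admin-password "<complex-password>"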

Automating management operations


You can use PowerShell or the Azure CLI to create, configure, and scale your database.
Create and configure a single database using PowerShell or Azure CLI
Update your single database and scale resources using PowerShell or Azure CLI
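As a hedged illustration of what such automation can look like, the following Azure CLI sketch scales an existing database to a different service objective. The resource group, server, and database names are placeholders:

# Scale an existing database to the S1 service objective (placeholder names)
az sql db update --resource-group myResourceGroup --server mysqlserver12345 \
  --name mySampleDatabase --service-objective S1

# Confirm the new service objective
az sql db show --resource-group myResourceGroup --server mysqlserver12345 \
  --name mySampleDatabase --query currentServiceObjectiveName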

Migrating to a single database with minimal downtime


These quickstarts enable you to quickly create or import your database to Azure using a .bacpac file. However,
.bacpac and .dacpac files are designed to quickly move databases across different versions of SQL Server and
within Azure SQL, or to implement continuous integration in your DevOps pipeline. This method is not
designed for migrating your production databases with minimal downtime, because you would need to stop
adding new data, wait for the export of the source database to a .bacpac file to complete, and then wait for the
import into Azure SQL Database to complete. All of this waiting results in downtime of your application,
especially for large databases. To move your production database, you need a migration method that
guarantees minimal downtime. For this, use the Azure Database Migration Service (DMS) to migrate your
database with minimal downtime. DMS accomplishes this by incrementally pushing the changes made in
your source database to the single database being restored. This way, you can quickly switch your application
from the source to the target database with minimal downtime.

Hands-on learning modules


The following Microsoft Learn modules help you learn for free about Azure SQL Database.
Provision a database in SQL Database to store application data
Develop and configure an ASP.NET application that queries a database in Azure SQL Database
Secure your database in Azure SQL Database

Next steps
Find a high-level list of supported features in Azure SQL Database.
Learn how to make your database more secure.
Find more advanced how-to's in how to use a single database in Azure SQL Database.
Find more sample scripts written in PowerShell and the Azure CLI.
Learn more about the management API that you can use to configure your databases.
Identify the right Azure SQL Database or Azure SQL Managed Instance SKU for your on-premises database.
Quickstart: Create an Azure SQL Database single
database

In this quickstart, you create a single database in Azure SQL Database using either the Azure portal, a
PowerShell script, or an Azure CLI script. You then query the database using Quer y editor in the Azure portal.

Prerequisites
An active Azure subscription. If you don't have one, create a free account.
The latest version of either Azure PowerShell or Azure CLI.

Create a single database


This quickstart creates a single database in the serverless compute tier.
Portal
Azure CLI
Azure CLI (sql up)
PowerShell

To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the Select SQL Deployment option page.
2. Under SQL databases , leave Resource type set to Single database , and select Create .

3. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
4. For Resource group , select Create new , enter myResourceGroup, and select OK .
5. For Database name , enter mySampleDatabase.
6. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. So enter something like mysqlserver12345, and the portal lets you
know if it's available or not.
Location: Select a location from the dropdown list.
Authentication method: Select Use SQL authentication.
Server admin login: Enter azureuser.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Select OK .
7. Leave Want to use SQL elastic pool set to No .
8. Under Compute + storage , select Configure database .
9. This quickstart uses a serverless database, so leave Service tier set to General Purpose (Scalable
compute and storage options) and set Compute tier to Serverless. Select Apply.

10. Select Next: Networking at the bottom of the page.


11. On the Networking tab, for Connectivity method , select Public endpoint .
12. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
13. Select Next: Security at the bottom of the page.

14. On the Security tab , you have the option to enable Microsoft Defender for SQL. Select Next:
Additional settings at the bottom of the page.
15. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
This creates an AdventureWorksLT sample database so there's some tables and data to query and
experiment with, as opposed to an empty blank database.
16. Select Review + create at the bottom of the page:
17. On the Review + create page, after reviewing, select Create .
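If you prefer scripting this quickstart, the following Azure CLI sketch creates a comparable serverless database. It is illustrative only: the resource group, server name, region, and password are placeholders you would replace with your own values.

# Create a resource group and a logical server (placeholder names and region)
az group create --name myResourceGroup --location eastus
az sql server create --name mysqlserver12345 --resource-group myResourceGroup \
  --location eastus --admin-user azureuser --admin-password "<complex-password>"

# Create a General Purpose serverless database on Gen5 hardware with 2 vCores and sample data
az sql db create --name mySampleDatabase --server mysqlserver12345 \
  --resource-group myResourceGroup --edition GeneralPurpose --compute-model Serverless \
  --family Gen5 --capacity 2 --sample-name AdventureWorksLT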

Query the database


Once your database is created, you can use the Query editor (preview) in the Azure portal to connect to the
database and query data.
1. In the portal, search for and select SQL databases, and then select your database from the list.
2. On the page for your database, select Query editor (preview) in the left menu.
3. Enter your server admin login information, and select OK.
4. Enter the following query in the Query editor pane.

SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM SalesLT.ProductCategory pc
JOIN SalesLT.Product p
ON pc.productcategoryid = p.productcategoryid;

5. Select Run , and then review the query results in the Results pane.

6. Close the Query editor page, and select OK when prompted to discard your unsaved edits.

Clean up resources
Keep the resource group, server, and single database to go on to the next steps, and learn how to connect and
query your database with different methods.
When you're finished using these resources, you can delete the resource group you created, which will also
delete the server and single database within it.

Portal
Azure CLI
Azure CLI (sql up)
PowerShell

To delete myResourceGroup and all its resources using the Azure portal:
1. In the portal, search for and select Resource groups , and then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup, and then select Delete .

Next steps
Connect and query your database using different tools and languages:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Want to optimize and save on your cloud spending?
Start analyzing costs with Cost Management
Quickstart: Create a Hyperscale database in Azure
SQL Database

In this quickstart, you create a logical server in Azure and a Hyperscale database in Azure SQL Database using
the Azure portal, a PowerShell script, or an Azure CLI script, with the option to create one or more High
Availability (HA) replicas. If you would like to use an existing logical server in Azure, you can also create a
Hyperscale database using Transact-SQL.

Prerequisites
An active Azure subscription. If you don't have one, create a free account.
The latest version of either Azure PowerShell or Azure CLI, if you would like to follow the quickstart
programmatically. Alternately, you can complete the quickstart in the Azure portal.
An existing logical server in Azure is required if you would like to create a Hyperscale database with Transact-
SQL. For this approach, you will need to install SQL Server Management Studio (SSMS), Azure Data Studio,
or the client of your choice to run Transact-SQL commands (sqlcmd, etc.).

Create a Hyperscale database


This quickstart creates a single database in the Hyperscale service tier.
Portal
Azure CLI
PowerShell
Transact-SQL

To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the Select SQL Deployment option page.
2. Under SQL databases , leave Resource type set to Single database , and select Create .

3. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
4. For Resource group , select Create new , enter myResourceGroup, and select OK .
5. For Database name , enter mySampleDatabase.
6. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. Enter a name such as mysqlserver12345, and the portal will let you
know if it's available.
Server admin login: Enter azureuser.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Location : Select a location from the dropdown list.
Select OK .
7. Under Compute + storage , select Configure database .
8. This quickstart creates a Hyperscale database. For Service tier, select Hyperscale.

9. Under Compute Hardware , select Change configuration . Review the available hardware
configurations and select the most appropriate configuration for your database. For this example, we will
select the Gen5 configuration.
10. Select OK to confirm the hardware generation.
11. Under Save money , review if you qualify to use Azure Hybrid Benefit for this database. If so, select Yes
and then confirm you have the required license.
12. Optionally, adjust the vCores slider if you would like to increase the number of vCores for your database.
For this example, we will select 2 vCores.
13. Adjust the High-Availability Secondary Replicas slider to create one High Availability (HA) replica.
14. Select Apply .
15. Carefully consider the configuration option for Backup storage redundancy when creating a
Hyperscale database. Storage redundancy can only be specified during the database creation process for
Hyperscale databases. You may choose locally redundant (preview), zone-redundant (preview), or geo-
redundant storage. The selected storage redundancy option will be used for the lifetime of the database
for both data storage redundancy and backup storage redundancy. Existing databases can migrate to
different storage redundancy using database copy or point in time restore.
16. Select Next: Networking at the bottom of the page.
17. On the Networking tab, for Connectivity method , select Public endpoint .
18. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
19. Select Next: Security at the bottom of the page.
20. Optionally, enable Microsoft Defender for SQL.
21. Select Next: Additional settings at the bottom of the page.
22. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
This creates an AdventureWorksLT sample database so there's some tables and data to query and
experiment with, as opposed to an empty blank database.
23. Select Review + create at the bottom of the page:
24. On the Review + create page, after reviewing, select Create .
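For a scripted alternative, the following hedged Azure CLI sketch creates a Hyperscale database with one high-availability replica on an existing logical server. Names are placeholders, and the flags assume a reasonably recent Azure CLI version:

# Create a Hyperscale database with 2 vCores on Gen5 hardware and one HA replica (placeholder names)
az sql db create --name mySampleDatabase --server mysqlserver12345 \
  --resource-group myResourceGroup --edition Hyperscale --family Gen5 --capacity 2 \
  --ha-replicas 1 --backup-storage-redundancy Zone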

Query the database


Once your database is created, you can use the Query editor (preview) in the Azure portal to connect to the
database and query data. If you prefer, you can alternately query the database by connecting with Azure Data
Studio, SQL Server Management Studio (SSMS), or the client of your choice to run Transact-SQL commands
(sqlcmd, etc.).
1. In the portal, search for and select SQL databases, and then select your database from the list.
2. On the page for your database, select Query editor (preview) in the left menu.
3. Enter your server admin login information, and select OK.
4. If you created your Hyperscale database from the AdventureWorksLT sample database, enter the
following query in the Query editor pane.

SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM SalesLT.ProductCategory pc
JOIN SalesLT.Product p
ON pc.productcategoryid = p.productcategoryid;

If you created an empty database using the Transact-SQL sample code, enter another example query in
the Query editor pane, such as the following:

CREATE TABLE dbo.TestTable(
TestTableID int IDENTITY(1,1) NOT NULL,
TestTime datetime NOT NULL,
TestMessage nvarchar(4000) NOT NULL,
CONSTRAINT PK_TestTable_TestTableID PRIMARY KEY CLUSTERED (TestTableID ASC)
)
GO

ALTER TABLE dbo.TestTable ADD CONSTRAINT DF_TestTable_TestTime DEFAULT (getdate()) FOR TestTime
GO

INSERT dbo.TestTable (TestMessage)
VALUES (N'This is a test');
GO

SELECT TestTableID, TestTime, TestMessage
FROM dbo.TestTable;
GO

5. Select Run , and then review the query results in the Results pane.
6. Close the Query editor page, and select OK when prompted to discard your unsaved edits.

Clean up resources
Keep the resource group, server, and single database to go on to the next steps, and learn how to connect and
query your database with different methods.
When you're finished using these resources, you can delete the resource group you created, which will also
delete the server and single database within it.
Portal
Azure CLI
PowerShell
Transact-SQL

To delete myResourceGroup and all its resources using the Azure portal:
1. In the portal, search for and select Resource groups , and then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup, and then select Delete .

Next steps
Connect and query your database using different tools and languages:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Learn more about Hyperscale databases in the following articles:
Hyperscale service tier
Azure SQL Database Hyperscale FAQ
Hyperscale secondary replicas
Azure SQL Database Hyperscale named replicas FAQ
Quickstart: Create a single database in Azure SQL
Database using Bicep

Creating a single database is the quickest and simplest option for creating a database in Azure SQL Database.
This quickstart shows you how to create a single database using Bicep.
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides
concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for
your infrastructure-as-code solutions in Azure.

Prerequisites
If you don't have an Azure subscription, create a free account.

Review the Bicep file


A single database has a defined set of compute, memory, IO, and storage resources using one of two purchasing
models. When you create a single database, you also define a server to manage it and place it within an Azure
resource group in a specified region.
The Bicep file used in this quickstart is from Azure Quickstart Templates.
@description('The name of the SQL logical server.')
param serverName string = uniqueString('sql', resourceGroup().id)

@description('The name of the SQL Database.')
param sqlDBName string = 'SampleDB'

@description('Location for all resources.')
param location string = resourceGroup().location

@description('The administrator username of the SQL logical server.')
param administratorLogin string

@description('The administrator password of the SQL logical server.')
@secure()
param administratorLoginPassword string

resource sqlServer 'Microsoft.Sql/servers@2021-08-01-preview' = {
  name: serverName
  location: location
  properties: {
    administratorLogin: administratorLogin
    administratorLoginPassword: administratorLoginPassword
  }
}

resource sqlDB 'Microsoft.Sql/servers/databases@2021-08-01-preview' = {
  parent: sqlServer
  name: sqlDBName
  location: location
  sku: {
    name: 'Standard'
    tier: 'Standard'
  }
}

The following resources are defined in the Bicep file:


Microsoft.Sql/servers
Microsoft.Sql/servers/databases

Deploy the Bicep file


1. Save the Bicep file as main.bicep to your local computer.
2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.

CLI
PowerShell

az group create --name exampleRG --location eastus

az deployment group create --resource-group exampleRG --template-file main.bicep --parameters administratorLogin=<admin-login>

NOTE
Replace <admin-login> with the administrator username of the SQL logical server. You'll be prompted to enter
administratorLoginPassword .

When the deployment finishes, you should see a message indicating the deployment succeeded.
Review deployed resources
Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
CLI
PowerShell

az resource list --resource-group exampleRG

Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and
its resources.

CLI
PowerShell

az group delete --name exampleRG

Next steps
Create a server-level firewall rule to connect to the single database from on-premises or remote tools. For
more information, see Create a server-level firewall rule.
After you create a server-level firewall rule, connect and query your database using several different tools
and languages.
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
To create a single database using the Azure CLI, see Azure CLI samples.
To create a single database using Azure PowerShell, see Azure PowerShell samples.
To learn how to create Bicep files, see Create Bicep files with Visual Studio Code.
Quickstart: Create a single database in Azure SQL
Database using an ARM template

Creating a single database is the quickest and simplest option for creating a database in Azure SQL Database.
This quickstart shows you how to create a single database using an Azure Resource Manager template (ARM
template).
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment
without writing the sequence of programming commands to create the deployment.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to
Azure button. The template will open in the Azure portal.

Prerequisites
If you don't have an Azure subscription, create a free account.

Review the template


A single database has a defined set of compute, memory, IO, and storage resources using one of two purchasing
models. When you create a single database, you also define a server to manage it and place it within an Azure
resource group in a specified region.
The template used in this quickstart is from Azure Quickstart Templates.

{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.5.6.12127",
"templateHash": "17606057535442789180"
}
},
"parameters": {
"serverName": {
"type": "string",
"defaultValue": "[uniqueString('sql', resourceGroup().id)]",
"metadata": {
"description": "The name of the SQL logical server."
}
},
"sqlDBName": {
"type": "string",
"defaultValue": "SampleDB",
"metadata": {
"description": "The name of the SQL Database."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"administratorLogin": {
"type": "string",
"metadata": {
"description": "The administrator username of the SQL logical server."
}
},
"administratorLoginPassword": {
"type": "secureString",
"metadata": {
"description": "The administrator password of the SQL logical server."
}
}
},
"resources": [
{
"type": "Microsoft.Sql/servers",
"apiVersion": "2021-08-01-preview",
"name": "[parameters('serverName')]",
"location": "[parameters('location')]",
"properties": {
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]"
}
},
{
"type": "Microsoft.Sql/servers/databases",
"apiVersion": "2021-08-01-preview",
"name": "[format('{0}/{1}', parameters('serverName'), parameters('sqlDBName'))]",
"location": "[parameters('location')]",
"sku": {
"name": "Standard",
"tier": "Standard"
},
"dependsOn": [
"[resourceId('Microsoft.Sql/servers', parameters('serverName'))]"
]
}
]
}

These resources are defined in the template:


Microsoft.Sql/servers
Microsoft.Sql/servers/databases
More Azure SQL Database template samples can be found in Azure Quickstart Templates.

Deploy the template


Select Try it from the following PowerShell code block to open Azure Cloud Shell.
$projectName = Read-Host -Prompt "Enter a project name that is used for generating resource names"
$location = Read-Host -Prompt "Enter an Azure location (i.e. centralus)"
$adminUser = Read-Host -Prompt "Enter the SQL server administrator username"
$adminPassword = Read-Host -Prompt "Enter the SQL server administrator password" -AsSecureString

$resourceGroupName = "${projectName}rg"

New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.sql/sql-database/azuredeploy.json" -administratorLogin $adminUser -administratorLoginPassword $adminPassword

Read-Host -Prompt "Press [ENTER] to continue ..."

Validate the deployment


To query the database, see Query the database.

Clean up resources
Keep this resource group, server, and single database if you want to go to the Next steps. The next steps show
you how to connect and query your database using different methods.
To delete the resource group:

$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"


Remove-AzResourceGroup -Name $resourceGroupName

Next steps
Create a server-level firewall rule to connect to the single database from on-premises or remote tools. For
more information, see Create a server-level firewall rule.
After you create a server-level firewall rule, connect and query your database using several different tools
and languages.
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
To create a single database using the Azure CLI, see Azure CLI samples.
To create a single database using Azure PowerShell, see Azure PowerShell samples.
To learn how to create ARM templates, see Create your first template.
Quickstart: Create a database in Azure SQL
Database with ledger enabled

APPLIES TO: Azure SQL Database


In this quickstart, you create a ledger database in Azure SQL Database and configure automatic digest storage
by using the Azure portal.

Prerequisite
You need an active Azure subscription. If you don't have one, create a free account.

Create a ledger database and configure digest storage


Create a single ledger database in the serverless compute tier, and configure uploading ledger digests to an
Azure Storage account.
Portal
The Azure CLI
PowerShell

To create a single database in the Azure portal, this quickstart starts at the Azure SQL page.
1. Browse to the Select SQL Deployment option page.
2. Under SQL databases , leave Resource type set to Single database , and select Create .

3. On the Basics tab of the Create SQL Database form, under Project details , select the Azure
subscription you want to use.
4. For Resource group , select Create new , enter myResourceGroup , and select OK .
5. For Database name , enter demo .
6. For Server, select Create new. Fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an
exact server name to use because server names must be globally unique for all servers in Azure, not
just unique within a subscription. Enter something like mysqlserver12345, and the portal lets you
know if it's available or not.
Server admin login: Enter azureuser.
Password: Enter a password that meets requirements. Enter it again in the Confirm password box.
Location: Select a location from the dropdown list.
Allow Azure services to access this server: Select this option to enable access to digest storage.
Select OK .
7. Leave Want to use SQL elastic pool set to No .
8. Under Compute + storage , select Configure database .
9. This quickstart uses a serverless database, so select Serverless, and then select Apply.

10. On the Networking tab, for Connectivity method , select Public endpoint .
11. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
12. Select Next: Security at the bottom of the page.
13. On the Security tab, in the Ledger section, select the Configure ledger option.

14. On the Configure ledger pane, in the Ledger section, select the Enable for all future tables in this
database checkbox. This setting ensures that all future tables in the database will be ledger tables, so
any tampering with data in the database can be detected. By default, new tables will be created
as updatable ledger tables, even if you don't specify LEDGER = ON in CREATE TABLE. You can also
leave this option unselected. You're then required to enable ledger functionality on a per-table basis when
you create new tables by using Transact-SQL.
15. In the Digest Storage section, Enable automatic digest storage is automatically selected. Then, a
new Azure Storage account and container where your digests are stored is created.
16. Select Apply .

17. Select Review + create at the bottom of the page.


18. On the Review + create page, after you review, select Create .
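As a scripted alternative, the hedged Azure CLI sketch below creates a ledger database. It assumes a recent Azure CLI that supports the --ledger-on flag on az sql db create, all names are placeholders, and it does not configure automatic digest storage, which you can still set up separately:

# Create a serverless database with ledger enabled for all future tables (placeholder names)
az sql db create --name demo --server mysqlserver12345 --resource-group myResourceGroup \
  --edition GeneralPurpose --compute-model Serverless --family Gen5 --capacity 2 \
  --ledger-on Enabled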

Clean up resources
Keep the resource group, server, and single database for the next steps. You'll learn how to use the ledger feature
of your database with different methods.
When you're finished using these resources, delete the resource group you created. This action also deletes the
server and single database within it, and the storage account.

NOTE
If you've configured and locked a time-based retention policy on the container, you need to wait until the specified
immutability period ends before you can delete the storage account.

Portal
The Azure CLI
PowerShell

To delete myResourceGroup and all its resources by using the Azure portal:
1. In the portal, search for and select Resource groups . Then select myResourceGroup from the list.
2. On the resource group page, select Delete resource group .
3. Under Type the resource group name , enter myResourceGroup , and then select Delete .

Next steps
Connect and query your database by using different tools and languages:
Create and use updatable ledger tables
Create and use append-only ledger tables
Create an Azure SQL Database server with a user-
assigned managed identity

APPLIES TO: Azure SQL Database

NOTE
If you're looking for a guide on Azure SQL Managed Instance, see Create an Azure SQL Managed Instance with a user-
assigned managed identity.

This how-to guide outlines the steps to create a logical server for Azure SQL Database with a user-assigned
managed identity. For more information on the benefits of using a user-assigned managed identity for the
server identity in Azure SQL Database, see User-assigned managed identity in Azure AD for Azure SQL.

Prerequisites
To provision a SQL Database server with a user-assigned managed identity, the SQL Server Contributor role
(or a role with greater permissions), along with an Azure RBAC role containing the following action is
required:
Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action - For example, the Managed
Identity Operator has this action.
Create a user-assigned managed identity and assign it the necessary permission to be a server or managed
instance identity. For more information, see Manage user-assigned managed identities and user-assigned
managed identity permissions for Azure SQL.
Az.Sql module 3.4 or higher is required when using PowerShell for user-assigned managed identities.
The Azure CLI 2.26.0 or higher is required to use the Azure CLI with user-assigned managed identities.
For a list of limitations and known issues with using user-assigned managed identity, see User-assigned
managed identity in Azure AD for Azure SQL

Create server configured with a user-assigned managed identity


The following steps outline the process of creating a new Azure SQL Database logical server and a new database
with a user-assigned managed identity assigned.

NOTE
Multiple user-assigned managed identities can be added to the server, but only one identity can be the primary identity
at any given time. In this example, the system-assigned managed identity is disabled, but it can be enabled as well.

Portal
The Azure CLI
PowerShell
REST API
ARM Template

1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL databases , leave Resource type set to Single database , and select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name enter your desired database name.
7. For Ser ver , select Create new , and fill out the New ser ver form with the following values:
Server name: Enter a unique server name. Server names must be globally unique for all servers in
Azure, not just unique within a subscription.
Server admin login: Enter an admin login name, for example: azureuser.
Password : Enter a password that meets the password requirements, and enter it again in the
Confirm password field.
Location : Select a location from the dropdown list
8. Select Next: Networking at the bottom of the page.
9. On the Networking tab, for Connectivity method , select Public endpoint .
10. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
11. Select Next: Security at the bottom of the page.
12. On the Security tab, under Identity , select Configure Identities .

13. On the Identity blade, under User assigned managed identity , select Add . Select the desired
Subscription and then under User assigned managed identities select the desired user assigned
managed identity from the selected subscription. Then select the Select button.
14. Under Primary identity, select the same user-assigned managed identity selected in the previous step.
NOTE
If the system-assigned managed identity is the primary identity, the Primary identity field must be empty.

15. Select Apply


16. Select Review + create at the bottom of the page
17. On the Review + create page, after reviewing, select Create .
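For a scripted alternative, the following hedged Azure CLI sketch creates a logical server with a user-assigned managed identity as its primary identity. The identity resource ID, server name, region, and credentials are placeholders, and the parameter names assume Azure CLI 2.26.0 or later:

# Resource ID of an existing user-assigned managed identity (placeholder)
identity="/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myUmi"

# Create the logical server with the user-assigned identity set as the primary identity
az sql server create --name mysqlserver12345 --resource-group myResourceGroup \
  --location eastus --admin-user azureuser --admin-password "<complex-password>" \
  --assign-identity --identity-type UserAssigned \
  --user-assigned-identity-id $identity --primary-user-assigned-identity-id $identity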

See also
User-assigned managed identity in Azure AD for Azure SQL
Create an Azure SQL Managed Instance with a user-assigned managed identity.
Quickstart: Create a server-level firewall rule in
Azure portal

APPLIES TO: Azure SQL Database


This quickstart describes how to create a server-level firewall rule in Azure SQL Database. Firewall rules can give
access to logical SQL servers, single databases, and elastic pools and their databases. Firewall rules are also
needed to connect on-premises and other Azure resources to databases. Server-level firewall rules do not apply
to Azure SQL Managed Instance.

Prerequisites
We will use the resources developed in Create a single database using the Azure portal as a starting point for
this tutorial.

Sign in to Azure portal


Sign in to Azure portal.

Create a server-level IP-based firewall rule


Azure SQL Database creates a firewall at the server level for single and pooled databases. This firewall blocks
connections from IP addresses that do not have permission. To connect to an Azure SQL database from an IP
address outside of Azure, you need to create a firewall rule. You can use rules to open a firewall for a specific IP
address or for a range of IP addresses. For more information about server-level and database-level firewall
rules, see Server-level and database-level IP-based firewall rules.

NOTE
Azure SQL Database communicates over port 1433. When you connect from within a corporate network, outbound
traffic over port 1433 may not be permitted by your network firewall. This means your IT department needs to open port
1433 for you to connect to your server.

IMPORTANT
A firewall rule of 0.0.0.0 enables all Azure services to pass through the server-level firewall rule and attempt to connect to
a database through the server.

We'll use the following steps to create a server-level IP-based firewall rule for a specific client IP address. This
enables external connectivity for that IP address through the Azure SQL Database firewall.
1. After the database deployment completes, select SQL databases from the left-hand menu and then
select mySampleDatabase on the SQL databases page. The overview page for your database opens. It
displays the fully qualified server name (such as mynewserver-20170824.database.windows.net)
and provides options for further configuration.
2. Copy the fully qualified server name. You will use it when you connect to your server and its databases in
other quickstarts.
3. Select Set server firewall on the toolbar. The Firewall settings page for the server opens.

4. Choose Add client IP on the toolbar to add your current IP address to a new, server-level, firewall rule.
This rule can open Port 1433 for a single IP address or for a range of IP addresses.

IMPORTANT
By default, access through the Azure SQL Database firewall is disabled for all Azure services. Choose ON on this
page to enable access for all Azure services.

5. Select Save. Port 1433 is now open on the server and a server-level IP-based firewall rule is created for
your current IP address.
6. Close the Firewall settings page.
Open SQL Server Management Studio or another tool of your choice. Use the server admin account you created
earlier to connect to the server and its databases from your IP address.
7. Save the resources from this quickstart to complete additional SQL database tutorials.
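If you prefer scripting, a hedged Azure CLI sketch for the same kind of server-level rule follows. The rule names, IP address, and resource names are placeholders you would replace with your own values:

# Add a server-level firewall rule for a single client IP address (placeholder values)
az sql server firewall-rule create --resource-group myResourceGroup --server mysqlserver12345 \
  --name AllowMyClientIP --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5

# Optionally allow access from Azure services, equivalent to the 0.0.0.0 rule described above
az sql server firewall-rule create --resource-group myResourceGroup --server mysqlserver12345 \
  --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0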

Clean up resources
Use the following steps to delete the resources that you created during this quickstart:
1. From the left-hand menu in Azure portal, select Resource groups and then select myResourceGroup .
2. On your resource group page, select Delete , type myResourceGroup in the text box, and then select
Delete .

Next steps
Learn how to connect and query your database using your favorite tools or languages, including:
Connect and query using SQL Server Management Studio
Connect and query using Azure Data Studio
Learn how to design your first database, create tables, and insert data, see one of these tutorials:
Design your first single database in Azure SQL Database using SSMS
Design a single database in Azure SQL Database and connect with C# and ADO.NET
Quickstart: Create a local development environment
for Azure SQL Database

APPLIES TO: Azure SQL Database


The Azure SQL Database local development experience provides a way to design, edit, build/validate, publish
and run database schemas in a local Azure SQL Database emulator. With the Database Projects feature,
developers are able to easily publish Database Projects to the Azure SQL Database public service from their
local environment, as well as manage the entire lifecycle of their databases (for example, manage schema drifts
and such). This Quickstart teaches you the entire workflow that leverages the Azure SQL Database local
development experience.

Prerequisites
To complete this Quickstart, you must first Set up a local development environment for Azure SQL Database.

Create a blank project


To get started, either create a blank Database Project, or open an existing project. The steps in this section help
you create a new blank project, but you can open an existing project by going to the Projects view or by
searching for Database Projects: Open Existing in the command palette. You can also start from an existing
database by selecting Create Project from Database from the command palette or database context menu. Finally, you
can start from an OpenAPI/Swagger spec by using the Database Projects: Generate SQL Project from
OpenAPI/Swagger spec command in the command palette.
The steps for creating a new project using Visual Studio Code, or Azure Data Studio are the same. To create a
blank project, follow these steps:
1. Open your choice of developer tool, either Azure Data Studio, or Visual Studio Code.
2. Select Projects and then choose to create a new Database Project. Alternatively, search for Database
Projects: New in the command palette.
3. Choose SQL Database as your project type.

4. Provide a name for the new SQL Database Project.


5. Select the SDK-style SQL Database Project project. (The SDK-style SQL project is recommended for being
more concise and manageable when working with multiple developers on a team's repository.)

6. To set the target platform for your project, right-click the Database Project name and choose Change
Target Platform . Select Azure SQL Database as the target platform for your project.

Setting your target platform provides editing and build time support for your SQL Database Project
objects and scripts. After selecting your target platform, Visual Studio Code highlights syntax issues or
indicates the select platform is using unsupported features.
Optionally, SQL Database Project files can be put under source control together with your application
projects.
7. Add objects to your Database Project. You can create or alter database objects such as tables, views,
stored procedures and scripts. For example, right-click the Database Project name and select Add
Table to add a table.
8. Build your Database Project to validate that it will work against the Azure SQL Database platform. To build
your project, right-click the Database Project name and select Build .

9. Once your Database Project is ready to be tested, publish it to a target. To begin the publishing process,
right-click on the name of your Database Project and select Publish .
10. When publishing, you can choose to publish to either a new or existing server. In this example, we choose
Publish to a new Azure SQL Database emulator .

11. When publishing to a new Azure SQL Database emulator, you are prompted to choose between Lite and
Full images. The Lite image has compatibility with most Azure SQL Database capabilities and is a
lightweight image that takes less time to download and instantiate. The Full image gives you access to
advanced features like in-memory optimized tables, geo-spatial data types and more, but requires more
resources.

You can create as many local instances as necessary based on available resources, and manage their
lifecycle through the Visual Studio Code Docker Extension or CLI commands.
12. Once instances of your Database Projects are running, you can connect from the Visual Studio Code
mssql extension and test your scripts and queries, like any regular database in Azure SQL Database.

13. Rebuild and deploy your Database project to one of the containerized instances running on your local
machine with each iteration of adding or modifying objects in your Database Project, until it’s ready.

14. The final step of the Database Project lifecycle is to publish the finished artifact to a new or existing
database in Azure SQL Database using the mssql extension. Right-click the Database Project name and
choose to Publish . Then select the destination where you want to publish your project, such as a new or
existing logical server in Azure.
Next steps
Learn more about the local development experience for Azure SQL Database:
Set up a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Publish a Database Project for Azure SQL Database to the local emulator
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator
Use GitHub Actions to connect to Azure SQL
Database

Get started with GitHub Actions by using a workflow to deploy database updates to Azure SQL Database.

Prerequisites
You will need:
An Azure account with an active subscription. Create an account for free.
A GitHub repository with a dacpac package ( Database.dacpac ). If you don't have a GitHub account, sign up
for free.
An Azure SQL Database.
Quickstart: Create an Azure SQL Database single database
How to create a dacpac package from your existing SQL Server Database

Workflow file overview


A GitHub Actions workflow is defined by a YAML (.yml) file in the /.github/workflows/ path in your repository.
This definition contains the various steps and parameters that make up the workflow.
The file has two sections:

Section | Tasks

Authentication 1. Generate deployment credentials.

Deploy 1. Deploy the database.

Generate deployment credentials


Service principal
OpenID Connect

You can create a service principal with the az ad sp create-for-rbac command in the Azure CLI. Run this
command with Azure Cloud Shell in the Azure portal or by selecting the Tr y it button.
Replace the placeholders server-name with the name of your SQL server hosted on Azure. Replace the
subscription-id and resource-group with the subscription ID and resource group connected to your SQL
server.

az ad sp create-for-rbac --name {server-name} --role contributor \
   --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
   --sdk-auth

The output is a JSON object with the role assignment credentials that provide access to your database similar to
this example. Copy your output JSON object for later.
{
"clientId": "<GUID>",
"clientSecret": "<GUID>",
"subscriptionId": "<GUID>",
"tenantId": "<GUID>",
(...)
}

IMPORTANT
It is always a good practice to grant minimum access. The scope in the previous example is limited to the specific server
and not the entire resource group.

Copy the SQL connection string


In the Azure portal, go to your Azure SQL Database and open Settings > Connection strings . Copy the
ADO.NET connection string. Replace the placeholder values for your_database and your_password . The
connection string will look similar to this output.

Server=tcp:my-sql-server.database.windows.net,1433;Initial Catalog={your-database};Persist Security Info=False;User ID={admin-name};Password={your-password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;

You'll use the connection string as a GitHub secret.
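If you prefer to retrieve a connection string template from the command line, the following Azure CLI sketch is one hedged option. The server and database names are placeholders, and you still substitute your own user ID and password before storing the secret:

# Print an ADO.NET connection string template for the database (placeholder names)
az sql db show-connection-string --client ado.net --server my-sql-server --name my-database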

Configure the GitHub secrets


Service principal
OpenID Connect

1. In GitHub, browse your repository.


2. Select Settings > Secrets > New secret .
3. Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret
the name AZURE_CREDENTIALS .
When you configure the workflow file later, you use the secret for the input creds of the Azure Login
action. For example:

- uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}

4. Select New secret again.


5. Paste the connection string value into the secret's value field. Give the secret the name
AZURE_SQL_CONNECTION_STRING .

Add your workflow


1. Go to Actions for your GitHub repository.
2. Select Set up your workflow yourself .
3. Delete everything after the on: section of your workflow file. For example, your remaining workflow
may look like this.

name: CI

on:
push:
branches: [ main ]
pull_request:
branches: [ main ]

4. Rename your workflow SQL for GitHub Actions and add the checkout and login actions. These actions
will check out your site code and authenticate with Azure using the AZURE_CREDENTIALS GitHub secret you
created earlier.

Service principal
OpenID Connect

name: SQL for GitHub Actions

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

5. Use the Azure SQL Deploy action to connect to your SQL instance. Replace SQL_SERVER_NAME with the
name of your server. You should have a dacpac package (Database.dacpac) at the root level of your
repository.

- uses: azure/sql-action@v1
  with:
    server-name: SQL_SERVER_NAME
    connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
    dacpac-package: './Database.dacpac'

6. Complete your workflow by adding an action to log out of Azure. Here's the completed workflow. The file
will appear in the .github/workflows folder of your repository.

Service principal
OpenID Connect
name: SQL for GitHub Actions

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - uses: azure/sql-action@v1
        with:
          server-name: SQL_SERVER_NAME
          connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
          dacpac-package: './Database.dacpac'

      # Azure logout
      - name: logout
        run: |
          az logout

Review your deployment


1. Go to Actions for your GitHub repository.
2. Open the first result to see detailed logs of your workflow's run.

Clean up resources
When your Azure SQL database and repository are no longer needed, clean up the resources you deployed by
deleting the resource group and your GitHub repository.

Next steps
Learn about Azure and GitHub integration
Tutorial: Design a relational database in Azure SQL
Database using SSMS
7/12/2022 • 8 minutes to read

APPLIES TO: Azure SQL Database


Azure SQL Database is a relational database-as-a-service (DBaaS) in the Microsoft Cloud (Azure). In this tutorial,
you learn how to use the Azure portal and SQL Server Management Studio (SSMS) to:
Create a database using the Azure portal*
Set up a server-level IP firewall rule using the Azure portal
Connect to the database with SSMS
Create tables with SSMS
Bulk load data with BCP
Query data with SSMS
*If you don't have an Azure subscription, create a free account before you begin.

TIP
The following Microsoft Learn module helps you learn for free how to Develop and configure an ASP.NET application that
queries an Azure SQL Database, including the creation of a simple database.

NOTE
For the purpose of this tutorial, we are using Azure SQL Database. You could also use a pooled database in an elastic pool
or a SQL Managed Instance. For connectivity to a SQL Managed Instance, see these SQL Managed Instance quickstarts:
Quickstart: Configure Azure VM to connect to an Azure SQL Managed Instance and Quickstart: Configure a point-to-site
connection to an Azure SQL Managed Instance from on-premises.

Prerequisites
To complete this tutorial, make sure you've installed:
SQL Server Management Studio (latest version)
BCP and SQLCMD (latest version)

Sign in to the Azure portal


Sign in to the Azure portal.

Create a blank database in Azure SQL Database


A database in Azure SQL Database is created with a defined set of compute and storage resources. The database
is created within an Azure resource group and is managed using a logical SQL server.
Follow these steps to create a blank database.
1. On the Azure portal menu or from the Home page, select Create a resource .
2. On the New page, select Databases in the Azure Marketplace section, and then click SQL Database in
the Featured section.

3. Fill out the SQL Database form with the following information:

Setting | Suggested value | Description
--- | --- | ---
Database name | yourDatabase | For valid database names, see Database identifiers.
Subscription | yourSubscription | For details about your subscriptions, see Subscriptions.
Resource group | yourResourceGroup | For valid resource group names, see Naming rules and restrictions.
Select source | Blank database | Specifies that a blank database should be created.

4. Click Server to use an existing server or create and configure a new server. Either select an existing server or click Create a new server and fill out the New server form with the following information:

Setting | Suggested value | Description
--- | --- | ---
Server name | Any globally unique name | For valid server names, see Naming rules and restrictions.
Server admin login | Any valid name | For valid login names, see Database identifiers.
Password | Any valid password | Your password must have at least eight characters and must use characters from three of the following categories: uppercase characters, lowercase characters, numbers, and non-alphanumeric characters.
Location | Any valid location | For information about regions, see Azure Regions.

5. Click Select .
6. Click Pricing tier to specify the service tier, the number of DTUs or vCores, and the amount of storage.
You may explore the options for the number of DTUs/vCores and storage that is available to you for each
service tier.
After selecting the service tier, the number of DTUs or vCores, and the amount of storage, click Apply .
7. Enter a Collation for the blank database (for this tutorial, use the default value). For more information
about collations, see Collations
8. Now that you've completed the SQL Database form, click Create to provision the database. This step
may take a few minutes.
9. On the toolbar, click Notifications to monitor the deployment process.
Create a server-level IP firewall rule
Azure SQL Database creates an IP firewall at the server level. This firewall prevents external applications and
tools from connecting to the server and any databases on the server unless a firewall rule allows their IP
through the firewall. To enable external connectivity to your database, you must first add an IP firewall rule for
your IP address (or IP address range). Follow these steps to create a server-level IP firewall rule.

IMPORTANT
Azure SQL Database communicates over port 1433. If you are trying to connect to this service from within a corporate
network, outbound traffic over port 1433 may not be allowed by your network's firewall. If so, you cannot connect to
your database unless your administrator opens port 1433.

1. After the deployment completes, select SQL databases from the Azure portal menu or search for and
select SQL databases from any page.
2. Select yourDatabase on the SQL databases page. The overview page for your database opens, showing you the fully qualified Server name (such as contosodatabaseserver01.database.windows.net) and provides options for further configuration.

3. Copy this fully qualified server name to use when connecting to your server and databases from SQL Server Management Studio.
4. Click Set server firewall on the toolbar. The Firewall settings page for the server opens.
5. Click Add client IP on the toolbar to add your current IP address to a new IP firewall rule. An IP firewall
rule can open port 1433 for a single IP address or a range of IP addresses.
6. Click Save . A server-level IP firewall rule is created for your current IP address opening port 1433 on the
server.
7. Click OK and then close the Firewall settings page.
Your IP address can now pass through the IP firewall. You can now connect to your database using SQL Server
Management Studio or another tool of your choice. Be sure to use the server admin account you created
previously.

IMPORTANT
By default, access through the SQL Database IP firewall is enabled for all Azure services. Click OFF on this page to disable
access for all Azure services.
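If you'd rather script the firewall rule than use the portal, it can also be created with the Az PowerShell module. The following is a minimal sketch under assumed names; replace the resource group, server, and IP address placeholders with your own values.

# Minimal sketch: create a server-level firewall rule for your client IP (placeholder values)
New-AzSqlServerFirewallRule -ResourceGroupName "yourResourceGroup" `
    -ServerName "yourserver" `
    -FirewallRuleName "AllowMyClientIP" `
    -StartIpAddress "203.0.113.10" `
    -EndIpAddress "203.0.113.10"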

Connect to the database


Use SQL Server Management Studio to establish a connection to your database.
1. Open SQL Server Management Studio.
2. In the Connect to Server dialog box, enter the following information:

Setting | Suggested value | Description
--- | --- | ---
Server type | Database engine | This value is required.
Server name | The fully qualified server name | For example, yourserver.database.windows.net.
Authentication | SQL Server Authentication | SQL Authentication is the only authentication type that we've configured in this tutorial.
Login | The server admin account | The account that you specified when you created the server.
Password | The password for your server admin account | The password that you specified when you created the server.

3. Click Options in the Connect to server dialog box. In the Connect to database section, enter yourDatabase to connect to this database.

4. Click Connect . The Object Explorer window opens in SSMS.


5. In Object Explorer , expand Databases and then expand yourDatabase to view the objects in the sample
database.
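If you want a quick connectivity check outside SSMS, a PowerShell sketch like the following can run a test query. It assumes the SqlServer module (which provides Invoke-Sqlcmd) is installed; the server, database, and credential values are placeholders.

# Minimal sketch: verify connectivity to the target database (requires the SqlServer PowerShell module)
Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" `
    -Database "yourDatabase" `
    -Username "yourAdminLogin" `
    -Password "yourPassword" `
    -Query "SELECT DB_NAME() AS CurrentDatabase;"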
Create tables in your database
Create a database schema with four tables that model a student management system for universities using
Transact-SQL:
Person
Course
Student
Credit
The following diagram shows how these tables are related to each other. Some of these tables reference
columns in other tables. For example, the Student table references the PersonId column of the Person table.
Study the diagram to understand how the tables in this tutorial are related to one another. For an in-depth look
at how to create effective database tables, see Create effective database tables. For information about choosing
data types, see Data types.

NOTE
You can also use the table designer in SQL Server Management Studio to create and design your tables.
1. In Object Explorer , right-click yourDatabase and select New Quer y . A blank query window opens that
is connected to your database.
2. In the query window, execute the following query to create four tables in your database:

-- Create Person table
CREATE TABLE Person
(
PersonId INT IDENTITY PRIMARY KEY,
FirstName NVARCHAR(128) NOT NULL,
MiddleInitial NVARCHAR(10),
LastName NVARCHAR(128) NOT NULL,
DateOfBirth DATE NOT NULL
)

-- Create Student table
CREATE TABLE Student
(
StudentId INT IDENTITY PRIMARY KEY,
PersonId INT REFERENCES Person (PersonId),
Email NVARCHAR(256)
)

-- Create Course table
CREATE TABLE Course
(
CourseId INT IDENTITY PRIMARY KEY,
Name NVARCHAR(50) NOT NULL,
Teacher NVARCHAR(256) NOT NULL
)

-- Create Credit table
CREATE TABLE Credit
(
StudentId INT REFERENCES Student (StudentId),
CourseId INT REFERENCES Course (CourseId),
Grade DECIMAL(5,2) CHECK (Grade <= 100.00),
Attempt TINYINT,
CONSTRAINT [UQ_studentgrades] UNIQUE CLUSTERED
(
StudentId, CourseId, Grade, Attempt
)
)
3. Expand the Tables node under yourDatabase in the Object Explorer to see the tables you created.

Load data into the tables


1. Create a folder called sampleData in your Downloads folder to store sample data for your database.
2. Right-click the following links and save them into the sampleData folder.
SampleCourseData
SamplePersonData
SampleStudentData
SampleCreditData
3. Open a command prompt window and navigate to the sampleData folder.
4. Execute the following commands to insert sample data into the tables, replacing the values for server,
database, user, and password with the values for your environment.

bcp Course in SampleCourseData -S <server>.database.windows.net -d <database> -U <user> -P <password> -q -c -t ","
bcp Person in SamplePersonData -S <server>.database.windows.net -d <database> -U <user> -P <password> -q -c -t ","
bcp Student in SampleStudentData -S <server>.database.windows.net -d <database> -U <user> -P <password> -q -c -t ","
bcp Credit in SampleCreditData -S <server>.database.windows.net -d <database> -U <user> -P <password> -q -c -t ","

You have now loaded sample data into the tables you created earlier.

Query data
Execute the following queries to retrieve information from the database tables. See Write SQL queries to learn
more about writing SQL queries. The first query joins all four tables to find the students taught by 'Dominick
Pope' who have a grade higher than 75%. The second query joins all four tables and finds the courses in which
'Noe Coleman' has ever enrolled.
1. In a SQL Server Management Studio query window, execute the following query:

-- Find the students taught by Dominick Pope who have a grade higher than 75%
SELECT person.FirstName, person.LastName, course.Name, credit.Grade
FROM Person AS person
INNER JOIN Student AS student ON person.PersonId = student.PersonId
INNER JOIN Credit AS credit ON student.StudentId = credit.StudentId
INNER JOIN Course AS course ON credit.CourseId = course.courseId
WHERE course.Teacher = 'Dominick Pope'
AND Grade > 75

2. In a query window, execute the following query:

-- Find all the courses in which Noe Coleman has ever enrolled
SELECT course.Name, course.Teacher, credit.Grade
FROM Course AS course
INNER JOIN Credit AS credit ON credit.CourseId = course.CourseId
INNER JOIN Student AS student ON student.StudentId = credit.StudentId
INNER JOIN Person AS person ON person.PersonId = student.PersonId
WHERE person.FirstName = 'Noe'
AND person.LastName = 'Coleman'

Next steps
In this tutorial, you learned many basic database tasks. You learned how to:
Create a database using the Azure portal*
Set up a server-level IP firewall rule using the Azure portal
Connect to the database with SSMS
Create tables with SSMS
Bulk load data with BCP
Query data with SSMS
Advance to the next tutorial to learn about designing a database using Visual Studio and C#.
Design a relational database within Azure SQL Database C# and ADO.NET
Tutorial: Design a relational database in Azure SQL
Database C# and ADO.NET
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database


Azure SQL Database is a relational database-as-a-service (DBaaS) in the Microsoft Cloud (Azure). In this tutorial,
you learn how to use the Azure portal and ADO.NET with Visual Studio to:
Create a database using the Azure portal
Set up a server-level IP firewall rule using the Azure portal
Connect to the database with ADO.NET and Visual Studio
Create tables with ADO.NET
Insert, update, and delete data with ADO.NET
Query data with ADO.NET
*If you don't have an Azure subscription, create a free account before you begin.

TIP
The following Microsoft Learn module helps you learn for free how to Develop and configure an ASP.NET application that
queries an Azure SQL Database, including the creation of a simple database.

Prerequisites
An installation of Visual Studio 2019 or later.

Create a blank database in Azure SQL Database


A database in Azure SQL Database is created with a defined set of compute and storage resources. The database
is created within an Azure resource group and is managed using a logical SQL server.
Follow these steps to create a blank database.
1. Click Create a resource in the upper left-hand corner of the Azure portal.
2. On the New page, select Databases in the Azure Marketplace section, and then click SQL Database in
the Featured section.
3. Fill out the SQL Database form with the following information:

Setting | Suggested value | Description
--- | --- | ---
Database name | yourDatabase | For valid database names, see Database identifiers.
Subscription | yourSubscription | For details about your subscriptions, see Subscriptions.
Resource group | yourResourceGroup | For valid resource group names, see Naming rules and restrictions.
Select source | Blank database | Specifies that a blank database should be created.

4. Click Server to use an existing server or create and configure a new server. Either select an existing server or click Create a new server and fill out the New server form with the following information:

Setting | Suggested value | Description
--- | --- | ---
Server name | Any globally unique name | For valid server names, see Naming rules and restrictions.
Server admin login | Any valid name | For valid login names, see Database identifiers.
Password | Any valid password | Your password must have at least eight characters and must use characters from three of the following categories: uppercase characters, lowercase characters, numbers, and non-alphanumeric characters.
Location | Any valid location | For information about regions, see Azure Regions.

5. Click Select .
6. Click Pricing tier to specify the service tier, the number of DTUs or vCores, and the amount of storage.
You may explore the options for the number of DTUs/vCores and storage that is available to you for each
service tier.
After selecting the service tier, the number of DTUs or vCores, and the amount of storage, click Apply .
7. Enter a Collation for the blank database (for this tutorial, use the default value). For more information
about collations, see Collations
8. Now that you've completed the SQL Database form, click Create to provision the database. This step
may take a few minutes.
9. On the toolbar, click Notifications to monitor the deployment process.
Create a server-level IP firewall rule
SQL Database creates an IP firewall at the server level. This firewall prevents external applications and tools
from connecting to the server and any databases on the server unless a firewall rule allows their IP through the
firewall. To enable external connectivity to your database, you must first add an IP firewall rule for your IP
address (or IP address range). Follow these steps to create a server-level IP firewall rule.

IMPORTANT
SQL Database communicates over port 1433. If you are trying to connect to this service from within a corporate
network, outbound traffic over port 1433 may not be allowed by your network's firewall. If so, you cannot connect to
your database unless your administrator opens port 1433.

1. After the deployment is complete, click SQL databases from the left-hand menu and then click yourDatabase on the SQL databases page. The overview page for your database opens, showing you the fully qualified Server name (such as yourserver.database.windows.net) and provides options for further configuration.
2. Copy this fully qualified server name to use when connecting to your server and databases from SQL Server Management Studio.

3. Click Set server firewall on the toolbar. The Firewall settings page for the server opens.
4. Click Add client IP on the toolbar to add your current IP address to a new IP firewall rule. An IP firewall
rule can open port 1433 for a single IP address or a range of IP addresses.
5. Click Save . A server-level IP firewall rule is created for your current IP address opening port 1433 on the
server.
6. Click OK and then close the Firewall settings page.
Your IP address can now pass through the IP firewall. You can now connect to your database using SQL Server
Management Studio or another tool of your choice. Be sure to use the server admin account you created
previously.

IMPORTANT
By default, access through the SQL Database IP firewall is enabled for all Azure services. Click OFF on this page to disable
access for all Azure services.

C# program example
The next sections of this article present a C# program that uses ADO.NET to send Transact-SQL (T-SQL)
statements to SQL Database. The C# program demonstrates the following actions:
Connect to SQL Database using ADO.NET
Methods that return T-SQL statements
Create tables
Populate tables with data
Update, delete, and select data
Submit T-SQL to the database
Entity Relationship Diagram (ERD)
The CREATE TABLE statements use the REFERENCES keyword to create a foreign key (FK) relationship
between two tables. If you're using tempdb, comment out the REFERENCES keyword by adding a pair of leading
dashes (--).
The ERD displays the relationship between the two tables. The values in the tabEmployee.Depar tmentCode
child column are limited to values from the tabDepar tment.Depar tmentCode parent column.
NOTE
You have the option of editing the T-SQL to add a leading # to the table names, which creates them as temporary
tables in tempdb. This is useful for demonstration purposes when no test database is available. References to foreign
keys are not enforced on temporary tables, and temporary tables are deleted automatically when the connection closes after
the program finishes running.

To compile and run


The C# program is logically one .cs file, and is physically divided into several code blocks, to make each block
easier to understand. To compile and run the program, do the following steps:
1. Create a C# project in Visual Studio. The project type should be a Console, found under Templates >
Visual C# > Windows Desktop > Console App (.NET Framework) .
2. In the file Program.cs, replace the starter code by following these steps:
a. Copy and paste the code blocks in the same sequence they're presented: Connect to SQL Database
using ADO.NET, Methods that return T-SQL statements, and Submit T-SQL to the database.
b. Change the following values in the Main method:
cb.DataSource
cb.UserID
cb.Password
cb.InitialCatalog
3. Verify the assembly System.Data.dll is referenced. To verify, expand the References node in the Solution
Explorer pane.
4. To build and run the program from Visual Studio, select the Start button. The report output is displayed
in a program window, though GUID values will vary between test runs.
=================================
T-SQL to 2 - Create-Tables...
-1 = rows affected.

=================================
T-SQL to 3 - Inserts...
8 = rows affected.

=================================
T-SQL to 4 - Update-Join...
2 = rows affected.

=================================
T-SQL to 5 - Delete-Join...
2 = rows affected.

=================================
Now, SelectEmployees (6)...
8ddeb8f5-9584-4afe-b7ef-d6bdca02bd35 , Alison , 20 , acct , Accounting
9ce11981-e674-42f7-928b-6cc004079b03 , Barbara , 17 , hres , Human Resources
315f5230-ec94-4edd-9b1c-dd45fbb61ee7 , Carol , 22 , acct , Accounting
fcf4840a-8be3-43f7-a319-52304bf0f48d , Elle , 15 , NULL , NULL
View the report output here, then press any key to end the program...

Connect to SQL Database using ADO.NET


using System;
using System.Data.SqlClient; // System.Data.dll
//using System.Data; // For: SqlDbType , ParameterDirection

namespace csharp_db_test
{
class Program
{
static void Main(string[] args)
{
try
{
var cb = new SqlConnectionStringBuilder();
cb.DataSource = "your_server.database.windows.net";
cb.UserID = "your_user";
cb.Password = "your_password";
cb.InitialCatalog = "your_database";

using (var connection = new SqlConnection(cb.ConnectionString))
{
connection.Open();

Submit_Tsql_NonQuery(connection, "2 - Create-Tables", Build_2_Tsql_CreateTables());

Submit_Tsql_NonQuery(connection, "3 - Inserts", Build_3_Tsql_Inserts());

Submit_Tsql_NonQuery(connection, "4 - Update-Join", Build_4_Tsql_UpdateJoin(),


"@csharpParmDepartmentName", "Accounting");

Submit_Tsql_NonQuery(connection, "5 - Delete-Join", Build_5_Tsql_DeleteJoin(),


"@csharpParmDepartmentName", "Legal");

Submit_6_Tsql_SelectEmployees(connection);
}
}
catch (SqlException e)
{
Console.WriteLine(e.ToString());
}

Console.WriteLine("View the report output here, then press any key to end the program...");
Console.ReadKey();
}

Methods that return T-SQL statements

static string Build_2_Tsql_CreateTables()
{
return @"
DROP TABLE IF EXISTS tabEmployee;
DROP TABLE IF EXISTS tabDepartment; -- Drop parent table last.

CREATE TABLE tabDepartment
(
DepartmentCode nchar(4) not null PRIMARY KEY,
DepartmentName nvarchar(128) not null
);

CREATE TABLE tabEmployee
(
EmployeeGuid uniqueIdentifier not null default NewId() PRIMARY KEY,
EmployeeName nvarchar(128) not null,
EmployeeLevel int not null,
DepartmentCode nchar(4) null
REFERENCES tabDepartment (DepartmentCode) -- (REFERENCES would be disallowed on temporary tables.)
);
";
}

static string Build_3_Tsql_Inserts()
{
return @"
-- The company has these departments.
INSERT INTO tabDepartment (DepartmentCode, DepartmentName)
VALUES
('acct', 'Accounting'),
('hres', 'Human Resources'),
('legl', 'Legal');

-- The company has these employees, each in one department.


INSERT INTO tabEmployee (EmployeeName, EmployeeLevel, DepartmentCode)
VALUES
('Alison' , 19, 'acct'),
('Barbara' , 17, 'hres'),
('Carol' , 21, 'acct'),
('Deborah' , 24, 'legl'),
('Elle' , 15, null);
";
}

static string Build_4_Tsql_UpdateJoin()
{
return @"
DECLARE @DName1 nvarchar(128) = @csharpParmDepartmentName; --'Accounting';

-- Promote everyone in one department (see @parm...).


UPDATE empl
SET
empl.EmployeeLevel += 1
FROM
tabEmployee as empl
INNER JOIN
tabDepartment as dept ON dept.DepartmentCode = empl.DepartmentCode
WHERE
dept.DepartmentName = @DName1;
";
}

static string Build_5_Tsql_DeleteJoin()
{
return @"
DECLARE @DName2 nvarchar(128);
SET @DName2 = @csharpParmDepartmentName; --'Legal';

-- Right size the Legal department.


DELETE empl
FROM
tabEmployee as empl
INNER JOIN
tabDepartment as dept ON dept.DepartmentCode = empl.DepartmentCode
WHERE
dept.DepartmentName = @DName2

-- Disband the Legal department.


DELETE tabDepartment
WHERE DepartmentName = @DName2;
";
}

static string Build_6_Tsql_SelectEmployees()
{
return @"
-- Look at all the final Employees.
SELECT
empl.EmployeeGuid,
empl.EmployeeName,
empl.EmployeeLevel,
empl.DepartmentCode,
dept.DepartmentName
FROM
tabEmployee as empl
LEFT OUTER JOIN
tabDepartment as dept ON dept.DepartmentCode = empl.DepartmentCode
ORDER BY
EmployeeName;
";
}

Submit T-SQL to the database


static void Submit_6_Tsql_SelectEmployees(SqlConnection connection)
{
Console.WriteLine();
Console.WriteLine("=================================");
Console.WriteLine("Now, SelectEmployees (6)...");

string tsql = Build_6_Tsql_SelectEmployees();

using (var command = new SqlCommand(tsql, connection))
{
using (SqlDataReader reader = command.ExecuteReader())
{
while (reader.Read())
{
Console.WriteLine("{0} , {1} , {2} , {3} , {4}",
reader.GetGuid(0),
reader.GetString(1),
reader.GetInt32(2),
(reader.IsDBNull(3)) ? "NULL" : reader.GetString(3),
(reader.IsDBNull(4)) ? "NULL" : reader.GetString(4));
}
}
}
}

static void Submit_Tsql_NonQuery(
SqlConnection connection,
string tsqlPurpose,
string tsqlSourceCode,
string parameterName = null,
string parameterValue = null
)
{
Console.WriteLine();
Console.WriteLine("=================================");
Console.WriteLine("T-SQL to {0}...", tsqlPurpose);

using (var command = new SqlCommand(tsqlSourceCode, connection))
{
if (parameterName != null)
{
command.Parameters.AddWithValue( // Or, use SqlParameter class.
parameterName,
parameterValue);
}
int rowsAffected = command.ExecuteNonQuery();
Console.WriteLine(rowsAffected + " = rows affected.");
}
}
} // EndOfClass
}

Next steps
In this tutorial, you learned basic database tasks such as create a database and tables, connect to the database,
load data, and run queries. You learned how to:
Create a database using the Azure portal
Set up a server-level IP firewall rule using the Azure portal
Connect to the database with ADO.NET and Visual Studio
Create tables with ADO.NET
Insert, update, and delete data with ADO.NET
Query data with ADO.NET
Advance to the next tutorial to learn about data migration.
Migrate SQL Server to Azure SQL Database offline using DMS
Tutorial: Add an Azure SQL Database to an auto-
failover group
7/12/2022 • 24 minutes to read

APPLIES TO: Azure SQL Database


A failover group is a declarative abstraction layer that allows you to group multiple geo-replicated databases.
Learn to configure a failover group for an Azure SQL Database and test failover using either the Azure portal,
PowerShell, or the Azure CLI. In this tutorial, you'll learn how to:
Create a database in Azure SQL Database
Create a failover group for the database between two servers.
Test failover.

Prerequisites
Azure portal
PowerShell
Azure CLI

To complete this tutorial, make sure you have:


An Azure subscription. Create a free account if you don't already have one.

1 - Create a database
In this step, you create a resource group, a logical SQL server, a single database that uses the AdventureWorksLT
sample data, and a server-level IP firewall rule for access to the server. You can create the database by using
Azure portal menus and screens, or by using an Azure CLI or PowerShell script in the Azure Cloud Shell.
All the methods include setting up a server-level firewall rule to allow the public IP address of the computer
you're using to access the server. For more information about creating server-level firewall rules, see Create a
server-level firewall. You can also set database-level firewall rules. See Create a database-level firewall rule.
Portal
PowerShell
Azure CLI

To create a resource group, server, and single database in the Azure portal:
1. Sign in to the portal.
2. From the Search bar, search for and select Azure SQL .
3. On the Azure SQL page, select Add .
4. On the Select SQL deployment option page, select the SQL databases tile, with Single database
under Resource type . You can view more information about the different databases by selecting Show
details .
5. Select Create .

6. On the Basics tab of the Create SQL database form, under Project details , select the correct Azure
Subscription if it isn't already selected.
7. Under Resource group , select Create new , enter myResourceGroup, and select OK .
8. Under Database details , for Database name enter mySampleDatabase.
9. For Server, select Create new, and fill out the New server form as follows:
Server name: Enter mysqlserver and add some characters for uniqueness.
Server admin login: Enter AzureAdmin.
Password: Enter a password that meets requirements, and enter it again in the Confirm password field.
Location: Drop down and choose a location, such as (US) West US.
Select OK.
Record the server admin login and password so you can sign in to the server and its databases. If you forget your login or password, you can get the login name or reset the password on the SQL server page after database creation. To open the SQL server page, select the server name on the database Overview page.
10. Under Compute + storage , if you want to reconfigure the defaults, select Configure database .
On the Configure page, you can optionally:
Change the Compute tier from Provisioned to Serverless.
Review and change the settings for vCores and Data max size .
Select Change configuration to change hardware configuration.
After making any changes, select Apply .
11. Select Next: Networking at the bottom of the page.
12. On the Networking tab, under Connectivity method , select Public endpoint .
13. Under Firewall rules , set Add current client IP address to Yes .
14. Select Next: Additional settings at the bottom of the page.
For more information about firewall settings, see Allow Azure services and resources to access this server
and Add a private endpoint.
15. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
16. Optionally, enable Microsoft Defender for SQL.
17. Optionally, set the maintenance window so planned maintenance is performed at the best time for your
database.
18. Select Review + create at the bottom of the page.
19. After reviewing settings, select Create .
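If you prefer to script these resources instead of using the portal, the following minimal sketch shows comparable steps with the Az PowerShell module. The names, location, and IP address are placeholders, and the database uses the same AdventureWorksLT sample data mentioned above.

# Minimal sketch: create the resource group, server, firewall rule, and sample database (placeholder values)
$cred = Get-Credential   # prompts for the server admin login and password

New-AzResourceGroup -Name "myResourceGroup" -Location "westus"

New-AzSqlServer -ResourceGroupName "myResourceGroup" -ServerName "mysqlserver-unique" `
    -Location "westus" -SqlAdministratorCredentials $cred

New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" -ServerName "mysqlserver-unique" `
    -FirewallRuleName "AllowMyClientIP" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"

New-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "mysqlserver-unique" `
    -DatabaseName "mySampleDatabase" -SampleName "AdventureWorksLT"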

2 - Create the failover group


In this step, you'll create a failover group between an existing server and a new server in another region.
Then you'll add the sample database to the failover group.

Azure portal
PowerShell
Azure CLI

Create your failover group and add your database to it using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select the database created in section 1, such as mySampleDatabase.
3. Failover groups can be configured at the server level. Select the name of the server under Server name to open the settings for the server.

4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.

5. On the Failover Group page, enter or select the following values, and then select Create:
Failover group name: Type in a unique failover group name, such as failovergrouptutorial.
Secondary server: Select the option to configure required settings and then choose to Create a new server. Alternatively, you can choose an already-existing server as the secondary server. After entering the following values, select Select.
Server name: Type in a unique name for the secondary server, such as mysqlsecondary.
Server admin login: Type azureuser
Password: Type a complex password that meets password requirements.
Location: Choose a location from the drop-down, such as East US. This location can't be the same location as your primary server.

NOTE
The server login and firewall settings must match those of your primary server.

Databases within the group: Once a secondary server is selected, this option becomes unlocked. Select it to Select databases to add and then choose the database you created in section 1. Adding the database to the failover group will automatically start the geo-replication process.

3 - Test failover
In this step, you will fail your failover group over to the secondary server, and then fail back using the Azure
portal.

Azure portal
PowerShell
Azure CLI

Test failover using the Azure portal.


1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select the database created in section 2, such as mySampleDatabase.
3. Select the name of the server under Server name to open the settings for the server.
4. Select Failover groups under the Settings pane and then choose the failover group you created in
section 2.

5. Review which server is primary and which server is secondary.


6. Select Failover from the task pane to fail over your failover group containing your sample database.
7. Select Yes on the warning that notifies you that TDS sessions will be disconnected.
8. Review which server is now primary and which server is secondary. If failover succeeded, the two servers
should have swapped roles.
9. Select Failover again to fail the servers back to their original roles.
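The same failover test can also be scripted. The sketch below assumes the placeholder names used in this tutorial; Switch-AzSqlDatabaseFailoverGroup is run against the secondary server, which makes it the new primary, and the second command checks that server's current role.

# Minimal sketch: fail over to the secondary server, then check its replication role (placeholder values)
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName "myResourceGroup" `
    -ServerName "mysqlsecondary" -FailoverGroupName "failovergrouptutorial"

(Get-AzSqlDatabaseFailoverGroup -ResourceGroupName "myResourceGroup" `
    -ServerName "mysqlsecondary" -FailoverGroupName "failovergrouptutorial").ReplicationRole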

Clean up resources
Clean up resources by deleting the resource group.

Azure portal
PowerShell
Azure CLI

Delete the resource group using the Azure portal.


1. Navigate to your resource group in the Azure portal.
2. Select Delete resource group to delete all the resources in the group, as well as the resource group itself.
3. Type the name of the resource group, myResourceGroup , in the textbox, and then select Delete to delete the
resource group.

IMPORTANT
If you want to keep the resource group but delete the secondary database, remove it from the failover group before
deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
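If you're cleaning up from PowerShell instead of the portal, a single command removes the resource group and everything in it; the name below is a placeholder.

# Minimal sketch: delete the resource group and all resources it contains (placeholder name)
Remove-AzResourceGroup -Name "myResourceGroup" -Force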

Full scripts
PowerShell
Azure CLI
Azure portal

There are no scripts available for the Azure portal.


For additional Azure SQL Database scripts, see: Azure PowerShell and Azure CLI.

Next steps
In this tutorial, you added a database in Azure SQL Database to a failover group, and tested failover. You learned
how to:
Create a database in Azure SQL Database
Create a failover group for the database between two servers.
Test failover.
Advance to the next tutorial on how to add your elastic pool to a failover group.
Tutorial: Add an Azure SQL Database elastic pool to a failover group
Tutorial: Add an Azure SQL Database elastic pool to
a failover group
7/12/2022 • 28 minutes to read

APPLIES TO: Azure SQL Database


Configure an auto-failover group for an Azure SQL Database elastic pool and test failover using the Azure
portal.
In this tutorial, you'll learn how to:
Create a single database.
Add the database to an elastic pool.
Create a failover group for two elastic pools between two servers.
Test failover.

Prerequisites
Azure portal
PowerShell
Azure CLI

To complete this tutorial, make sure you have:


An Azure subscription. Create a free account if you don't already have one.

1 - Create a single database


In this step, you create a resource group, a logical SQL server, a single database that uses the AdventureWorksLT
sample data, and a server-level IP firewall rule for access to the server. You can create the database by using
Azure portal menus and screens, or by using an Azure CLI or PowerShell script in the Azure Cloud Shell.
All the methods include setting up a server-level firewall rule to allow the public IP address of the computer
you're using to access the server. For more information about creating server-level firewall rules, see Create a
server-level firewall. You can also set database-level firewall rules. See Create a database-level firewall rule.
Portal
PowerShell
Azure CLI

To create a resource group, server, and single database in the Azure portal:
1. Sign in to the portal.
2. From the Search bar, search for and select Azure SQL .
3. On the Azure SQL page, select Add .
4. On the Select SQL deployment option page, select the SQL databases tile, with Single database
under Resource type . You can view more information about the different databases by selecting Show
details .
5. Select Create .

6. On the Basics tab of the Create SQL database form, under Project details , select the correct Azure
Subscription if it isn't already selected.
7. Under Resource group , select Create new , enter myResourceGroup, and select OK .
8. Under Database details , for Database name enter mySampleDatabase.
9. For Server, select Create new, and fill out the New server form as follows:
Server name: Enter mysqlserver and add some characters for uniqueness.
Server admin login: Enter AzureAdmin.
Password: Enter a password that meets requirements, and enter it again in the Confirm password field.
Location: Drop down and choose a location, such as (US) West US.
Select OK.
Record the server admin login and password so you can sign in to the server and its databases. If you forget your login or password, you can get the login name or reset the password on the SQL server page after database creation. To open the SQL server page, select the server name on the database Overview page.
10. Under Compute + storage , if you want to reconfigure the defaults, select Configure database .
On the Configure page, you can optionally:
Change the Compute tier from Provisioned to Serverless.
Review and change the settings for vCores and Data max size .
Select Change configuration to change hardware configuration.
After making any changes, select Apply .
11. Select Next: Networking at the bottom of the page.
12. On the Networking tab, under Connectivity method , select Public endpoint .
13. Under Firewall rules , set Add current client IP address to Yes .
14. Select Next: Additional settings at the bottom of the page.
For more information about firewall settings, see Allow Azure services and resources to access this server
and Add a private endpoint.
15. On the Additional settings tab, in the Data source section, for Use existing data , select Sample .
16. Optionally, enable Microsoft Defender for SQL.
17. Optionally, set the maintenance window so planned maintenance is performed at the best time for your
database.
18. Select Review + create at the bottom of the page.
19. After reviewing settings, select Create .

2 - Add the database to an elastic pool


In this step, you'll create an elastic pool and add your database to it.

Azure portal
PowerShell
Azure CLI

Create your elastic pool using the Azure portal.


1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All services, then type "Azure SQL" in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select + Add to open the Select SQL deployment option page. You can view additional information
about the different databases by selecting Show details on the Databases tile.
3. Select Elastic pool from the Resource type drop-down in the SQL Databases tile. Select Create to
create your elastic pool.

4. Configure your elastic pool with the following values:


Name: Provide a unique name for your elastic pool, such as myElasticPool.
Subscription: Select your subscription from the drop-down.
Resource group: Select myResourceGroup from the drop-down, the resource group you created in section 1.
Server: Select the server you created in section 1 from the drop-down.
Compute + storage: Select Configure elastic pool to configure your compute, storage, and add your single database to your elastic pool. On the Pool Settings tab, leave the default of Gen5, with 2 vCores and 32 GB.
5. On the Configure page, select the Databases tab, and then choose to Add database . Choose the
database you created in section 1 and then select Apply to add it to your elastic pool. Select Apply again
to apply your elastic pool settings and close the Configure page.

6. Select Review + create to review your elastic pool settings and then select Create to create your elastic
pool.
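If you're scripting rather than using the portal, the following minimal sketch creates a comparable vCore-based elastic pool with the Az PowerShell module and moves the single database into it. All names are placeholders and the sizing mirrors the defaults mentioned above.

# Minimal sketch: create an elastic pool and move the database into it (placeholder values)
New-AzSqlElasticPool -ResourceGroupName "myResourceGroup" -ServerName "mysqlserver-unique" `
    -ElasticPoolName "myElasticPool" -Edition "GeneralPurpose" -ComputeGeneration "Gen5" -vCore 2

Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "mysqlserver-unique" `
    -DatabaseName "mySampleDatabase" -ElasticPoolName "myElasticPool"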

3 - Create the failover group


In this step, you'll create a failover group between an existing server and a new server in another region. Then
you'll add the elastic pool to the failover group.
Azure portal
PowerShell
Azure CLI

Create your failover group using the Azure portal.


1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select the elastic pool created in the previous section, such as myElasticPool.
3. On the Overview pane, select the name of the server under Server name to open the settings for the server.

4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.
5. On the Failover Group page, enter or select the following values, and then select Create :
Failover group name : Type in a unique failover group name, such as failovergrouptutorial .
Secondary server: Select the option to configure required settings and then choose to Create a new server. Alternatively, you can choose an already-existing server as the secondary server. After entering the following values for your new secondary server, select Select.
Server name: Type in a unique name for the secondary server, such as mysqlsecondary.
Server admin login: Type azureuser
Password: Type a complex password that meets password requirements.
Location: Choose a location from the drop-down, such as East US. This location can't be the same location as your primary server.

NOTE
The server login and firewall settings must match that of your primary server.

6. Select Databases within the group then select the elastic pool you created in section 2. A warning
should appear, prompting you to create an elastic pool on the secondary server. Select the warning, and
then select OK to create the elastic pool on the secondary server.
7. Select Select to apply your elastic pool settings to the failover group, and then select Create to create
your failover group. Adding the elastic pool to the failover group will automatically start the geo-
replication process.
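The scripted equivalent of this step, assuming the failover group already exists, is to enumerate the databases in the pool and add them to the group. This is a minimal sketch with placeholder names, using the same pipeline pattern as the Az.Sql cmdlets used elsewhere in these tutorials.

# Minimal sketch: add every database in the elastic pool to an existing failover group (placeholder values)
Get-AzSqlElasticPoolDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "mysqlserver-unique" -ElasticPoolName "myElasticPool" |
    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "myResourceGroup" `
        -ServerName "mysqlserver-unique" -FailoverGroupName "failovergrouptutorial"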

4 - Test failover
In this step, you'll fail your failover group over to the secondary server, and then fail back using the Azure portal.

Azure portal
PowerShell
Azure CLI

Test failover of your failover group using the Azure portal.


1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to favorite it and add it as an item in the left-hand navigation.
2. Select the elastic pool created in the previous section, such as myElasticPool.
3. Select the name of the server under Server name to open the settings for the server.

4. Select Failover groups under the Settings pane and then choose the failover group you created in
section 2.
5. Review which server is primary, and which server is secondary.
6. Select Failover from the task pane to fail over your failover group containing your elastic pool.
7. Select Yes on the warning that notifies you that TDS sessions will be disconnected.

8. Review which server is primary and which server is secondary. If failover succeeded, the two servers should
have swapped roles.
9. Select Failover again to fail the failover group back to the original settings.

Clean up resources
Clean up resources by deleting the resource group.

Azure portal
PowerShell
Azure CLI

1. Navigate to your resource group in the Azure portal.


2. Select Delete resource group to delete all the resources in the group, as well as the resource group itself.
3. Type the name of the resource group, myResourceGroup , in the textbox, and then select Delete to delete the
resource group.

IMPORTANT
If you want to keep the resource group but delete the secondary database, remove it from the failover group before
deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.

Full script
PowerShell
Azure CLI
Azure portal

There are no scripts available for the Azure portal.

Next steps
In this tutorial, you added an Azure SQL Database elastic pool to a failover group, and tested failover. You
learned how to:
Create a single database.
Add the database to an elastic pool.
Create a failover group for two elastic pools between two servers.
Test failover.
Advance to the next tutorial on how to migrate using DMS.
Tutorial: Migrate SQL Server to a pooled database using DMS
Configure and manage Azure SQL Database
security for geo-restore or failover
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database


This article describes the authentication requirements to configure and control active geo-replication and auto-
failover groups. It also provides the steps required to set up user access to the secondary database. Finally, it
describes how to enable access to the recovered database after using geo-restore. For more information on
recovery options, see Business Continuity Overview.

Disaster recovery with contained users


Unlike traditional users, which must be mapped to logins in the master database, a contained user is managed
completely by the database itself. This has two benefits. In the disaster recovery scenario, the users can continue
to connect to the new primary database or the database recovered using geo-restore without any additional
configuration, because the database manages the users. There are also potential scalability and performance
benefits from this configuration from a login perspective. For more information, see Contained Database Users -
Making Your Database Portable.
The main trade-off is that managing the disaster recovery process at scale is more challenging. When you have
multiple databases that use the same login, maintaining the credentials using contained users in multiple
databases may negate the benefits of contained users. For example, the password rotation policy requires that
changes be made consistently in multiple databases rather than changing the password for the login once in the
master database. For this reason, if you have multiple databases that use the same user name and password,
using contained users is not recommended.
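For comparison, creating a contained user is a single statement run in the user database itself (not in master). The following is a minimal sketch that runs the statement through Invoke-Sqlcmd from the SqlServer PowerShell module; the server, database, credentials, and user name are placeholders, and the same CREATE USER statement can be run from SSMS instead.

# Minimal sketch: create a contained database user directly in the user database (placeholder values)
Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" -Database "yourDatabase" `
    -Username "yourAdminLogin" -Password "yourPassword" `
    -Query "CREATE USER [app_user] WITH PASSWORD = '<strong password>';"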

How to configure logins and users


If you are using logins and users (rather than contained users), you must take extra steps to ensure that the
same logins exist in the master database. The following sections outline the steps involved and additional
considerations.

NOTE
It is also possible to use Azure Active Directory (AAD) logins to manage your databases. For more information, see Azure
SQL logins and users.

Set up user access to a secondary or recovered database


In order for the secondary database to be usable as a read-only secondary database, and to ensure proper
access to the new primary database or the database recovered using geo-restore, the master database of the
target server must have the appropriate security configuration in place before the recovery.
The specific permissions for each step are described later in this topic.
Preparing user access to a geo-replication secondary should be performed as part of configuring geo-replication.
Preparing user access to the geo-restored databases should be performed at any time when the original server
is online (e.g. as part of the DR drill).
NOTE
If you fail over or geo-restore to a server that does not have properly configured logins, access to it will be limited to the
server admin account.

Setting up logins on the target server involves three steps outlined below:
1. Determine logins with access to the primary database
The first step of the process is to determine which logins must be duplicated on the target server. This is
accomplished with a pair of SELECT statements, one in the logical master database on the source server and one
in the primary database itself.
Only the server admin or a member of the LoginManager server role can determine the logins on the source
server with the following SELECT statement.

SELECT [name], [sid]
FROM [sys].[sql_logins]
WHERE [type_desc] = 'SQL_Login'

Only a member of the db_owner database role, the dbo user, or server admin, can determine all of the database
user principals in the primary database.

SELECT [name], [sid]
FROM [sys].[database_principals]
WHERE [type_desc] = 'SQL_USER'

2. Find the SID for the logins identified in step 1


By comparing the output of the queries from the previous section and matching the SIDs, you can map the
server login to database user. Logins that have a database user with a matching SID have user access to that
database as that database user principal.
The following query can be used to see all of the user principals and their SIDs in a database. Only a member of
the db_owner database role or server admin can run this query.

SELECT [name], [sid]
FROM [sys].[database_principals]
WHERE [type_desc] = 'SQL_USER'

NOTE
The INFORMATION_SCHEMA and sys users have NULL SIDs, and the guest SID is 0x00 . The dbo SID may start with
0x01060000000001648000000000048454, if the database creator was the server admin instead of a member of
DbManager .

3. Create the logins on the target server


The last step is to go to the target server, or servers, and generate the logins with the appropriate SIDs. The basic
syntax is as follows.

CREATE LOGIN [<login name>]
WITH PASSWORD = '<login password>',
SID = 0x1234 /*replace 0x1234 with the desired login SID*/
NOTE
If you want to grant user access to the secondary, but not to the primary, you can do that by altering the user login on
the primary server by using the following syntax.

ALTER LOGIN [<login name>] DISABLE

DISABLE doesn’t change the password, so you can always enable it if needed.

Next steps
For more information on managing database access and logins, see SQL Database security: Manage
database access and login security.
For more information on contained database users, see Contained Database Users - Making Your Database
Portable.
To learn about active geo-replication, see Active geo-replication.
To learn about auto-failover groups, see Auto-failover groups.
For information about using geo-restore, see geo-restore
Tutorial: Implement a geo-distributed database
(Azure SQL Database)
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Database


Configure a database in SQL Database and a client application for failover to a remote region, and test a failover
plan. In this tutorial, you learn how to:
Create a failover group
Run a Java application to query a database in SQL Database
Test failover
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.

To complete the tutorial, make sure you've installed the following items:
Azure PowerShell
A single database in Azure SQL Database. To create one, use:
The Azure Portal
The Azure CLI
PowerShell

NOTE
The tutorial uses the AdventureWorksLT sample database.

Java and Maven, see Build an app using SQL Server, highlight Java and select your environment, then
follow the steps.
IMPORTANT
Be sure to set up firewall rules to use the public IP address of the computer on which you're performing the steps in this
tutorial. Database-level firewall rules will replicate automatically to the secondary server.
For information, see Create a database-level firewall rule. To determine the IP address used for the server-level firewall
rule for your computer, see Create a server-level firewall.

Create a failover group


Using Azure PowerShell, create a failover group between an existing server and a new server in another region.
Then add the sample database to the failover group.
PowerShell
The Azure CLI

IMPORTANT
This sample requires Azure PowerShell Az 1.0 or later. Run Get-Module -ListAvailable Az to see which versions are
installed. If you need to install, see Install Azure PowerShell module.
Run Connect-AzAccount to sign in to Azure.

To create a failover group, run the following script:

$admin = "<adminName>"
$password = "<password>"
$resourceGroup = "<resourceGroupName>"
$location = "<resourceGroupLocation>"
$server = "<serverName>"
$database = "<databaseName>"
$drLocation = "<disasterRecoveryLocation>"
$drServer = "<disasterRecoveryServerName>"
$failoverGroup = "<globallyUniqueFailoverGroupName>"

# create a backup server in the failover region
New-AzSqlServer -ResourceGroupName $resourceGroup -ServerName $drServer `
    -Location $drLocation `
    -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential `
        -ArgumentList $admin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))

# create a failover group between the servers
New-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $server `
    -PartnerServerName $drServer -FailoverGroupName $failoverGroup `
    -FailoverPolicy Automatic -GracePeriodWithDataLossHours 2

# add the database to the failover group
Get-AzSqlDatabase -ResourceGroupName $resourceGroup -ServerName $server -DatabaseName $database | `
    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName $resourceGroup -ServerName $server `
    -FailoverGroupName $failoverGroup

Geo-replication settings can also be changed in the Azure portal, by selecting your database, then Settings >
Geo-Replication .
Run the sample project
1. In the console, create a Maven project with the following command:

mvn archetype:generate "-DgroupId=com.sqldbsamples" "-DartifactId=SqlDbSample" "-DarchetypeArtifactId=maven-archetype-quickstart" "-Dversion=1.0.0"

2. Type Y and press Enter .


3. Change directories to the new project.

cd SqlDbSample

4. Using your favorite editor, open the pom.xml file in your project folder.
5. Add the Microsoft JDBC Driver for SQL Server dependency by adding the following dependency section.
The dependency must be pasted within the larger dependencies section.

<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>6.1.0.jre8</version>
</dependency>

6. Specify the Java version by adding the properties section after the dependencies section:
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>

7. Support manifest files by adding the build section after the properties section:

<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.0.0</version>
<configuration>
<archive>
<manifest>
<mainClass>com.sqldbsamples.App</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
</build>

8. Save and close the pom.xml file.


9. Open the App.java file located in ..\SqlDbSample\src\main\java\com\sqldbsamples and replace the
contents with the following code:

package com.sqldbsamples;

import java.sql.Connection;
import java.sql.Statement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.sql.DriverManager;
import java.util.Date;
import java.util.concurrent.TimeUnit;

public class App {

private static final String FAILOVER_GROUP_NAME = "<your failover group name>"; // add failover group name

private static final String DB_NAME = "<your database>"; // add database name
private static final String USER = "<your admin>"; // add database user
private static final String PASSWORD = "<your password>"; // add database password

private static final String READ_WRITE_URL = String.format("jdbc:" +
    "sqlserver://%s.database.windows.net:1433;database=%s;user=%s;password=%s;encrypt=true;" +
    "hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
    FAILOVER_GROUP_NAME, DB_NAME, USER, PASSWORD);
private static final String READ_ONLY_URL = String.format("jdbc:" +
    "sqlserver://%s.secondary.database.windows.net:1433;database=%s;user=%s;password=%s;encrypt=true;" +
    "hostNameInCertificate=*.database.windows.net;loginTimeout=30;",
    FAILOVER_GROUP_NAME, DB_NAME, USER, PASSWORD);

public static void main(String[] args) {


System.out.println("#######################################");
System.out.println("## GEO DISTRIBUTED DATABASE TUTORIAL ##");
System.out.println("#######################################");
System.out.println("");
System.out.println("");

int highWaterMark = getHighWaterMarkId();

try {
for(int i = 1; i < 1000; i++) {
// loop will run for about 1 hour
System.out.print(i + ": insert on primary " +
(insertData((highWaterMark + i)) ? "successful" : "failed"));
TimeUnit.SECONDS.sleep(1);
System.out.print(", read from secondary " +
(selectData((highWaterMark + i)) ? "successful" : "failed") + "\n");
TimeUnit.SECONDS.sleep(3);
}
} catch(Exception e) {
e.printStackTrace();
}
}

private static boolean insertData(int id) {


// Insert data into the product table with a unique product name so we can find the product again
String sql = "INSERT INTO SalesLT.Product " +
"(Name, ProductNumber, Color, StandardCost, ListPrice, SellStartDate) VALUES (?,?,?,?,?,?);";

try (Connection connection = DriverManager.getConnection(READ_WRITE_URL);


PreparedStatement pstmt = connection.prepareStatement(sql)) {
pstmt.setString(1, "BrandNewProduct" + id);
pstmt.setInt(2, 200989 + id + 10000);
pstmt.setString(3, "Blue");
pstmt.setDouble(4, 75.00);
pstmt.setDouble(5, 89.99);
pstmt.setTimestamp(6, new Timestamp(new Date().getTime()));
return (1 == pstmt.executeUpdate());
} catch (Exception e) {
return false;
}
}

private static boolean selectData(int id) {


// Query the data previously inserted into the primary database from the geo replicated database
String sql = "SELECT Name, Color, ListPrice FROM SalesLT.Product WHERE Name = ?";

try (Connection connection = DriverManager.getConnection(READ_ONLY_URL);


PreparedStatement pstmt = connection.prepareStatement(sql)) {
pstmt.setString(1, "BrandNewProduct" + id);
try (ResultSet resultSet = pstmt.executeQuery()) {
return resultSet.next();
}
} catch (Exception e) {
return false;
}
}

private static int getHighWaterMarkId() {


// Query the high water mark id stored in the table to be able to make unique inserts
String sql = "SELECT MAX(ProductId) FROM SalesLT.Product";
int result = 1;
try (Connection connection = DriverManager.getConnection(READ_WRITE_URL);
Statement stmt = connection.createStatement();
ResultSet resultSet = stmt.executeQuery(sql)) {
if (resultSet.next()) {
result = resultSet.getInt(1);
}
} catch (Exception e) {
e.printStackTrace();
}
return result;
}
}
10. Save and close the App.java file.
11. In the command console, run the following command:

mvn package

12. Start the application. It runs for about 1 hour unless stopped manually, giving you time to run the
failover test.

mvn -q -e exec:java "-Dexec.mainClass=com.sqldbsamples.App"

#######################################
## GEO DISTRIBUTED DATABASE TUTORIAL ##
#######################################

1. insert on primary successful, read from secondary successful


2. insert on primary successful, read from secondary successful
3. insert on primary successful, read from secondary successful
...

Test failover
Run the following scripts to simulate a failover and observe the application results. Notice how some inserts
and selects will fail while the database fails over.

PowerShell
The Azure CLI

You can check the role of the disaster recovery server during the test with the following command:

(Get-AzSqlDatabaseFailoverGroup -FailoverGroupName $failoverGroup `
    -ResourceGroupName $resourceGroup -ServerName $drServer).ReplicationRole

To test a failover:
1. Start a manual failover of the failover group:

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup `
    -ServerName $drServer -FailoverGroupName $failoverGroup

2. Revert failover group back to the primary server:

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup `
    -ServerName $server -FailoverGroupName $failoverGroup

Next steps
In this tutorial, you configured a database in Azure SQL Database and an application for failover to a remote
region and tested a failover plan. You learned how to:
Create a geo-replication failover group
Run a Java application to query a database in SQL Database
Test failover
Advance to the next tutorial on how to add an instance of Azure SQL Managed Instance to a failover group:
Add an instance of Azure SQL Managed Instance to a failover group
Tutorial: Configure active geo-replication and
failover (Azure SQL Database)
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


This article shows you how to configure active geo-replication for Azure SQL Database using the Azure portal or
Azure CLI and to initiate failover.
For best practices using auto-failover groups, see Auto-failover groups with Azure SQL Database and Auto-
failover groups with Azure SQL Managed Instance.

Prerequisites
Portal
Azure CLI

To configure active geo-replication by using the Azure portal, you need the following resource:
A database in Azure SQL Database: The primary database that you want to replicate to a different
geographical region.

NOTE
When using Azure portal, you can only create a secondary database within the same subscription as the primary. If a
secondary database is required to be in a different subscription, use Create Database REST API or ALTER DATABASE
Transact-SQL API.

Add a secondary database


The following steps create a new secondary database in a geo-replication partnership.
To add a secondary database, you must be the subscription owner or co-owner.
The secondary database has the same name as the primary database and has, by default, the same service tier
and compute size. The secondary database can be a single database or a pooled database. For more
information, see DTU-based purchasing model and vCore-based purchasing model. After the secondary is
created and seeded, data begins replicating from the primary database to the new secondary database.

NOTE
If the partner database already exists (for example, as a result of terminating a previous geo-replication relationship),
the command fails.

Portal
Azure CLI

1. In the Azure portal, browse to the database that you want to set up for geo-replication.
2. On the SQL Database page, select your database, scroll to Data management , select Replicas , and then
select Create replica .

3. Select or create the server for the secondary database, and configure the Compute + storage options if
necessary. You can select any region for your secondary server, but we recommend the paired region.

Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a
pool, select Yes next to Want to use SQL elastic pool? and select a pool on the target server. A pool
must already exist on the target server. This workflow doesn't create a pool.
4. Click Review + create , review the information, and then click Create .
5. The secondary database is created and the deployment process begins.
6. When the deployment is complete, the secondary database displays its status.

7. Return to the primary database page, and then select Replicas. Your secondary database is listed under
Geo replicas. A PowerShell alternative for creating the geo-secondary is sketched below.
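If you prefer to script this step, the following is a minimal PowerShell sketch that creates a geo-secondary for an existing database; the resource group, server, and database names are placeholders:

# Sketch: create a geo-secondary for an existing database (placeholder names).
$database = Get-AzSqlDatabase -ResourceGroupName "<resourceGroupName>" -ServerName "<primaryServerName>" -DatabaseName "<databaseName>"

$database | New-AzSqlDatabaseSecondary -PartnerResourceGroupName "<partnerResourceGroupName>" `
    -PartnerServerName "<secondaryServerName>" -AllowConnections "All"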

Initiate a failover
The secondary database can be switched to become the primary.

Portal
Azure CLI

1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Scroll to Data management , and then select Replicas .
3. In the Geo replicas list, select the database you want to become the new primary, select the ellipsis, and
then select Forced failover .

4. Select Yes to begin the failover.


The command immediately switches the secondary database into the primary role. This process should normally
complete in 30 seconds or less.
There's a short period during which both databases are unavailable, on the order of 0 to 25 seconds, while the
roles are switched. If the primary database has multiple secondary databases, the command automatically
reconfigures the other secondaries to connect to the new primary. The entire operation should take less than a
minute to complete under normal circumstances.
NOTE
This command is designed for quick recovery of the database in case of an outage. It triggers a failover without data
synchronization (a forced failover). If the primary is online and committing transactions when the command is issued,
some data loss may occur.
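The same forced failover can also be initiated with PowerShell. The following is a minimal sketch run against the current secondary; all names are placeholders, and -AllowDataLoss makes it a forced failover like the portal option above:

# Sketch: force a failover to the geo-secondary (placeholder names).
Set-AzSqlDatabaseSecondary -ResourceGroupName "<secondaryResourceGroupName>" -ServerName "<secondaryServerName>" `
    -DatabaseName "<databaseName>" -PartnerResourceGroupName "<primaryResourceGroupName>" -Failover -AllowDataLoss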

Remove secondary database


This operation permanently stops the replication to the secondary database, and changes the role of the
secondary to a regular read-write database. If the connectivity to the secondary database is broken, the
command succeeds but the secondary doesn't become read-write until after connectivity is restored.

Portal
Azure CLI

1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Select Replicas .
3. In the Geo replicas list, select the database you want to remove from the geo-replication partnership,
select the ellipsis, and then select Stop replication .

4. A confirmation window opens. Click Yes to remove the database from the geo-replication partnership and make
it a standalone read-write database. A PowerShell alternative is sketched below.
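As a scripted alternative, the following minimal PowerShell sketch stops replication; run it against the primary database, and treat all names as placeholders:

# Sketch: stop geo-replication to the secondary (placeholder names).
Remove-AzSqlDatabaseSecondary -ResourceGroupName "<resourceGroupName>" -ServerName "<primaryServerName>" `
    -DatabaseName "<databaseName>" -PartnerResourceGroupName "<secondaryResourceGroupName>" `
    -PartnerServerName "<secondaryServerName>"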

Next steps
To learn more about active geo-replication, see active geo-replication.
To learn about auto-failover groups, see Auto-failover groups
For a business continuity overview and scenarios, see Business continuity overview.
Tutorial: Getting started with Always Encrypted with
secure enclaves in Azure SQL Database
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Database


This tutorial teaches you how to get started with Always Encrypted with secure enclaves in Azure SQL Database.
It will show you:
How to create an environment for testing and evaluating Always Encrypted with secure enclaves.
How to encrypt data in-place and issue rich confidential queries against encrypted columns using SQL
Server Management Studio (SSMS).

Prerequisites
An active Azure subscription. If you don't have one, create a free account. You need to be a member of the
Contributor role or the Owner role for the subscription to be able to create resources and configure an
attestation policy.
SQL Server Management Studio (SSMS), version 18.9.1 or later. See Download SQL Server Management
Studio (SSMS) for information on how to download SSMS.
PowerShell requirements

NOTE
The prerequisites listed in this section apply only if you choose to use PowerShell for some of the steps in this tutorial. If
you plan to use Azure portal instead, you can skip this section.

Make sure the following PowerShell modules are installed on your machine.
1. Az version 6.5.0 or later. For details on how to install the Az PowerShell module, see Install the Azure Az
PowerShell module. To determine the version of the Az module installed on your machine, run the following
command from a PowerShell session.

Get-InstalledModule -Name Az

The PowerShell Gallery has deprecated Transport Layer Security (TLS) versions 1.0 and 1.1. TLS 1.2 or a later
version is recommended. You may receive the following errors if you are using a TLS version lower than 1.2:
WARNING: Unable to resolve package source 'https://www.powershellgallery.com/api/v2'
PackageManagement\Install-Package: No match was found for the specified search criteria and module name.

To continue to interact with the PowerShell Gallery, run the following command before running the Install-Module
commands:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

Step 1: Create and configure a server and a DC-series database


In this step, you will create a new Azure SQL Database logical server and a new database using DC-series
hardware, which is required for Always Encrypted with secure enclaves. For more information, see DC-series.
Portal
PowerShell

1. Browse to the Select SQL deployment option page.


2. If you are not already signed in to Azure portal, sign in when prompted.
3. Under SQL databases , leave Resource type set to Single database , and select Create .

4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name enter ContosoHR .
7. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter mysqlserver, and add some characters for uniqueness. We can't provide an exact
server name to use because server names must be globally unique for all servers in Azure, not just
unique within a subscription. So enter something like mysqlserver135, and the portal lets you know if
it is available or not.
Server admin login: Enter an admin login name, for example: azureuser.
Password : Enter a password that meets requirements, and enter it again in the Confirm password
field.
Location : Select a location from the dropdown list.

IMPORTANT
You need to select a location (an Azure region) that supports both the DC-series hardware and Microsoft
Azure Attestation. For the list of regions supporting DC-series, see DC-series availability. Here is the
regional availability of Microsoft Azure Attestation.

Select OK .
8. Leave Want to use SQL elastic pool set to No .
9. Under Compute + storage , select Configure database , and click Change configuration .
10. Select the DC-series hardware configuration, and then select OK .

11. Select Apply .


12. Back on the Basics tab, verify Compute + storage is set to General Purpose , DC, 2 vCores, 32 GB
storage .
13. Select Next: Networking at the bottom of the page.
14. On the Networking tab, for Connectivity method , select Public endpoint .
15. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
16. Select Review + create at the bottom of the page.
17. On the Review + create page, after reviewing, select Create .
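If you prefer to script this step, the following is a minimal PowerShell sketch that creates the server and a General Purpose DC-series database. The names, credentials, and region are placeholders, and the parameter values are assumptions based on the DC-series requirement above; choose a region that supports both DC-series and Microsoft Azure Attestation.

# Sketch: create the logical server and a DC-series database (placeholder values).
$cred = Get-Credential   # server admin login and password

New-AzSqlServer -ResourceGroupName "<resourceGroupName>" -ServerName "<serverName>" `
    -Location "<region>" -SqlAdministratorCredentials $cred

New-AzSqlDatabase -ResourceGroupName "<resourceGroupName>" -ServerName "<serverName>" `
    -DatabaseName "ContosoHR" -Edition "GeneralPurpose" -VCore 2 -ComputeGeneration "DC"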

Step 2: Configure an attestation provider


In this step, you'll create and configure an attestation provider in Microsoft Azure Attestation. This is needed to
attest the secure enclave your database uses.

Portal
PowerShell

1. Browse to the Create attestation provider page.


2. On the Create attestation provider page, provide the following inputs:
Subscription : Choose the same subscription you created the Azure SQL logical server in.
Resource Group : Choose the same resource group you created the Azure SQL logical server in.
Name : Enter myattestprovider, and add some characters for uniqueness. We can't provide an exact
attestation provider name to use because names must be globally unique. So enter something like
myattestprovider12345, and the portal lets you know if it is available or not.
Location: Choose the location in which you created the Azure SQL logical server.
Policy signer certificates file: Leave this field empty, as you will configure an unsigned policy.
3. After you provide the required inputs, select Review + create .
4. Select Create .
5. Once the attestation provider is created, click Go to resource .
6. On the Overview tab for the attestation provider, copy the value of the Attest URI property to the clipboard
and save it in a file. This is the attestation URL, which you will need in later steps.

7. Select Policy on the resource menu on the left side of the window or on the lower pane.
8. Set Attestation Type to SGX-IntelSDK .
9. Select Configure on the upper menu.

10. Set Policy Format to Text . Leave Policy options set to Enter policy .
11. In the Policy text field, replace the default policy with the below policy. For information about the below
policy, see Create and configure an attestation provider.

version= 1.0;
authorizationrules
{
[ type=="x-ms-sgx-is-debuggable", value==false ]
&& [ type=="x-ms-sgx-product-id", value==4639 ]
&& [ type=="x-ms-sgx-svn", value>= 0 ]
&& [ type=="x-ms-sgx-mrsigner",
value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
=> permit();
};

12. Click Save .


13. Click Refresh on the upper menu to view the configured policy.

Step 3: Populate your database


In this step, you'll create a table and populate it with some data that you'll later encrypt and query.
1. Open SSMS and connect to the ContosoHR database in the Azure SQL logical server you created, without
Always Encrypted enabled in the database connection.
a. In the Connect to Server dialog, specify the fully qualified name of your server (for example,
myserver135.database.windows.net), and enter the administrator user name and the password
you specified when you created the server.
b. Click Options >> and select the Connection Properties tab. Make sure to select the
ContosoHR database (not the default, master database).
c. Select the Always Encrypted tab.
d. Make sure the Enable Always Encrypted (column encryption) checkbox is not selected.

e. Click Connect .
2. Create a new table, named Employees .
CREATE SCHEMA [HR];
GO

CREATE TABLE [HR].[Employees]


(
[EmployeeID] [int] IDENTITY(1,1) NOT NULL,
[SSN] [char](11) NOT NULL,
[FirstName] [nvarchar](50) NOT NULL,
[LastName] [nvarchar](50) NOT NULL,
[Salary] [money] NOT NULL
) ON [PRIMARY];
GO

3. Add a few employee records to the Employees table.

INSERT INTO [HR].[Employees]


([SSN]
,[FirstName]
,[LastName]
,[Salary])
VALUES
('795-73-9838'
, N'Catherine'
, N'Abel'
, $31692);

INSERT INTO [HR].[Employees]


([SSN]
,[FirstName]
,[LastName]
,[Salary])
VALUES
('990-00-6818'
, N'Kim'
, N'Abercrombie'
, $55415);

Step 4: Provision enclave-enabled keys


In this step, you'll create a column master key and a column encryption key that allow enclave computations.
1. Using the SSMS instance from the previous step, in Object Explorer, expand your database and
navigate to Security > Always Encrypted Keys.
2. Provision a new enclave-enabled column master key:
a. Right-click Always Encrypted Keys and select New Column Master Key....
b. Select your column master key name: CMK1 .
c. Make sure you select either Windows Certificate Store (Current User or Local Machine) or
Azure Key Vault.
d. Select Allow enclave computations .
e. If you selected Azure Key Vault, sign into Azure and select your key vault. For more information on
how to create a key vault for Always Encrypted, see Manage your key vaults from Azure portal.
f. Select your certificate or Azure Key Vault key if it already exists, or click the Generate Certificate
button to create a new one.
g. Select OK .
3. Create a new enclave-enabled column encryption key:
a. Right-click Always Encrypted Keys and select New Column Encryption Key.
b. Enter a name for the new column encryption key: CEK1 .
c. In the Column master key dropdown, select the column master key you created in the previous
steps.
d. Select OK .

Step 5: Encrypt some columns in place


In this step, you'll encrypt the data stored in the SSN and Salary columns inside the server-side enclave, and
then test a SELECT query on the data.
1. Open a new SSMS instance and connect to your database with Always Encrypted enabled for the
database connection.
a. Start a new instance of SSMS.
b. In the Connect to Server dialog, specify the fully qualified name of your server (for example,
myserver135.database.windows.net), and enter the administrator user name and the password
you specified when you created the server.
c. Click Options >> and select the Connection Properties tab. Make sure to select the
ContosoHR database (not the default, master database).
d. Select the Always Encrypted tab.
e. Make sure the Enable Always Encrypted (column encryption) checkbox is selected.
f. Specify the enclave attestation URL that you obtained by following the steps in Step 2:
Configure an attestation provider.
g. Select Connect .
h. If you're prompted to enable Parameterization for Always Encrypted queries, select Enable .
2. Using the same SSMS instance (with Always Encrypted enabled), open a new query window and encrypt
the SSN and Salary columns by running the below statements.

ALTER TABLE [HR].[Employees]


ALTER COLUMN [SSN] [char] (11) COLLATE Latin1_General_BIN2
ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM =
'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
WITH
(ONLINE = ON);

ALTER TABLE [HR].[Employees]


ALTER COLUMN [Salary] [money]
ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM =
'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
WITH
(ONLINE = ON);

ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;


NOTE
Notice the ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE statement in the above script, which clears the
query plan cache for the database. After you have altered the table, you need to clear the plans for all
batches and stored procedures that access the table to refresh parameter encryption information.

3. To verify the SSN and Salary columns are now encrypted, open a new query window in the SSMS
instance without Always Encrypted enabled for the database connection and execute the below
statement. The query window should return encrypted values in the SSN and Salary columns. If you
execute the same query using the SSMS instance with Always Encrypted enabled, you should see the
data decrypted.

SELECT * FROM [HR].[Employees];

Step 6: Run rich queries against encrypted columns


You can run rich queries against the encrypted columns. Some query processing will be performed inside your
server-side enclave.
1. In the SSMS instance with Always Encrypted enabled, make sure Parameterization for Always Encrypted
is also enabled.
a. Select Tools from the main menu of SSMS.
b. Select Options....
c. Navigate to Query Execution > SQL Server > Advanced.
d. Ensure that Enable Parameterization for Always Encrypted is checked.
e. Select OK .
2. Open a new query window, paste in the below query, and execute. The query should return plaintext
values and rows meeting the specified search criteria.

DECLARE @SSNPattern [char](11) = '%6818';


DECLARE @MinSalary [money] = $1000;
SELECT * FROM [HR].[Employees]
WHERE SSN LIKE @SSNPattern AND [Salary] >= @MinSalary;

3. Try the same query again in the SSMS instance that doesn't have Always Encrypted enabled. A failure
should occur.

Next steps
After completing this tutorial, you can go to one of the following tutorials:
Tutorial: Develop a .NET application using Always Encrypted with secure enclaves
Tutorial: Develop a .NET Framework application using Always Encrypted with secure enclaves
Tutorial: Creating and using indexes on enclave-enabled columns using randomized encryption

See also
Configure and use Always Encrypted with secure enclaves
Tutorial: Secure a database in Azure SQL Database
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Database


In this tutorial you learn how to:
Create server-level and database-level firewall rules
Configure an Azure Active Directory (Azure AD) administrator
Manage user access with SQL authentication, Azure AD authentication, and secure connection strings
Enable security features, such as Microsoft Defender for SQL, auditing, data masking, and encryption
Azure SQL Database secures data by allowing you to:
Limit access using firewall rules
Use authentication mechanisms that require identity
Use authorization with role-based memberships and permissions
Enable security features

NOTE
Azure SQL Managed Instance is secured using network security rules and private endpoints as described in Azure SQL
Managed Instance and connectivity architecture.

To learn more, see the Azure SQL Database security overview and capabilities articles.

TIP
The following Microsoft Learn module helps you learn for free about how to Secure your database in Azure SQL Database.

Prerequisites
To complete the tutorial, make sure you have the following prerequisites:
SQL Server Management Studio
A server and a single database
Create them with the Azure portal, CLI, or PowerShell
If you don't have an Azure subscription, create a free account before you begin.

Sign in to the Azure portal


For all steps in the tutorial, sign in to the Azure portal

Create firewall rules


Databases in SQL Database are protected by firewalls in Azure. By default, all connections to the server and
database are rejected. To learn more, see server-level and database-level firewall rules.
Set Allow access to Azure services to OFF for the most secure configuration. Then, create a reserved IP
(classic deployment) for the resource that needs to connect, such as an Azure VM or cloud service, and only
allow that IP address access through the firewall. If you're using the Resource Manager deployment model, a
dedicated public IP address is required for each resource.

NOTE
SQL Database communicates over port 1433. If you're trying to connect from within a corporate network, outbound
traffic over port 1433 may not be allowed by your network's firewall. If so, you can't connect to the server unless your
administrator opens port 1433.

Set up server-level firewall rules


Server-level IP firewall rules apply to all databases within the same server.
To set up a server-level firewall rule:
1. In the Azure portal, select SQL databases from the left-hand menu, and select your database on the
SQL databases page.

NOTE
Be sure to copy your fully qualified server name (such as yourserver.database.windows.net) for use later in the
tutorial.

2. On the Overview page, select Set server firewall. The Firewall settings page for the server opens.
a. Select Add client IP on the toolbar to add your current IP address to a new firewall rule. The rule
can open port 1433 for a single IP address or a range of IP addresses. Select Save .

b. Select OK and close the Firewall settings page.


You can now connect to any database in the server with the specified IP address or IP address range.
Set up database firewall rules
Database-level firewall rules only apply to individual databases. The database will retain these rules during a
server failover. Database-level firewall rules can only be configured using Transact-SQL (T-SQL) statements, and
only after you've configured a server-level firewall rule.
To set up a database-level firewall rule:
1. Connect to the database, for example using SQL Server Management Studio.
2. In Object Explorer, right-click the database and select New Query.
3. In the query window, add this statement and modify the IP address to your public IP address:

EXECUTE sp_set_database_firewall_rule N'Example DB Rule','0.0.0.4','0.0.0.4';

4. On the toolbar, select Execute to create the firewall rule.

NOTE
You can also create a server-level firewall rule in SSMS by using the sp_set_firewall_rule command, though you must be
connected to the master database.

Create an Azure AD admin


Make sure you're using the appropriate Azure Active Directory (AD) managed domain. To select the AD domain,
use the upper-right corner of the Azure portal. This process confirms the same subscription is used for both
Azure AD and the logical SQL server hosting your database or data warehouse.

To set the Azure AD administrator:


1. In the Azure portal, on the SQL server page, select Active Directory admin. Next select Set admin.
IMPORTANT
You need to be a "Global Administrator" to perform this task.

2. On the Add admin page, search and select the AD user or group and choose Select . All members and
groups of your Active Directory are listed, and entries grayed out are not supported as Azure AD
administrators. See Azure AD features and limitations.

IMPORTANT
Azure role-based access control (Azure RBAC) only applies to the portal and isn't propagated to SQL Server.

3. At the top of the Active Directory admin page, select Save.

The process of changing an administrator may take several minutes. The new administrator will appear in
the Active Directory admin box. A PowerShell alternative for setting the admin is sketched below.
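The following minimal PowerShell sketch sets an Azure AD admin for the server; the resource group, server name, and display name are placeholders:

# Sketch: set an Azure AD user or group as the Azure AD admin for the server (placeholder values).
Set-AzSqlServerActiveDirectoryAdministrator -ResourceGroupName "<resourceGroupName>" `
    -ServerName "<serverName>" -DisplayName "<AzureADUserOrGroupDisplayName>"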
NOTE
When setting an Azure AD admin, the new admin name (user or group) cannot exist as a SQL Server login or user in the
master database. If present, the setup will fail and roll back changes, indicating that such an admin name already exists.
Since the SQL Server login or user is not part of Azure AD, any effort to connect the user using Azure AD authentication
fails.

For information about configuring Azure AD, see:


Integrate your on-premises identities with Azure AD
Add your own domain name to Azure AD
Microsoft Azure now supports federation with Windows Server AD
Administer your Azure AD directory
Manage Azure AD using PowerShell
Hybrid identity required ports and protocols

Manage database access


Manage database access by adding users to the database, or allowing user access with secure connection
strings. Connection strings are useful for external applications. To learn more, see Manage logins and user
accounts and AD authentication.
To add users, choose the database authentication type:
SQL authentication: uses a username and password for logins that are only valid in the context of a
specific database within the server
Azure AD authentication: uses identities managed by Azure AD
SQL authentication
To add a user with SQL authentication:
1. Connect to the database, for example using SQL Server Management Studio.
2. In Object Explorer, right-click the database and choose New Query.
3. In the query window, enter the following command:

CREATE USER ApplicationUser WITH PASSWORD = 'YourStrongPassword1';

4. On the toolbar, select Execute to create the user.


5. By default, the user can connect to the database, but has no permissions to read or write data. To grant
these permissions, execute the following commands in a new query window:

ALTER ROLE db_datareader ADD MEMBER ApplicationUser;


ALTER ROLE db_datawriter ADD MEMBER ApplicationUser;

NOTE
Create non-administrator accounts at the database level, unless they need to execute administrator tasks like creating
new users.

Azure AD authentication
Azure Active Directory authentication requires that database users are created as contained. A contained
database user maps to an identity in the Azure AD directory associated with the database and has no login in
the master database. The Azure AD identity can either be for an individual user or a group. For more
information, see Contained database users, make your database portable and review the Azure AD tutorial on
how to authenticate using Azure AD.

NOTE
Database users (excluding administrators) cannot be created using the Azure portal. Azure roles do not propagate to SQL
servers, databases, or data warehouses. They are only used to manage Azure resources and do not apply to database
permissions.
For example, the SQL Server Contributor role does not grant access to connect to a database or data warehouse. This
permission must be granted within the database using T-SQL statements.

IMPORTANT
Special characters like colon : or ampersand & are not supported in user names in the T-SQL CREATE LOGIN and
CREATE USER statements.

To add a user with Azure AD authentication:


1. Connect to your server in Azure using an Azure AD account with at least the ALTER ANY USER
permission.
2. In Object Explorer, right-click the database and select New Query.
3. In the query window, enter the following command and modify <Azure_AD_principal_name> to the
principal name of the Azure AD user or the display name of the Azure AD group:

CREATE USER [<Azure_AD_principal_name>] FROM EXTERNAL PROVIDER;

NOTE
Azure AD users are marked in the database metadata with type E (EXTERNAL_USER) and type X (EXTERNAL_GROUPS)
for groups. For more information, see sys.database_principals.

Secure connection strings


To ensure a secure, encrypted connection between the client application and SQL Database, a connection string
must be configured to:
Request an encrypted connection
Not trust the server certificate
The connection is established using Transport Layer Security (TLS) and reduces the risk of a man-in-the-middle
attack. Connection strings are available per database and are pre-configured to support client drivers such as
ADO.NET, JDBC, ODBC, and PHP. For information about TLS and connectivity, see TLS considerations.
To copy a secure connection string:
1. In the Azure portal, select SQL databases from the left-hand menu, and select your database on the
SQL databases page.
2. On the Overview page, select Show database connection strings.
3. Select a driver tab and copy the complete connection string.
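For illustration, an ADO.NET connection string copied from the portal generally has the following shape; the server, database, and credential values are placeholders, and Encrypt=True together with TrustServerCertificate=False is what requests encryption and certificate validation:

# Sketch: shape of a secure ADO.NET connection string (placeholder values).
$connectionString = "Server=tcp:<yourserver>.database.windows.net,1433;Initial Catalog=<yourdatabase>;" +
    "User ID=<username>;Password=<password>;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"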

Enable security features


Azure SQL Database provides security features that are accessed using the Azure portal. These features are
available for both the database and server, except for data masking, which is only available on the database. To
learn more, see Microsoft Defender for SQL, Auditing, Dynamic data masking, and Transparent data encryption.
Microsoft Defender for SQL
The Microsoft Defender for SQL feature detects potential threats as they occur and provides security alerts on
anomalous activities. Users can explore these suspicious events using the auditing feature, and determine if the
event was to access, breach, or exploit data in the database. Users are also provided a security overview that
includes a vulnerability assessment and the data discovery and classification tool.

NOTE
An example threat is SQL injection, a process where attackers inject malicious SQL into application inputs. An application
can then unknowingly execute the malicious SQL and allow attackers access to breach or modify data in the database.

To enable Microsoft Defender for SQL:


1. In the Azure portal, select SQL databases from the left-hand menu, and select your database on the
SQL databases page.
2. On the Overview page, select the Server name link. The server page will open.
3. On the SQL server page, find the Security section and select Defender for Cloud.
a. Select ON under Microsoft Defender for SQL to enable the feature. Choose a storage account
for saving vulnerability assessment results. Then select Save .
You can also configure emails to receive security alerts, storage details, and threat detection types.
4. Return to the SQL databases page of your database and select Defender for Cloud under the
Security section. Here you'll find various security indicators available for the database.

If anomalous activities are detected, you receive an email with information on the event. This includes the nature
of the activity, database, server, event time, possible causes, and recommended actions to investigate and
mitigate the potential threat. If such an email is received, select the Azure SQL Auditing Log link to launch the
Azure portal and show relevant auditing records for the time of the event.
Auditing
The auditing feature tracks database events and writes events to an audit log in either Azure storage, Azure
Monitor logs, or to an event hub. Auditing helps maintain regulatory compliance, understand database activity,
and gain insight into discrepancies and anomalies that could indicate potential security violations.
To enable auditing:
1. In the Azure portal, select SQL databases from the left-hand menu, and select your database on the
SQL databases page.
2. In the Security section, select Auditing .
3. Under Auditing settings, set the following values:
a. Set Auditing to ON .
b. Select Audit log destination as any of the following:
Storage , an Azure storage account where event logs are saved and can be downloaded as
.xel files

TIP
Use the same storage account for all audited databases to get the most from auditing report
templates.

Log Analytics , which automatically stores events for query or further analysis

NOTE
A Log Analytics workspace is required to support advanced features such as analytics, custom
alert rules, and Excel or Power BI exports. Without a workspace, only the query editor is available.
Event Hub , which allows events to be routed for use in other applications
c. Select Save .

4. Now you can select View audit logs to view database events data.

IMPORTANT
See SQL Database auditing on how to further customize audit events using PowerShell or REST API.
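As one scripted example, the following minimal PowerShell sketch enables auditing to a storage account; the resource names and the storage account resource ID are placeholders, and the cmdlet and parameters are assumptions based on the Az.Sql auditing cmdlets, so verify them against the auditing documentation:

# Sketch: enable blob storage auditing for a database (placeholder values).
Set-AzSqlDatabaseAudit -ResourceGroupName "<resourceGroupName>" -ServerName "<serverName>" `
    -DatabaseName "<databaseName>" -BlobStorageTargetState Enabled `
    -StorageAccountResourceId "<storageAccountResourceId>"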

Dynamic data masking


The data masking feature will automatically hide sensitive data in your database.
To enable data masking:
1. In the Azure portal, select SQL databases from the left-hand menu, and select your database on the
SQL databases page.
2. In the Security section, select Dynamic Data Masking .
3. Under Dynamic data masking settings, select Add mask to add a masking rule. Azure will
automatically populate available database schemas, tables, and columns to choose from.

4. Select Save . The selected information is now masked for privacy.

Transparent data encryption


The encryption feature automatically encrypts your data at rest, and requires no changes to applications
accessing the encrypted database. For new databases, encryption is on by default. You can also encrypt data
using SSMS and the Always encrypted feature.
To enable or verify encryption:
1. In the Azure portal, select SQL databases from the left-hand menu, and select your database on the
SQL databases page.
2. In the Security section, select Transparent data encryption.
3. If necessary, set Data encryption to ON. Select Save.
NOTE
To view encryption status, connect to the database using SSMS and query the encryption_state column of the
sys.dm_database_encryption_keys view. A state of 3 indicates the database is encrypted.
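You can also check the encryption state from PowerShell; this is a minimal sketch with placeholder names:

# Sketch: get the transparent data encryption state for a database (placeholder values).
Get-AzSqlDatabaseTransparentDataEncryption -ResourceGroupName "<resourceGroupName>" `
    -ServerName "<serverName>" -DatabaseName "<databaseName>"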

NOTE
Some items considered customer content, such as table names, object names, and index names, may be transmitted in
log files for support and troubleshooting by Microsoft.

Next steps
In this tutorial, you've learned to improve the security of your database with just a few simple steps. You learned
how to:
Create server-level and database-level firewall rules
Configure an Azure Active Directory (AD) administrator
Manage user access with SQL authentication, Azure AD authentication, and secure connection strings
Enable security features, such as Microsoft Defender for SQL, auditing, data masking, and encryption
Advance to the next tutorial to learn how to implement geo-distribution.
Implement a geo-distributed database
Tutorial: Create Azure AD users using Azure AD
applications
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database


This article takes you through the process of creating Azure AD users in Azure SQL Database, using Azure
service principals (Azure AD applications). This functionality already exists in Azure SQL Managed Instance, but
is now being introduced in Azure SQL Database. To support this scenario, an Azure AD Identity must be
generated and assigned to the Azure SQL logical server.
For more information on Azure AD authentication for Azure SQL, see the article Use Azure Active Directory
authentication.
In this tutorial, you learn how to:
Assign an identity to the Azure SQL logical server
Assign Directory Readers permission to the SQL logical server identity
Create a service principal (an Azure AD application) in Azure AD
Create a service principal user in Azure SQL Database
Create a different Azure AD user in SQL Database using an Azure AD service principal user

Prerequisites
An existing Azure SQL Database deployment. We assume you have a working SQL Database for this tutorial.
Access to an already existing Azure Active Directory.
Az.Sql 2.9.0 module or higher is needed when using PowerShell to set up an individual Azure AD application
as Azure AD admin for Azure SQL. Ensure you are upgraded to the latest module.

Assign an identity to the Azure SQL logical server


1. Connect to your Azure Active Directory. You will need to find your Tenant ID. To find it, go to the Azure
portal and open your Azure Active Directory resource. In the Overview pane, you
should see your Tenant ID. Run the following PowerShell command:
Replace <TenantId> with your Tenant ID .

Connect-AzAccount -Tenant <TenantId>

Record the TenantId for future use in this tutorial.


2. Generate and assign an Azure AD Identity to the Azure SQL logical server. Execute the following
PowerShell command:
Replace <resource group> and <server name> with your resources. If your server name is
myserver.database.windows.net, replace <server name> with myserver.

Set-AzSqlServer -ResourceGroupName <resource group> -ServerName <server name> -AssignIdentity

For more information, see the Set-AzSqlServer command.


IMPORTANT
If an Azure AD Identity is set up for the Azure SQL logical server, the Directory Readers permission must be
granted to the identity. We will walk through this step in the following section. Do not skip this step, or Azure AD
authentication will stop working.
With Microsoft Graph support for Azure SQL, the Directory Readers role can be replaced with using lower level
permissions. For more information, see User-assigned managed identity in Azure AD for Azure SQL.
If a system-assigned or user-assigned managed identity is used as the server or instance identity, deleting the
identity will result in the server or instance being unable to access Microsoft Graph. Azure AD authentication and other
functions will fail. To restore Azure AD functionality, a new SMI or UMI must be assigned to the server with
appropriate permissions.

If you used the New-AzSqlServer command with the parameter AssignIdentity for a new SQL server
creation in the past, you'll need to execute the Set-AzSqlServer command afterwards as a separate
command to enable this property in the Azure fabric.
3. Check that the server identity was successfully assigned. Execute the following PowerShell command:
Replace <resource group> and <server name> with your resources. If your server name is
myserver.database.windows.net, replace <server name> with myserver.

$xyz = Get-AzSqlServer -ResourceGroupName <resource group> -ServerName <server name>


$xyz.identity

Your output should show you PrincipalId , Type , and TenantId . The identity assigned is the
PrincipalId .

4. You can also check the identity by going to the Azure portal.
Under the Azure Active Directory resource, go to Enterprise applications. Type in the name of
your SQL logical server. You will see that it has an Object ID attached to the resource.

Assign Directory Readers permission to the SQL logical server identity


To allow the Azure AD assigned identity to work properly for Azure SQL, the Azure AD Directory Readers
permission must be granted to the server identity.
To grant this required permission, run the following script.

NOTE
This script must be executed by an Azure AD Global Administrator or a Privileged Roles Administrator .
You can assign the Directory Readers role to a group in Azure AD. The group owners can then add the managed
identity as a member of this group, which would bypass the need for a Global Administrator or
Privileged Roles Administrator to grant the Directory Readers role. For more information on this feature, see
Directory Readers role in Azure Active Directory for Azure SQL.
Replace <TenantId> with your TenantId gathered earlier.
Replace <server name> with your SQL logical server name. If your server name is
myserver.database.windows.net , replace <server name> with myserver .

# This script grants Azure "Directory Readers" permission to a Service Principal representing the Azure SQL
logical server
# It can be executed only by a "Global Administrator" or "Privileged Roles Administrator" type of user.
# To check if the "Directory Readers" permission was granted, execute this script again

Import-Module AzureAD
Connect-AzureAD -TenantId "<TenantId>" #Enter your actual TenantId
$AssignIdentityName = "<server name>" #Enter Azure SQL logical server name

# Get the Azure AD role "Directory Readers" and create it if it doesn't exist


$roleName = "Directory Readers"
$role = Get-AzureADDirectoryRole | Where-Object {$_.displayName -eq $roleName}
if ($role -eq $null) {
# Instantiate an instance of the role template
$roleTemplate = Get-AzureADDirectoryRoleTemplate | Where-Object {$_.displayName -eq $roleName}
Enable-AzureADDirectoryRole -RoleTemplateId $roleTemplate.ObjectId
$role = Get-AzureADDirectoryRole | Where-Object {$_.displayName -eq $roleName}
}

# Get service principal for server


$roleMember = Get-AzureADServicePrincipal -SearchString $AssignIdentityName
$roleMember.Count
if ($roleMember -eq $null) {
Write-Output "Error: No Service Principals with name '$($AssignIdentityName)', make sure that
AssignIdentityName parameter was entered correctly."
exit
}

if (-not ($roleMember.Count -eq 1)) {


Write-Output "Error: More than one service principal with name pattern '$($AssignIdentityName)'"
Write-Output "Dumping selected service principals...."
$roleMember
exit
}

# Check if service principal is already member of readers role


$allDirReaders = Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId
$selDirReader = $allDirReaders | where{$_.ObjectId -match $roleMember.ObjectId}

if ($selDirReader -eq $null) {


# Add principal to readers role
Write-Output "Adding service principal '$($AssignIdentityName)' to 'Directory Readers' role'..."
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $roleMember.ObjectId
Write-Output "'$($AssignIdentityName)' service principal added to 'Directory Readers' role'..."

#Write-Output "Dumping service principal '$($AssignIdentityName)':"


#$allDirReaders = Get-AzureADDirectoryRoleMember -ObjectId $role.ObjectId
#$allDirReaders | where{$_.ObjectId -match $roleMember.ObjectId}
} else {
Write-Output "Service principal '$($AssignIdentityName)' is already member of 'Directory Readers'
role'."
}

NOTE
The output from the script above will indicate whether the Directory Readers permission was granted to the identity. You can
re-run the script if you are unsure whether the permission was granted.

For a similar approach on how to set the Directory Readers permission for SQL Managed Instance, see
Provision Azure AD admin (SQL Managed Instance).

Create a service principal (an Azure AD application) in Azure AD


Register your application if you have not already done so. To register an app, you need to either be an Azure AD
admin or a user assigned the Azure AD Application Developer role. For more information about assigning roles,
see Assign administrator and non-administrator roles to users with Azure Active Directory.
Completing an app registration generates and displays an Application ID .
To register your application:
1. In the Azure portal, select Azure Active Directory > App registrations > New registration.

After the app registration is created, the Application ID value is generated and displayed.
2. You'll also need to create a client secret for signing in. Follow the guide here to upload a certificate or
create a secret for signing in.
3. Record the following from your application registration. It should be available from your Overview pane:
Application ID
Tenant ID - This should be the same as before
In this tutorial, we'll be using AppSP as our main service principal, and myapp as the second service principal
user that will be created in Azure SQL by AppSP. You'll need to create two applications, AppSP and myapp.
For more information on how to create an Azure AD application, see the article How to: Use the portal to create
an Azure AD application and service principal that can access resources.
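As a scripted alternative to the portal, the following minimal PowerShell sketch registers the AppSP service principal. The display name comes from this tutorial; how the generated client secret is surfaced depends on your Az module version, so verify the property names in the output:

# Sketch: create the AppSP application and service principal (a client secret is generated for it).
$sp = New-AzADServicePrincipal -DisplayName "AppSP"
$sp | Format-List   # record the application (client) ID and the generated secret from the output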

Create the service principal user in Azure SQL Database


Once a service principal is created in Azure AD, create the user in SQL Database. You'll need to connect to your
SQL Database with a valid login with permissions to create users in the database.
1. Create the user AppSP in the SQL Database using the following T-SQL command:

CREATE USER [AppSP] FROM EXTERNAL PROVIDER


GO

2. Grant db_owner permission to AppSP, which allows the user to create other Azure AD users in the
database.

EXEC sp_addrolemember 'db_owner', [AppSP]


GO

For more information, see sp_addrolemember


Alternatively, ALTER ANY USER permission can be granted instead of giving the db_owner role. This will
allow the service principal to add other Azure AD users.

GRANT ALTER ANY USER TO [AppSp]


GO

NOTE
The above setting is not required when AppSP is set as an Azure AD admin for the server. To set the service
principal as an AD admin for the SQL logical server, you can use the Azure portal, PowerShell, or Azure CLI
commands. For more information, see Provision Azure AD admin (SQL Database).
Create an Azure AD user in SQL Database using an Azure AD service
principal
IMPORTANT
The service principal used to log in to SQL Database must have a client secret. If it doesn't have one, follow step 2 of
Create a service principal (an Azure AD application) in Azure AD. This client secret needs to be added as an input
parameter in the script below.

1. Use the following script to create an Azure AD service principal user myapp using the service principal
AppSP.
Replace <TenantId>with your TenantId gathered earlier.
Replace <ClientId> with your ClientId gathered earlier.
Replace <ClientSecret> with your client secret created earlier.
Replace <server name> with your SQL logical server name. If your server name is
myserver.database.windows.net , replace <server name> with myserver .
Replace <database name> with your SQL Database name.

# PowerShell script for creating a new SQL user called myapp using application AppSP with secret
# AppSP is part of an Azure AD admin for the Azure SQL server below

# Download latest MSAL - https://www.powershellgallery.com/packages/MSAL.PS


Import-Module MSAL.PS

$tenantId = "<TenantId>" # tenantID (Azure Directory ID) were AppSP resides


$clientId = "<ClientId>" # AppID also ClientID for AppSP
$clientSecret = "<ClientSecret>" # Client secret for AppSP
$scopes = "https://database.windows.net/.default" # The end-point

$result = Get-MsalToken -RedirectUri $uri -ClientId $clientId -ClientSecret (ConvertTo-SecureString $clientSecret -AsPlainText -Force) -TenantId $tenantId -Scopes $scopes

$Tok = $result.AccessToken
#Write-host "token"
$Tok

$SQLServerName = "<server name>" # Azure SQL logical server name


$DatabaseName = "<database name>" # Azure SQL database name

Write-Host "Create SQL connection string"


$conn = New-Object System.Data.SqlClient.SQLConnection
$conn.ConnectionString = "Data Source=$SQLServerName.database.windows.net;Initial Catalog=$DatabaseName;Connect Timeout=30"
$conn.AccessToken = $Tok

Write-host "Connect to database and execute SQL script"


$conn.Open()
$ddlstmt = 'CREATE USER [myapp] FROM EXTERNAL PROVIDER;'
Write-host " "
Write-host "SQL DDL command"
$ddlstmt
$command = New-Object -TypeName System.Data.SqlClient.SqlCommand($ddlstmt, $conn)

Write-host "results"
$command.ExecuteNonQuery()
$conn.Close()

Alternatively, you can use the code sample in the blog, Azure AD Service Principal authentication to SQL
DB - Code Sample. Modify the script to execute a DDL statement
CREATE USER [myapp] FROM EXTERNAL PROVIDER . The same script can be used to create a regular Azure AD
user or a group in SQL Database.
2. Check if the user myapp exists in the database by executing the following command:

SELECT name, type, type_desc, CAST(CAST(sid AS varbinary(16)) AS uniqueidentifier) AS appId
FROM sys.database_principals WHERE name = 'myapp'
GO

You should see a similar output:

name    type    type_desc        appId
myapp   E       EXTERNAL_USER    6d228f48-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Next steps
Azure Active Directory service principal with Azure SQL
What are managed identities for Azure resources?
How to use managed identities for App Service and Azure Functions
Azure AD Service Principal authentication to SQL DB - Code Sample
Application and service principal objects in Azure Active Directory
Create an Azure service principal with Azure PowerShell
Directory Readers role in Azure Active Directory for Azure SQL
Rotate the Transparent Data Encryption (TDE)
protector
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics


This article describes key rotation for a server using a TDE protector from Azure Key Vault. Rotating the logical
TDE Protector for a server means switching to a new asymmetric key that protects the databases on the server.
Key rotation is an online operation and should only take a few seconds to complete, because this only decrypts
and re-encrypts the database's data encryption key, not the entire database.

Important considerations when rotating the TDE Protector


When the TDE protector is changed/rotated, old backups of the database, including backed-up log files, are
not updated to use the latest TDE protector. To restore a backup encrypted with a TDE protector from Key
Vault, make sure that the key material is available to the target server. Therefore, we recommend that you
keep all the old versions of the TDE protector in Azure Key Vault (AKV), so database backups can be restored.
Even when switching from customer managed key (CMK) to service-managed key, keep all previously used
keys in AKV. This ensures database backups, including backed-up log files, can be restored with the TDE
protectors stored in AKV.
Apart from old backups, transaction log files might also require access to the older TDE Protector. To
determine if there are any remaining logs that still require the older key, after performing key rotation, use
the sys.dm_db_log_info dynamic management view (DMV). This DMV returns information on the virtual log
files (VLFs) of the transaction log along with the encryption key thumbprint of each VLF.
Older keys need to be kept in AKV and available to the server for as long as the backup retention period
configured in the backup retention policies on the database. This helps ensure any Long Term Retention
(LTR) backups on the server can still be restored using the older keys.

NOTE
A paused dedicated SQL pool in Azure Synapse Analytics must be resumed before key rotations.

IMPORTANT
Do not delete previous versions of the key after a rollover. When keys are rolled over, some data is still encrypted with the
previous keys, such as older database backups, backed-up log files and transaction log files.

NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.

Prerequisites
This how-to guide assumes that you are already using a key from Azure Key Vault as the TDE protector for
Azure SQL Database or Azure Synapse Analytics. See Transparent Data Encryption with BYOK Support.
You must have Azure PowerShell installed and running.
[Recommended but optional] Create the key material for the TDE protector in a hardware security module
(HSM) or local key store first, and import the key material to Azure Key Vault. Follow the instructions for
using a hardware security module (HSM) and Key Vault to learn more.

PowerShell
The Azure CLI

For Az module installation instructions, see Install Azure PowerShell. For specific cmdlets, see AzureRM.Sql.

IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported, but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.

Manual key rotation


Manual key rotation uses the following commands to add a completely new key, which could be under a new
key name or even in another key vault. Using this approach supports adding the same key to different key vaults
to support high-availability and geo-disaster recovery (geo-DR) scenarios.

NOTE
The combined length for the key vault name and key name cannot exceed 94 characters.

PowerShell
The Azure CLI

Use the Add-AzKeyVaultKey, Add-AzSqlServerKeyVaultKey, and Set-AzSqlServerTransparentDataEncryptionProtector cmdlets.

# add a new key to Key Vault
Add-AzKeyVaultKey -VaultName <keyVaultName> -Name <keyVaultKeyName> -Destination <hardwareOrSoftware>

# add the new key from Key Vault to the server
Add-AzSqlServerKeyVaultKey -KeyId <keyVaultKeyId> -ServerName <logicalServerName> -ResourceGroupName <SQLDatabaseResourceGroupName>

# set the key as the TDE protector for all resources under the server
Set-AzSqlServerTransparentDataEncryptionProtector -Type AzureKeyVault -KeyId <keyVaultKeyId> `
    -ServerName <logicalServerName> -ResourceGroupName <SQLDatabaseResourceGroupName>

Switch TDE protector mode


PowerShell
The Azure CLI

To switch the TDE protector from Microsoft-managed to BYOK mode, use the Set-AzSqlServerTransparentDataEncryptionProtector cmdlet:

Set-AzSqlServerTransparentDataEncryptionProtector -Type AzureKeyVault `
    -KeyId <keyVaultKeyId> -ServerName <logicalServerName> -ResourceGroupName <SQLDatabaseResourceGroupName>

To switch the TDE protector from BYOK mode to Microsoft-managed, use the Set-AzSqlServerTransparentDataEncryptionProtector cmdlet:

Set-AzSqlServerTransparentDataEncryptionProtector -Type ServiceManaged `
    -ServerName <logicalServerName> -ResourceGroupName <SQLDatabaseResourceGroupName>
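After switching modes, you can confirm which protector is currently in effect. The following is a minimal verification sketch using the Get-AzSqlServerTransparentDataEncryptionProtector cmdlet; the server and resource group names are placeholders:

# verify the current TDE protector type and key for the server
Get-AzSqlServerTransparentDataEncryptionProtector -ServerName <logicalServerName> `
    -ResourceGroupName <SQLDatabaseResourceGroupName>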

Next steps
In case of a security risk, learn how to remove a potentially compromised TDE protector: Remove a
potentially compromised key.
Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: Turn on TDE using
your own key from Key Vault using PowerShell.
Remove a Transparent Data Encryption (TDE)
protector using PowerShell
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure Synapse Analytics


This topic describes how to respond to a potentially compromised TDE protector for Azure SQL Database or Azure
Synapse Analytics that is using TDE with customer-managed keys in Azure Key Vault - Bring Your Own Key
(BYOK) support. To learn more about BYOK support for TDE, see the overview page.
Caution

The procedures outlined in this article should only be done in extreme cases or in test environments. Review the
steps carefully, as deleting actively used TDE protectors from Azure Key Vault will result in the database becoming
unavailable.
If a key is ever suspected to be compromised, such that a service or user had unauthorized access to the key, it's
best to delete the key.
Keep in mind that once the TDE protector is deleted in Key Vault, within up to 10 minutes all encrypted databases
will start denying all connections with the corresponding error message and change their state to Inaccessible.
This how-to guide covers how to render databases inaccessible as part of the response to a compromised key.

NOTE
This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL
pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse
workspaces, see Azure Synapse Analytics encryption.

Prerequisites
You must have an Azure subscription and be an administrator on that subscription
You must have Azure PowerShell installed and running.
This how-to guide assumes that you are already using a key from Azure Key Vault as the TDE protector for an
Azure SQL Database or Azure Synapse. See Transparent Data Encryption with BYOK Support to learn more.

PowerShell
The Azure CLI

For Az module installation instructions, see Install Azure PowerShell. For specific cmdlets, see AzureRM.Sql.

IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.
Check TDE Protector thumbprints
The following steps outline how to check the TDE Protector thumbprints still in use by Virtual Log Files (VLF) of
a given database. The thumbprint of the current TDE protector of the database, and the database ID can be
found by running:

SELECT [database_id],
[encryption_state],
[encryptor_type], /*asymmetric key means AKV, certificate means service-managed keys*/
[encryptor_thumbprint]
FROM [sys].[dm_database_encryption_keys]

The following query returns the VLFs and the respective thumbprints of the TDE Protectors in use. Each different
thumbprint refers to a different key in Azure Key Vault (AKV):

SELECT * FROM sys.dm_db_log_info (database_id)

Alternatively, you can use PowerShell or the Azure CLI:


PowerShell
The Azure CLI

The PowerShell command Get-AzSqlServerKeyVaultKey provides the thumbprint of the TDE Protector
used in the query, so you can see which keys to keep and which keys to delete in AKV. Only keys no longer used
by the database can be safely deleted from Azure Key Vault.
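As a minimal sketch with placeholder names (the exact property names on the returned key objects may vary, so inspect the output in your environment), you can list the keys registered on the server and compare their thumbprints against the thumbprints returned by sys.dm_db_log_info:

# list the Key Vault keys registered on the server; the output includes each key's thumbprint
Get-AzSqlServerKeyVaultKey -ResourceGroupName <SQLDatabaseResourceGroupName> -ServerName <LogicalServerName>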

Keep encrypted resources accessible


PowerShell
The Azure CLI

1. Create a new key in Key Vault. Make sure this new key is created in a separate key vault from the
potentially compromised TDE protector, since access control is provisioned on a vault level.
2. Add the new key to the server using the Add-AzSqlServerKeyVaultKey and Set-
AzSqlServerTransparentDataEncryptionProtector cmdlets and update it as the server's new TDE protector.

# add the key from Key Vault to the server
Add-AzSqlServerKeyVaultKey -ResourceGroupName <SQLDatabaseResourceGroupName> -ServerName <LogicalServerName> `
    -KeyId <KeyVaultKeyId>

# set the key as the TDE protector for all resources under the server
Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName <SQLDatabaseResourceGroupName> `
    -ServerName <LogicalServerName> -Type AzureKeyVault -KeyId <KeyVaultKeyId>

3. Make sure the server and any replicas have updated to the new TDE protector using the Get-AzSqlServerTransparentDataEncryptionProtector cmdlet.

NOTE
It may take a few minutes for the new TDE protector to propagate to all databases and secondary databases under the server.

Get-AzSqlServerTransparentDataEncryptionProtector -ServerName <LogicalServerName> `
    -ResourceGroupName <SQLDatabaseResourceGroupName>

4. Take a backup of the new key in Key Vault.

# -OutputFile parameter is optional; if removed, a file name is automatically generated.
Backup-AzKeyVaultKey -VaultName <KeyVaultName> -Name <KeyVaultKeyName> -OutputFile <DesiredBackupFilePath>

5. Delete the compromised key from Key Vault using the Remove-AzKeyVaultKey cmdlet.

Remove-AzKeyVaultKey -VaultName <KeyVaultName> -Name <KeyVaultKeyName>

6. To restore a key to Key Vault in the future, use the Restore-AzKeyVaultKey cmdlet:

Restore-AzKeyVaultKey -VaultName <KeyVaultName> -InputFile <BackupFilePath>

Make encrypted resources inaccessible


1. Drop the databases that are encrypted with the potentially compromised key.
The database and log files are automatically backed up, so a point-in-time restore of the database can be
done at any point (as long as you provide the key). The databases must be dropped before deletion of an
active TDE protector to prevent potential data loss of up to 10 minutes of the most recent transactions.
2. Back up the key material of the TDE protector in Key Vault.
3. Remove the potentially compromised key from Key Vault. (A PowerShell sketch of these steps follows this list.)
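The following is a hedged PowerShell sketch of these three steps; all server, database, vault, key, and file names are placeholders:

# 1. drop the database encrypted with the potentially compromised key
Remove-AzSqlDatabase -ResourceGroupName <SQLDatabaseResourceGroupName> -ServerName <LogicalServerName> `
    -DatabaseName <DatabaseName>

# 2. back up the key material of the TDE protector in Key Vault
Backup-AzKeyVaultKey -VaultName <KeyVaultName> -Name <KeyVaultKeyName> -OutputFile <DesiredBackupFilePath>

# 3. remove the potentially compromised key from Key Vault
Remove-AzKeyVaultKey -VaultName <KeyVaultName> -Name <KeyVaultKeyName>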

NOTE
It may take around 10 minutes for any permission changes to take effect for the key vault. This includes revoking access
permissions to the TDE protector in AKV, and users within this time frame may still have access permissions.

Next steps
Learn how to rotate the TDE protector of a server to comply with security requirements: Rotate the
Transparent Data Encryption protector Using PowerShell
Get started with Bring Your Own Key support for TDE: Turn on TDE using your own key from Key Vault using
PowerShell
Tutorial: Set up SQL Data Sync between databases
in Azure SQL Database and SQL Server
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, you learn how to set up SQL Data Sync by creating a sync group that contains both Azure SQL
Database and SQL Server instances. The sync group is custom configured and synchronizes on the schedule you
set.
The tutorial assumes you have at least some prior experience with SQL Database and SQL Server.
For an overview of SQL Data Sync, see Sync data across cloud and on-premises databases with SQL Data Sync.
For PowerShell examples on how to configure SQL Data Sync, see How to sync between databases in SQL
Database or between databases in Azure SQL Database and SQL Server

IMPORTANT
SQL Data Sync does not support Azure SQL Managed Instance at this time.

Create sync group


1. Go to the Azure portal to find your database in SQL Database. Search for and select SQL databases .

2. Select the database you want to use as the hub database for Data Sync.
NOTE
The hub database is a sync topology's central endpoint, in which a sync group has multiple database endpoints.
All other member databases with endpoints in the sync group, sync with the hub database.

3. On the SQL database menu for the selected database, select Sync to other databases .
4. On the Sync to other databases page, select New Sync Group . The New sync group page opens
with Create sync group .
On the Create Data Sync Group page, change the following settings:

Sync Group Name: Enter a name for the new sync group. This name is distinct from the name of the database itself.

Sync Metadata Database: Choose to create a database (recommended) or to use an existing database. If you choose New database, select Create new database. Then on the SQL Database page, name and configure the new database and select OK. If you choose Use existing database, select the database from the list.

Automatic Sync: Select On or Off. If you choose On, enter a number and select Seconds, Minutes, Hours, or Days in the Sync Frequency section. The first sync begins after the selected interval period elapses from the time the configuration is saved.

Conflict Resolution: Select Hub win or Member win. Hub win means when conflicts occur, data in the hub database overwrites conflicting data in the member database. Member win means when conflicts occur, data in the member database overwrites conflicting data in the hub database.

Use private link: Choose a service managed private endpoint to establish a secure connection between the sync service and the hub database.

NOTE
Microsoft recommends creating a new, empty database for use as the Sync Metadata Database. Data Sync
creates tables in this database and runs a frequent workload. This database is shared as the Sync Metadata
Database for all sync groups in a selected region and subscription. You can't change the database or its name
without removing all sync groups and sync agents in the region. Additionally, an Elastic jobs database cannot be
used as the SQL Data Sync Metadata database and vice versa.

Select OK and wait for the sync group to be created and deployed.
5. On the New Sync Group page, if you selected Use private link , you will need to approve the private
endpoint connection. The link in the info message will take you to the private endpoint connections
experience where you can approve the connection.

NOTE
The private links for the sync group and the sync members need to be created, approved, and disabled separately.

Add sync members


After the new sync group is created and deployed, open the sync group and access the Databases page, where
you will select sync members.
NOTE
To update or insert the username and password to your hub database, go to the Hub Database section in the Select
sync members page.

To add a database in Azure SQL Database


In the Select sync members section, optionally add a database in Azure SQL Database to the sync group by
selecting Add an Azure Database . The Configure Azure Database page opens.

On the Configure Azure SQL Database page, change the following settings:

Sync Member Name: Provide a name for the new sync member. This name is distinct from the database name itself.

Subscription: Select the associated Azure subscription for billing purposes.

Azure SQL Server: Select the existing server.

Azure SQL Database: Select the existing database in SQL Database.

Sync Directions: Select Bi-directional Sync, To the Hub, or From the Hub.

Username and Password: Enter the existing credentials for the server on which the member database is located. Don't enter new credentials in this section.

Use private link: Choose a service managed private endpoint to establish a secure connection between the sync service and the member database.

Select OK and wait for the new sync member to be created and deployed.

To add a SQL Server database


In the Member Database section, optionally add a SQL Server database to the sync group by selecting Add an
On-Premises Database . The Configure On-Premises page opens where you can do the following things:
1. Select Choose the Sync Agent Gateway . The Select Sync Agent page opens.

2. On the Choose the Sync Agent page, choose whether to use an existing agent or create an agent.
If you choose Existing agents , select the existing agent from the list.
If you choose Create a new agent , do the following things:
a. Download the data sync agent from the link provided and install it on the computer where the SQL
Server is located. You can also download the agent directly from Azure SQL Data Sync Agent.
IMPORTANT
You have to open outbound TCP port 1433 in the firewall to let the client agent communicate with the
server.

b. Enter a name for the agent.


c. Select Create and Generate Key and copy the agent key to the clipboard.
d. Select OK to close the Select Sync Agent page.
3. On the SQL Server computer, locate and run the Client Sync Agent app.

a. In the sync agent app, select Submit Agent Key . The Sync Metadata Database Configuration
dialog box opens.
b. In the Sync Metadata Database Configuration dialog box, paste in the agent key copied from
the Azure portal. Also provide the existing credentials for the server on which the metadata
database is located. (If you created a metadata database, this database is on the same server as the
hub database.) Select OK and wait for the configuration to finish.

NOTE
If you get a firewall error, create a firewall rule on Azure to allow incoming traffic from the SQL Server
computer. You can create the rule manually in the portal or in SQL Server Management Studio (SSMS). In
SSMS, connect to the hub database on Azure by entering its name as
<hub_database_name>.database.windows.net.

c. Select Register to register a SQL Server database with the agent. The SQL Server
Configuration dialog box opens.
d. In the SQL Server Configuration dialog box, choose to connect using SQL Server
authentication or Windows authentication. If you choose SQL Server authentication, enter the
existing credentials. Provide the SQL Server name and the name of the database that you want to
sync and select Test connection to test your settings. Then select Save and the registered
database appears in the list.

e. Close the Client Sync Agent app.


4. In the portal, on the Configure On-Premises page, select Select the Database .
5. On the Select Database page, in the Sync Member Name field, provide a name for the new sync
member. This name is distinct from the name of the database itself. Select the database from the list. In
the Sync Directions field, select Bi-directional Sync , To the Hub , or From the Hub .
6. Select OK to close the Select Database page. Then select OK to close the Configure On-Premises
page and wait for the new sync member to be created and deployed. Finally, select OK to close the Select
sync members page.

NOTE
To connect to SQL Data Sync and the local agent, add your user name to the role DataSync_Executor. Data Sync creates
this role on the SQL Server instance.

Configure sync group


After the new sync group members are created and deployed, go to the Tables section in the Database Sync
Group page.
1. On the Tables page, select a database from the list of sync group members and select Refresh schema .
Expect a delay of a few minutes for the schema to refresh; the delay might be a few minutes longer if you are
using private link.
2. From the list, select the tables you want to sync. By default, all columns are selected, so disable the
checkbox for the columns you don't want to sync. Be sure to leave the primary key column selected.
3. Select Save .
4. By default, databases are not synced until scheduled or manually run. To run a manual sync, navigate to
your database in SQL Database in the Azure portal, select Sync to other databases , and select the sync
group. The Data Sync page opens. Select Sync .
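If you prefer to script the manual sync rather than use the portal, the Az.Sql module includes Data Sync cmdlets. The following is a minimal, hedged sketch; the Start-AzSqlSyncGroupSync cmdlet and its parameter names are assumptions here, so verify them against the PowerShell how-to linked at the start of this tutorial before relying on them:

# trigger a sync run for the hub database's sync group (cmdlet and parameter names are assumptions; all values are placeholders)
Start-AzSqlSyncGroupSync -ResourceGroupName <ResourceGroupName> -ServerName <HubServerName> `
    -DatabaseName <HubDatabaseName> -SyncGroupName <SyncGroupName>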
FAQ
Does SQL Data Sync fully create tables?
If sync schema tables are missing in the destination database, SQL Data Sync creates them with the columns you
selected. However, this doesn't result in a full-fidelity schema for the following reasons:
Only columns you select are created in the destination table. Columns not selected are ignored.
Only selected column indexes are created in the destination table. For columns not selected, those indexes
are ignored.
Indexes on XML type columns aren't created.
CHECK constraints aren't created.
Triggers on the source tables aren't created.
Views and stored procedures aren't created.
Because of these limitations, we recommend the following things:
For production environments, create the full-fidelity schema yourself.
When experimenting with the service, use the auto-provisioning feature.
Why do I see tables I didn't create?
Data Sync creates additional tables in the database for change tracking. Don't delete these or Data Sync stops
working.
Is my data convergent after a sync?
Not necessarily. Take a sync group with a hub and three spokes (A, B, and C) where synchronizations are Hub to
A, Hub to B, and Hub to C. If a change is made to database A after the Hub to A sync, that change isn't written to
database B or database C until the next sync task.
How do I get schema changes into a sync group?
Make and propagate all schema changes manually.
1. Replicate the schema changes manually to the hub and to all sync members.
2. Update the sync schema.
For adding new tables and columns:
New tables and columns don't impact the current sync and Data Sync ignores them until they're added to the
sync schema. When adding new database objects, follow the sequence:
1. Add new tables or columns to the hub and to all sync members.
2. Add new tables or columns to the sync schema.
3. Begin inserting values into the new tables and columns.
For changing the data type of a column:
When you change the data type of an existing column, Data Sync continues to work as long as the new values fit
the original data type defined in the sync schema. For example, if you change the type in the source database
from int to bigint , Data Sync continues to work until you insert a value too large for the int data type. To
complete the change, replicate the schema change manually to the hub and to all sync members, then update
the sync schema.
How can I export and import a database with Data Sync?
After you export a database as a .bacpac file and import the file to create a database, do the following to use
Data Sync in the new database:
1. Clean up the Data Sync objects and additional tables on the new database by using this script. The script
deletes all the required Data Sync objects from the database.
2. Recreate the sync group with the new database. If you no longer need the old sync group, delete it.
Where can I find information on the client agent?
For frequently asked questions about the client agent, see Agent FAQ.
Is it necessary to manually approve the link before I can start using it?
Yes, you must manually approve the service managed private endpoint, in the Private endpoint connections
page of the Azure portal during the sync group deployment or by using PowerShell.
Why do I get a firewall error when the Sync job is provisioning my Azure database?
This may happen because Azure resources are not allowed to access your server. Ensure that the firewall on the
Azure database has the "Allow Azure services and resources to access this server" setting set to Yes.

Next steps
Congratulations. You've created a sync group that includes both a SQL Database instance and a SQL Server
database.
For more info about SQL Data Sync, see:
Data Sync Agent for Azure SQL Data Sync
Best practices and How to troubleshoot issues with Azure SQL Data Sync
Monitor SQL Data Sync with Azure Monitor logs
Update the sync schema with Transact-SQL or PowerShell
For more info about SQL Database, see:
SQL Database Overview
Database Lifecycle Management
How to migrate your SQLite database to Azure
SQL Database serverless
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


For many people, SQLite provides their first experience of databases and SQL programming. Its inclusion in
many operating systems and popular applications makes SQLite one of the most widely deployed and used
database engines in the world. And because it is likely the first database engine many people use, it can often
end up as a central part of projects or applications. In such cases where the project or application outgrows the
initial SQLite implementation, developers may need to migrate their data to a reliable, centralized data store.
Azure SQL Database Serverless is a compute tier for single databases that automatically scales compute based
on workload demand, and bills for the amount of compute used per second. The serverless compute tier also
automatically pauses databases during inactive periods when only storage is billed and automatically resumes
databases when activity returns.
Once you have followed the steps below, your database will be migrated into Azure SQL Database Serverless,
enabling you to make your database available to other users or applications in the cloud and only pay for what
you use, with minimal application code changes.

Prerequisites
An Azure Subscription
SQLite2 or SQLite3 database that you wish to migrate
A Windows environment
If you do not have a local Windows environment, you can use a Windows VM in Azure for the
migration. Move and make your SQLite database file available on the VM using Azure Files and
Storage Explorer.

Steps
1. Provision a new Azure SQL Database in the Serverless compute tier (a PowerShell sketch of this step follows the end of this list).
2. Ensure you have your SQLite database file available in your Windows environment. Install a SQLite ODBC
Driver if you do not already have one (there are many available in Open Source, for example,
http://www.ch-werner.de/sqliteodbc/).
3. Create a System DSN for the database. Ensure you use the Data Source Administrator application that
matches your system architecture (32-bit vs 64-bit). You can find which version you are running in your
system settings.
Open ODBC Data Source Administrator in your environment.
Click the System DSN tab and click "Add"
Select the SQLite ODBC connector you installed and give the connection a meaningful name, for
example, sqlitemigrationsource
Set the database name to the .db file
Save and exit
4. Download and install the self-hosted integration runtime. The easiest way to do this is the Express install
option, as detailed in the documentation. If you opt for a manual install, you will need to provide the
application with an authentication key, which can be located in your Data Factory instance by:
Starting up ADF (Author and Monitor from the service in the Azure portal)
Click the "Author" tab (Blue pencil) on the left
Click Connections (bottom left), then Integration runtimes
Add new Self-Hosted Integration Runtime, give it a name, select Option 2.
5. Create a new linked service for the source SQLite database in your Data Factory.
6. In Connections , under Linked Ser vice , click New .
7. Search for and select the "ODBC" connector
8. Give the linked service a meaningful name, for example, "sqlite_odbc". Select your integration runtime
from the "Connect via integration runtime" dropdown. Enter the below into the connection string,
replacing the Initial Catalog variable with the filepath for the .db file, and the DSN with the name of the
system DSN connection:

Connection string: Provider=MSDASQL.1;Persist Security Info=False;Mode=ReadWrite;Initial Catalog=C:\sqlitemigrationsource.db;DSN=sqlitemigrationsource

9. Set the authentication type to Anonymous


10. Test the connection

11. Create another linked service for your Serverless SQL target. Select the database using the linked service
wizard, and provide the SQL authentication credentials.
12. Extract the CREATE TABLE statements from your SQLite database. You can do this by executing the below
Python script on your database file.

#!/usr/bin/python
import sqlite3

# connect to the SQLite database file to be migrated
conn = sqlite3.connect("sqlitemigrationsource.db")
c = conn.cursor()

print("Starting extract job..")

# write the CREATE TABLE statement for every user table to CreateTables.sql
with open('CreateTables.sql', 'w') as f:
    for tabledetails in c.execute("SELECT * FROM sqlite_master WHERE type='table'"):
        print("Extracting CREATE statement for " + str(tabledetails[1]))
        print(tabledetails)
        f.write(str(tabledetails[4]).replace('\n', '') + ';\n')
c.close()

13. Create the landing tables in your Serverless SQL target environment by copying the CREATE table
statements from the CreateTables.sql file and running the SQL statements in the Query Editor in the
Azure portal.
14. Return to the home screen of your Data Factory and click "Copy Data" to run through the job creation
wizard.
15. Select all tables from the source SQLite database using the check boxes, and map them to the target
tables in Azure SQL. Once the job has run, you have successfully migrated your data from SQLite to
Azure SQL!
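As a scripted alternative to step 1, you can provision the serverless database with PowerShell. This is a hedged sketch only: the resource names are placeholders, and the serverless-related parameters (-ComputeModel, -MinimumCapacity, -AutoPauseDelayInMinutes) are assumptions about the Az.Sql New-AzSqlDatabase cmdlet, so check them against the serverless documentation before use:

# create a General Purpose serverless database on an existing server (placeholder names and sizes)
New-AzSqlDatabase -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -DatabaseName <DatabaseName> `
    -Edition GeneralPurpose -ComputeModel Serverless -ComputeGeneration Gen5 `
    -VCore 2 -MinimumCapacity 0.5 -AutoPauseDelayInMinutes 60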

Next steps
To get started, see Quickstart: Create a single database in Azure SQL Database using the Azure portal.
For resource limits, see Serverless compute tier resource limits.
Configure isolated access to a Hyperscale named
replica
7/12/2022 • 4 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article describes the procedure to grant access to an Azure SQL Hyperscale named replica without granting
access to the primary replica or other named replicas. This scenario allows resource and security isolation of a
named replica - as the named replica will be running using its own compute node - and it is useful whenever
isolated read-only access to an Azure SQL Hyperscale database is needed. Isolated, in this context, means that
CPU and memory are not shared between the primary and the named replica, queries running on the named
replica do not use compute resources of the primary or of any other replicas, and principals accessing the
named replica cannot access other replicas, including the primary.

Create a login in the master database on the primary server


In the master database on the logical server hosting the primary database, execute the following to create a
new login. Use your own strong and unique password.

create login [third-party-login] with password = 'Just4STRONG_PAZzW0rd!';

Retrieve the SID hexadecimal value for the created login from the sys.sql_logins system view:

select sid from sys.sql_logins where name = 'third-party-login';

Disable the login. This will prevent this login from accessing any database on the server hosting the primary
replica.

alter login [third-party-login] disable;

Create a user in the primary read-write database


Once the login has been created, connect to the primary read-write replica of your database, for example
WideWorldImporters (you can find a sample script to restore it here: Restore Database in Azure SQL) and create
a database user for that login:

create user [third-party-user] from login [third-party-login];

As an optional step, once the database user has been created, you can drop the server login created in the
previous step if there are concerns about the login getting re-enabled in any way. Connect to the master
database on the logical server hosting the primary database, and execute the following:

drop login [third-party-login];

Create a named replica on a different logical server


Create a new Azure SQL logical server that will be used to isolate access to the named replica. Follow the
instructions available at Create and manage servers and single databases in Azure SQL Database. To create a
named replica, this server must be in the same Azure region as the server hosting the primary replica.
Using, for example, AZ CLI:

az sql server create -g MyResourceGroup -n MyNamedReplicaServer -l MyLocation --admin-user MyAdminUser --admin-password MyStrongADM1NPassw0rd!

Then, create a named replica for the primary database on this server. For example, using AZ CLI:

az sql db replica create -g MyResourceGroup -n WideWorldImporters -s MyPrimaryServer --secondary-type Named --partner-database WideWorldImporters_NR --partner-server MyNamedReplicaServer

Create a login in the master database on the named replica server


Connect to the master database on the logical server hosting the named replica, created in the previous step.
Add the login using the SID retrieved from the primary replica:

create login [third-party-login] with password = 'Just4STRONG_PAZzW0rd!', sid = 0x0...1234;

At this point, users and applications using third-party-login can connect to the named replica, but not to the
primary replica.

Grant object-level permissions within the database


Once you have set up login authentication as described, you can use regular GRANT , DENY and REVOKE
statements to manage authorization, or object-level permissions within the database. In these statements,
reference the name of the user you created in the database, or a database role that includes this user as a
member. Remember to execute these commands on the primary replica. The changes will propagate to all
secondary replicas; however, they will only be effective on the named replica where the server-level login was
created.
Remember that by default a newly created user has a minimal set of permissions granted (for example, it cannot
access any user tables). If you want to allow third-party-user to read data in a table, you need to explicitly
grant the SELECT permission:

grant select on [Application].[Cities] to [third-party-user];

As an alternative to granting permissions individually on every table, you can add the user to the
db_datareader database role to allow read access to all tables, or you can use schemas to allow access to all
existing and new tables in a schema.

Test access
You can test this configuration by using any client tool and attempt to connect to the primary and the named
replica. For example, using sqlcmd , you can try to connect to the primary replica using the third-party-login
user:

sqlcmd -S MyPrimaryServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters
This will result in an error as the user is not allowed to connect to the server:

Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user 'third-party-login'. Reason:
The account is disabled.

The attempt to connect to the named replica succeeds:

sqlcmd -S MyNamedReplicaServer.database.windows.net -U third-party-login -P Just4STRONG_PAZzW0rd! -d WideWorldImporters_NR

No errors are returned, and queries can be executed on the named replica as allowed by granted object-level
permissions.
For more information:
Azure SQL logical Servers, see What is a server in Azure SQL Database
Managing database access and logins, see SQL Database security: Manage database access and login security
Database engine permissions, see Permissions
Granting object permissions, see GRANT Object Permissions
What is a single database in Azure SQL Database?
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The single database resource type creates a database in Azure SQL Database with its own set of resources and is
managed via a server. With a single database, each database is isolated, using a dedicated database engine. Each
has its own service tier within the DTU-based purchasing model or vCore-based purchasing model and a
compute size defining the resources allocated to the database engine.
Single database is a deployment model for Azure SQL Database. The other is elastic pools.

Dynamic scalability
You can build your first app on a small, single database at low cost in the serverless compute tier or a small
compute size in the provisioned compute tier. You change the compute or service tier manually or
programmatically at any time to meet the needs of your solution. You can adjust performance without
downtime to your app or to your customers. Dynamic scalability enables your database to transparently
respond to rapidly changing resource requirements and enables you to only pay for the resources that you need
when you need them.
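For example, a single database can be scaled programmatically with the Set-AzSqlDatabase cmdlet. The snippet below is a minimal sketch with placeholder names, moving a database to the Standard S3 service objective in the DTU-based model:

# scale an existing single database to a different service objective (placeholder names)
Set-AzSqlDatabase -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -DatabaseName <DatabaseName> `
    -Edition "Standard" -RequestedServiceObjectiveName "S3"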

Single databases and elastic pools


A single database can be moved into or out of an elastic pool for resource sharing. For many businesses and
applications, being able to create single databases and dial performance up or down on demand is enough,
especially if usage patterns are relatively predictable. But if you have unpredictable usage patterns, it can make it
hard to manage costs and your business model. Elastic pools are designed to solve this problem. The concept is
simple. You allocate performance resources to a pool rather than an individual database and pay for the
collective performance resources of the pool rather than for single database performance.

Monitoring and alerting


You use the built-in performance monitoring and alerting tools, combined with the performance ratings. Using
these tools, you can quickly assess the impact of scaling up or down based on your current or projected
performance needs. Additionally, SQL Database can emit metrics and resource logs for easier monitoring.

Availability capabilities
Single databases and elastic pools provide many availability characteristics. For information, see Availability
characteristics.

Transact-SQL differences
Most Transact-SQL features that applications use are fully supported in both Microsoft SQL Server and Azure
SQL Database. For example, the core SQL components such as data types, operators, string, arithmetic, logical,
and cursor functions, work identically in SQL Server and SQL Database. There are, however, a few T-SQL
differences in DDL (data-definition language) and DML (data manipulation language) elements resulting in T-
SQL statements and queries that are only partially supported (which we discuss later in this article).
In addition, there are some features and syntax that are not supported because Azure SQL Database is designed
to isolate features from dependencies on the master database and the operating system. As such, most server-
level activities are inappropriate for SQL Database. T-SQL statements and options are not available if they
configure server-level options, configure operating system components, or specify file system configuration.
When such capabilities are required, an appropriate alternative is often available in some other way from SQL
Database or from another Azure feature or service.
For more information, see Resolving Transact-SQL differences during migration to SQL Database.

Security
SQL Database provides a range of built-in security and compliance features to help your application meet
various security and compliance requirements.

IMPORTANT
Azure SQL Database has been certified against a number of compliance standards. For more information, see the
Microsoft Azure Trust Center, where you can find the most current list of SQL Database compliance certifications.

Next steps
To quickly get started with a single database, start with the Single database quickstart guide.
To learn about migrating a SQL Server database to Azure, see Migrate to Azure SQL Database.
For information about supported features, see Features.
Elastic pools help you manage and scale multiple
databases in Azure SQL Database
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple
databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single
server and share a set number of resources at a set price. Elastic pools in SQL Database enable software as a
service (SaaS) developers to optimize the price performance for a group of databases within a prescribed
budget while delivering performance elasticity for each database.

What are SQL elastic pools?


SaaS developers build applications on top of large-scale data tiers that consist of multiple databases. A common
application pattern is to provision a single database for each customer. But different customers often have
varying and unpredictable usage patterns, and it's difficult to predict the resource requirements of each
individual database user. Traditionally, you had two options:
Overprovision resources based on peak usage and overpay.
Underprovision to save cost, at the expense of performance and customer satisfaction during peaks.
Elastic pools solve this problem by ensuring that databases get the performance resources they need when they
need it. They provide a simple resource allocation mechanism within a predictable budget. To learn more about
design patterns for SaaS applications by using elastic pools, see Design patterns for multitenant SaaS
applications with SQL Database.

IMPORTANT
There's no per-database charge for elastic pools. You're billed for each hour a pool exists at the highest eDTU or vCores,
regardless of usage or whether the pool was active for less than an hour.

Elastic pools enable you to purchase resources for a pool shared by multiple databases to accommodate
unpredictable periods of usage by individual databases. You can configure resources for the pool based either
on the DTU-based purchasing model or the vCore-based purchasing model. The resource requirement for a
pool is determined by the aggregate utilization of its databases.
The amount of resources available to the pool is controlled by your budget. All you have to do is:
Add databases to the pool.
Optionally set the minimum and maximum resources for the databases. These resources are either minimum
and maximum DTUs or minimum and maximum vCores, depending on your choice of resourcing model.
Set the resources of the pool based on your budget.
You can use pools to seamlessly grow your service from a lean startup to a mature business at ever-increasing
scale.
Within the pool, individual databases are given the flexibility to use resources within set parameters. Under
heavy load, a database can consume more resources to meet demand. Databases under light loads consume
less, and databases under no load consume no resources. Provisioning resources for the entire pool rather than
for single databases simplifies your management tasks. Plus, you have a predictable budget for the pool.
More resources can be added to an existing pool with minimum downtime. If extra resources are no longer
needed, they can be removed from an existing pool at any time. You can also add or remove databases from the
pool. If a database is predictably underutilizing resources, you can move it out.

NOTE
When you move databases into or out of an elastic pool, there's no downtime except for a brief period (on the order of
seconds) at the end of the operation when database connections are dropped.

When should you consider a SQL Database elastic pool?


Pools are well suited for a large number of databases with specific utilization patterns. For a given database, this
pattern is characterized by low average utilization with infrequent utilization spikes. Conversely, multiple
databases with persistent medium-high utilization shouldn't be placed in the same elastic pool.
The more databases you can add to a pool, the greater your savings become. Depending on your application
utilization pattern, it's possible to see savings with as few as two S3 databases.
The following sections help you understand how to assess if your specific collection of databases can benefit
from being in a pool. The examples use Standard pools, but the same principles also apply to Basic and Premium
pools.
Assess database utilization patterns
The following figure shows an example of a database that spends much of its time idle but also periodically
spikes with activity. This utilization pattern is suited for a pool.

The chart illustrates DTU usage over one hour from 12:00 to 1:00 where each data point has one-minute
granularity. At 12:10, DB1 peaks up to 90 DTUs, but its overall average usage is less than five DTUs. An S3
compute size is required to run this workload in a single database, but this size leaves most of the resources
unused during periods of low activity.
A pool allows these unused DTUs to be shared across multiple databases. A pool reduces the DTUs needed and
the overall cost.
Building on the previous example, suppose there are other databases with similar utilization patterns as DB1. In
the next two figures, the utilization of four databases and 20 databases are layered onto the same graph to
illustrate the nonoverlapping nature of their utilization over time by using the DTU-based purchasing model:

The aggregate DTU utilization across all 20 databases is illustrated by the black line in the preceding chart. This
line shows that the aggregate DTU utilization never exceeds 100 DTUs and indicates that the 20 databases can
share 100 eDTUs over this time period. The result is a 20x reduction in DTUs and a 13x price reduction
compared to placing each of the databases in S3 compute sizes for single databases.
This example is ideal because:
There are large differences between peak utilization and average utilization per database.
The peak utilization for each database occurs at different points in time.
eDTUs are shared between many databases.
In the DTU purchasing model, the price of a pool is a function of the pool eDTUs. While the eDTU unit price for a
pool is 1.5 times greater than the DTU unit price for a single database, pool eDTUs can be shared by many
databases and fewer total eDTUs are needed. These distinctions in pricing and eDTU sharing are the basis of the
price savings potential that pools can provide.
In the vCore purchasing model, the vCore unit price for elastic pools is the same as the vCore unit price for
single databases.
How do I choose the correct pool size?
The best size for a pool depends on the aggregate resources needed for all databases in the pool. You need to
determine:
Maximum compute resources utilized by all databases in the pool. Compute resources are indexed by either
eDTUs or vCores depending on your choice of purchasing model.
Maximum storage bytes utilized by all databases in the pool.
For service tiers and resource limits in each purchasing model, see the DTU-based purchasing model or the
vCore-based purchasing model.
The following steps can help you estimate whether a pool is more cost-effective than single databases (a worked numeric example follows these steps):
1. Estimate the eDTUs or vCores needed for the pool:
For the DTU-based purchasing model:
MAX(<Total number of DBs × Average DTU utilization per DB>, <Number of concurrently
peaking DBs × Peak DTU utilization per DB>)
For the vCore-based purchasing model:
MAX(<Total number of DBs × Average vCore utilization per DB>, <Number of concurrently
peaking DBs × Peak vCore utilization per DB>)
2. Estimate the total storage space needed for the pool by adding the data size needed for all the databases in
the pool. For the DTU purchasing model, determine the eDTU pool size that provides this amount of storage.
3. For the DTU-based purchasing model, take the larger of the eDTU estimates from step 1 and step 2. For the
vCore-based purchasing model, take the vCore estimate from step 1.
4. See the SQL Database pricing page and find the smallest pool size that's greater than the estimate from step
3.
5. Compare the pool price from step 4 to the price of using the appropriate compute sizes for single databases.
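As a worked example of step 1 with hypothetical numbers, suppose a pool will hold 20 databases that average 5 DTUs each, with at most 4 databases peaking concurrently at 90 DTUs each:

# hypothetical sizing numbers; substitute your own measurements
$totalDbs = 20; $avgDtuPerDb = 5
$peakingDbs = 4; $peakDtuPerDb = 90
# MAX(total x average, concurrently peaking x peak) = MAX(100, 360) = 360
$eDtuEstimate = [Math]::Max($totalDbs * $avgDtuPerDb, $peakingDbs * $peakDtuPerDb)

In this case the estimate is 360 eDTUs, so in step 4 you would pick the smallest available pool size that is at least 360 eDTUs and, in step 5, compare its price against the price of running each database at the compute size it would need on its own.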

IMPORTANT
If the number of databases in a pool approaches the maximum supported, make sure to consider resource management
in dense elastic pools.

Per-database properties
You can optionally set per-database properties to modify resource consumption patterns in elastic pools. For
more information, see resource limits documentation for DTU and vCore elastic pools.

Use other SQL Database features with elastic pools


You can use other SQL Database features with elastic pools.
Elastic jobs and elastic pools
With a pool, management tasks are simplified by running scripts in elastic jobs. An elastic job eliminates most of
the tedium associated with large numbers of databases.
For more information about other database tools for working with multiple databases, see Scaling out with SQL
Database.
Business continuity options for databases in an elastic pool
Pooled databases generally support the same business-continuity features that are available to single databases:
Point-in-time restore : Point-in-time restore uses automatic database backups to recover a database in a
pool to a specific point in time. See Point-in-time restore.
Geo-restore : Geo-restore provides the default recovery option when a database is unavailable because of
an incident in the region where the database is hosted. See Restore a SQL database or fail over to a
secondary.
Active geo-replication : For applications that have more aggressive recovery requirements than geo-
restore can offer, configure active geo-replication or an auto-failover group.

Create a new SQL Database elastic pool by using the Azure portal
You can create an elastic pool in the Azure portal in two ways:
Create an elastic pool and select an existing or new server.
Create an elastic pool from an existing server.
To create an elastic pool and select an existing or new server:
1. Go to the Azure portal to create an elastic pool. Search for and select Azure SQL .
2. Select Create to open the Select SQL deployment option pane. To view more information about
elastic pools, on the Databases tile, select Show details .
3. On the Databases tile, in the Resource type dropdown, select Elastic pool . Then select Create .

To create an elastic pool from an existing server:


Go to an existing server and select New pool to create a pool directly in that server.

NOTE
You can create multiple pools on a server, but you can't add databases from different servers into the same pool.

The pool's service tier determines the features available to the databases in the pool, and the maximum amount of
resources available to each database. For more information, see resource limits for elastic pools in the DTU
model. For vCore-based resource limits for elastic pools, see vCore-based resource limits - elastic pools.
To configure the resources and pricing of the pool, select Configure pool . Then select a service tier, add
databases to the pool, and configure the resource limits for the pool and its databases.
After you've configured the pool, select Apply , name the pool, and select OK to create the pool.
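The same pool can also be created with PowerShell. The following is a minimal sketch with placeholder names, creating a Standard pool in the DTU-based model and then moving an existing database into it; verify the DTU values against the resource limits for your chosen tier:

# create a Standard elastic pool with 100 eDTUs and per-database limits (placeholder names and sizes)
New-AzSqlElasticPool -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -ElasticPoolName <PoolName> `
    -Edition "Standard" -Dtu 100 -DatabaseDtuMin 0 -DatabaseDtuMax 50

# move an existing single database into the pool
Set-AzSqlDatabase -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -DatabaseName <DatabaseName> `
    -ElasticPoolName <PoolName>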

Monitor an elastic pool and its databases


In the Azure portal, you can monitor the utilization of an elastic pool and the databases within that pool. You can
also make a set of changes to your elastic pool and submit all changes at the same time. These changes include
adding or removing databases, changing your elastic pool settings, or changing your database settings.
You can use the built-in performance monitoring and alerting tools combined with performance ratings. SQL
Database can also emit metrics and resource logs for easier monitoring.

Customer case studies


SnelStart: SnelStart used elastic pools with SQL Database to rapidly expand its business services at a rate of
1,000 new SQL databases per month.
Umbraco: Umbraco uses elastic pools with SQL Database to quickly provision and scale services for
thousands of tenants in the cloud.
Daxko/CSI: Daxko/CSI uses elastic pools with SQL Database to accelerate its development cycle and to
enhance its customer services and performance.

Next steps
For pricing information, see Elastic pool pricing.
To scale elastic pools, see Scale elastic pools and Scale an elastic pool - sample code.
To learn more about design patterns for SaaS applications by using elastic pools, see Design patterns for
multitenant SaaS applications with SQL Database.
For a SaaS tutorial by using elastic pools, see Introduction to the Wingtip SaaS application.
To learn about resource management in elastic pools with many databases, see Resource management in
dense elastic pools.
What is a logical SQL server in Azure SQL
Database and Azure Synapse?
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure Synapse Analytics


In Azure SQL Database and Azure Synapse Analytics, a server is a logical construct that acts as a central
administrative point for a collection of databases. At the server level, you can administer logins, firewall rules,
auditing rules, threat detection policies, and auto-failover groups. A server can be in a different region than its
resource group. The server must exist before you can create a database in Azure SQL Database or a data
warehouse database in Azure Synapse Analytics. All databases managed by a single server are created within
the same region as the server.
This server is distinct from a SQL Server instance that you may be familiar with in the on-premises world.
Specifically, there are no guarantees regarding location of the databases or data warehouse database in relation
to the server that manages them. Furthermore, neither Azure SQL Database nor Azure Synapse expose any
instance-level access or features. In contrast, the instance databases in a managed instance are all physically co-
located - in the same way that you are familiar with SQL Server in the on-premises or virtual machine world.
When you create a server, you provide a server login account and password that has administrative rights to the
master database on that server and all databases created on that server. This initial account is a SQL login
account. Azure SQL Database and Azure Synapse Analytics support SQL authentication and Azure Active
Directory Authentication for authentication. For information about logins and authentication, see Managing
Databases and Logins in Azure SQL Database. Windows Authentication is not supported.
A server in SQL Database and Azure Synapse:
Is created within an Azure subscription, but can be moved with its contained resources to another
subscription
Is the parent resource for databases, elastic pools, and data warehouses
Provides a namespace for databases, elastic pools, and data warehouse database
Is a logical container with strong lifetime semantics - delete a server and it deletes its databases, elastic pools,
and SQL pools
Participates in Azure role-based access control (Azure RBAC) - databases, elastic pools, and data warehouse
database within a server inherit access rights from the server
Is a high-order element of the identity of databases, elastic pools, and data warehouse database for Azure
resource management purposes (see the URL scheme for databases and pools)
Collocates resources in a region
Provides a connection endpoint for database access (<serverName>.database.windows.net)
Provides access to metadata regarding contained resources via DMVs by connecting to a master database
Provides the scope for management policies that apply to its databases - logins, firewall, audit, threat
detection, and such
Is restricted by a quota within the parent subscription (six servers per subscription by default - see
Subscription limits here)
Provides the scope for database quota and DTU or vCore quota for the resources it contains (such as 45,000
DTU)
Is the versioning scope for capabilities enabled on contained resources
Server-level principal logins can manage all databases on a server
Can contain logins similar to those in instances of SQL Server in your on-premises environment that are
granted access to one or more databases on the server, and can be granted limited administrative rights. For
more information, see Logins.
The default collation for all databases created on a server is SQL_LATIN1_GENERAL_CP1_CI_AS , where
LATIN1_GENERAL is English (United States), CP1 is code page 1252, CI is case-insensitive, and AS is accent-
sensitive.

Manage servers, databases, and firewalls using the Azure portal


You can create the resource group for a server ahead of time or while creating the server itself. There are
multiple methods for getting to a new SQL server form, either by creating a new SQL server or as part of
creating a new database.
Create a blank server
To create a server (without a database, elastic pool, or data warehouse database) using the Azure portal,
navigate to a blank SQL server (logical SQL server) form.
Create a blank or sample database in Azure SQL Database
To create a database in SQL Database using the Azure portal, navigate to a blank SQL Database form and
provide the requested information. You can create the resource group and server ahead of time or while
creating the database itself. You can create a blank database or create a sample database based on Adventure
Works LT.

IMPORTANT
For information on selecting the pricing tier for your database, see DTU-based purchasing model and vCore-based
purchasing model.
To create a managed instance, see Create a managed instance
Manage an existing server
To manage an existing server, navigate to the server using a number of methods - such as from a specific
database page, the SQL servers page, or the All resources page.
To manage an existing database, navigate to the SQL databases page and click the database you wish to
manage. For example, you can begin setting a server-level firewall for a database from the Overview page
for that database.

IMPORTANT
To configure performance properties for a database, see DTU-based purchasing model and vCore-based purchasing
model.

TIP
For an Azure portal quickstart, see Create a database in SQL Database in the Azure portal.

Manage servers, databases, and firewalls using PowerShell


NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.

To create and manage servers, databases, and firewalls with Azure PowerShell, use the following PowerShell
cmdlets. If you need to install or upgrade PowerShell, see Install Azure PowerShell module. For creating and
managing elastic pools, see Elastic pools.

New-AzSqlDatabase: Creates a database
Get-AzSqlDatabase: Gets one or more databases
Set-AzSqlDatabase: Sets properties for a database, or moves an existing database into an elastic pool
Remove-AzSqlDatabase: Removes a database
New-AzResourceGroup: Creates a resource group
New-AzSqlServer: Creates a server
Get-AzSqlServer: Returns information about servers
Set-AzSqlServer: Modifies properties of a server
Remove-AzSqlServer: Removes a server
New-AzSqlServerFirewallRule: Creates a server-level firewall rule
Get-AzSqlServerFirewallRule: Gets firewall rules for a server
Set-AzSqlServerFirewallRule: Modifies a firewall rule in a server
Remove-AzSqlServerFirewallRule: Deletes a firewall rule from a server
New-AzSqlServerVirtualNetworkRule: Creates a virtual network rule, based on a subnet that is a Virtual Network service endpoint
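
As a short end-to-end sketch combining several of these cmdlets (all names, locations, service objectives, and IP addresses are placeholders):

# create a resource group, a logical server, a firewall rule for your client IP, and a database
$cred = Get-Credential   # prompts for the server admin login and password
New-AzResourceGroup -Name <ResourceGroupName> -Location <Location>
New-AzSqlServer -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -Location <Location> `
    -SqlAdministratorCredentials $cred
New-AzSqlServerFirewallRule -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> `
    -FirewallRuleName "AllowMyClientIP" -StartIpAddress <ClientIpAddress> -EndIpAddress <ClientIpAddress>
New-AzSqlDatabase -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -DatabaseName <DatabaseName> `
    -RequestedServiceObjectiveName "S0"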

TIP
For a PowerShell quickstart, see Create a database in Azure SQL Database using PowerShell. For PowerShell example
scripts, see Use PowerShell to create a database in Azure SQL Database and configure a firewall rule and Monitor and
scale a database in Azure SQL Database using PowerShell.

Manage servers, databases, and firewalls using the Azure CLI


To create and manage servers, databases, and firewalls with the Azure CLI, use the following Azure CLI SQL
Database commands. Use the Cloud Shell to run the CLI in your browser, or install it on macOS, Linux, or
Windows. For creating and managing elastic pools, see Elastic pools.

CMDLET    DESCRIPTION

az sql db create Creates a database

az sql db list Lists all databases managed by a server, or all databases in an elastic pool

az sql db list-editions Lists available service objectives and storage limits

az sql db list-usages Returns database usages

az sql db show Gets a database

az sql db update Updates a database

az sql db delete Removes a database

az group create Creates a resource group

az sql server create Creates a server

az sql server list Lists servers

az sql server list-usages Returns server usages

az sql server show Gets a server

az sql server update Updates a server

az sql server delete Deletes a server

az sql server firewall-rule create Creates a server firewall rule

az sql server firewall-rule list Lists the firewall rules on a server

az sql server firewall-rule show Shows the detail of a firewall rule

az sql server firewall-rule update Updates a firewall rule

az sql server firewall-rule delete Deletes a firewall rule
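
For comparison, a minimal sketch using the commands above follows. It assumes the Azure CLI is installed and you are signed in; the resource names, location, admin credentials, and IP address are placeholder values, and the backtick continuations assume the commands are run from PowerShell, as in the other examples in this article.

# Placeholder names; replace the admin password with a strong secret of your own.
$resourceGroupName = "myResourceGroup"
$serverName = "myuniqueservername"
$databaseName = "mySampleDatabase"

az group create --name $resourceGroupName --location westus2

az sql server create --name $serverName --resource-group $resourceGroupName `
    --location westus2 --admin-user myadminuser --admin-password "<your-strong-password>"

az sql server firewall-rule create --resource-group $resourceGroupName --server $serverName `
    --name AllowMyClient --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10

az sql db create --resource-group $resourceGroupName --server $serverName --name $databaseName `
    --edition GeneralPurpose --family Gen5 --capacity 2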

TIP
For an Azure CLI quickstart, see Create a database in Azure SQL Database using the Azure CLI. For Azure CLI example
scripts, see Use the CLI to create a database in Azure SQL Database and configure a firewall rule and Use Azure CLI to
monitor and scale a database in Azure SQL Database.

Manage servers, databases, and firewalls using Transact-SQL


To create and manage servers, databases, and firewalls with Transact-SQL, use the following T-SQL commands.
You can issue these commands using the Azure portal, SQL Server Management Studio, Visual Studio Code, or
any other program that can connect to a server and pass Transact-SQL commands. For managing elastic pools,
see Elastic pools.
IMPORTANT
You cannot create or delete a server using Transact-SQL.

COMMAND    DESCRIPTION

CREATE DATABASE (Azure SQL Database) Creates a new database in Azure SQL Database. You must be
connected to the master database to create a new database.

CREATE DATABASE (Azure Synapse) Creates a new data warehouse database in Azure Synapse.
You must be connected to the master database to create a
new database.

ALTER DATABASE (Azure SQL Database) Modifies database or elastic pool.

ALTER DATABASE (Azure Synapse Analytics) Modifies a data warehouse database in Azure Synapse.

DROP DATABASE (Transact-SQL) Deletes a database.

sys.database_service_objectives (Azure SQL Database) Returns the edition (service tier), service objective (pricing
tier), and elastic pool name, if any, for a database. If logged
on to the master database for a server, returns information
on all databases. For Azure Synapse, you must be connected
to the master database.

sys.dm_db_resource_stats (Azure SQL Database) Returns CPU, IO, and memory consumption for a database
in Azure SQL Database. One row exists for every 15 seconds,
even if there is no activity in the database.

sys.resource_stats (Azure SQL Database) Returns CPU usage and storage data for a database in Azure
SQL Database. The data is collected and aggregated within
five-minute intervals.

sys.database_connection_stats (Azure SQL Database) Contains statistics for database connectivity events for Azure
SQL Database, providing an overview of database
connection successes and failures.

sys.event_log (Azure SQL Database) Returns successful database connections and connection
failures for Azure SQL Database. You can use this
information to track or troubleshoot your database activity.

sp_set_firewall_rule (Azure SQL Database) Creates or updates the server-level firewall settings for your
server. This stored procedure is only available in the master
database to the server-level principal login. A server-level
firewall rule can only be created using Transact-SQL after the
first server-level firewall rule has been created by a user with
Azure-level permissions.

sys.firewall_rules (Azure SQL Database) Returns information about the server-level firewall settings
associated with a server.

sp_delete_firewall_rule (Azure SQL Database) Removes server-level firewall settings from a server. This
stored procedure is only available in the master database to
the server-level principal login.

sp_set_database_firewall_rule (Azure SQL Database) Creates or updates the database-level firewall rules for a
database in Azure SQL Database. Database firewall rules can
be configured for the master database, and for user
databases in SQL Database. Database firewall rules are useful
when using contained database users. Database firewall rules
are not supported in Azure Synapse.

sys.database_firewall_rules (Azure SQL Database) Returns information about the database-level firewall
settings for a database in Azure SQL Database.

sp_delete_database_firewall_rule (Azure SQL Database) Removes a database-level firewall setting from a database in
Azure SQL Database.

TIP
For a quickstart using SQL Server Management Studio on Microsoft Windows, see Azure SQL Database: Use SQL Server
Management Studio to connect and query data. For a quickstart using Visual Studio Code on the macOS, Linux, or
Windows, see Azure SQL Database: Use Visual Studio Code to connect and query data.

Manage servers, databases, and firewalls using the REST API


To create and manage servers, databases, and firewalls, use these REST API requests.

COMMAND    DESCRIPTION

Servers - Create or update Creates or updates a new server.

Servers - Delete Deletes a server.

Servers - Get Gets a server.

Servers - List Returns a list of servers.

Servers - List by resource group Returns a list of servers in a resource group.

Servers - Update Updates an existing server.

Databases - Create or update Creates a new database or updates an existing database.

Databases - Delete Deletes a database.

Databases - Get Gets a database.

Databases - List by elastic pool Returns a list of databases in an elastic pool.

Databases - List by server Returns a list of databases in a server.

Databases - Update Updates an existing database.

Firewall rules - Create or update Creates or updates a firewall rule.



Firewall rules - Delete Deletes a firewall rule.

Firewall rules - Get Gets a firewall rule.

Firewall rules - List by server Returns a list of firewall rules.
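
As one illustration of how these requests can be issued, the following sketch calls the Databases - Create or update operation through the Az PowerShell Invoke-AzRestMethod cmdlet rather than a raw HTTP client. The subscription ID, resource group, server, and database names are placeholders, and the API version shown is an assumption; check the REST reference for the version you want to target.

# Hedged sketch: create (or update) a database via the resource provider REST API.
# All identifiers in the path and the API version are placeholder assumptions.
Invoke-AzRestMethod -Method PUT `
    -Path "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/myuniqueservername/databases/mySampleDatabase?api-version=2021-11-01" `
    -Payload '{"location": "westus2"}'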

Next steps
To learn about migrating a SQL Server database to Azure SQL Database, see Migrate to Azure SQL Database.
For information about supported features, see Features.
Azure SQL Database serverless

APPLIES TO: Azure SQL Database


Serverless is a compute tier for single databases in Azure SQL Database that automatically scales compute
based on workload demand and bills for the amount of compute used per second. The serverless compute tier
also automatically pauses databases during inactive periods when only storage is billed and automatically
resumes databases when activity returns.

Serverless compute tier


The serverless compute tier for single databases in Azure SQL Database is parameterized by a compute
autoscaling range and an auto-pause delay. The configuration of these parameters shapes the database
performance experience and compute cost.

Performance configuration
The minimum vCores and maximum vCores are configurable parameters that define the range of
compute capacity available for the database. Memory and IO limits are proportional to the vCore range
specified.
The auto-pause delay is a configurable parameter that defines the period of time the database must be
inactive before it is automatically paused. The database is automatically resumed when the next login or
other activity occurs. Alternatively, automatic pausing can be disabled.
Cost
The cost for a serverless database is the summation of the compute cost and storage cost.
When compute usage is between the min and max limits configured, the compute cost is based on vCore and
memory used.
When compute usage is below the min limits configured, the compute cost is based on the min vCores and
min memory configured.
When the database is paused, the compute cost is zero and only storage costs are incurred.
The storage cost is determined in the same way as in the provisioned compute tier.
For more cost details, see Billing.

Scenarios
Serverless is price-performance optimized for single databases with intermittent, unpredictable usage patterns
that can afford some delay in compute warm-up after idle usage periods. In contrast, the provisioned compute
tier is price-performance optimized for single databases or multiple databases in elastic pools with higher
average usage that cannot afford any delay in compute warm-up.
Scenarios well suited for serverless compute
Single databases with intermittent, unpredictable usage patterns interspersed with periods of inactivity, and
lower average compute utilization over time.
Single databases in the provisioned compute tier that are frequently rescaled and customers who prefer to
delegate compute rescaling to the service.
New single databases without usage history where compute sizing is difficult or not possible to estimate
prior to deployment in SQL Database.
Scenarios well suited for provisioned compute
Single databases with more regular, predictable usage patterns and higher average compute utilization over
time.
Databases that cannot tolerate performance trade-offs resulting from more frequent memory trimming or
delays in resuming from a paused state.
Multiple databases with intermittent, unpredictable usage patterns that can be consolidated into elastic pools
for better price-performance optimization.

Comparison with provisioned compute tier


The following table summarizes distinctions between the serverless compute tier and the provisioned compute
tier:

                                 SERVERLESS COMPUTE                          PROVISIONED COMPUTE

Database usage pattern           Intermittent, unpredictable usage with      More regular usage patterns with higher
                                 lower average compute utilization           average compute utilization over time,
                                 over time.                                  or multiple databases using elastic pools.

Performance management effort    Lower                                       Higher

Compute scaling                  Automatic                                   Manual

Compute responsiveness           Lower after inactive periods                Immediate

Billing granularity              Per second                                  Per hour

Purchasing model and service tier


SQL Database serverless is currently only supported in the General Purpose tier on Generation 5 hardware in
the vCore purchasing model.

Autoscaling
Scaling responsiveness
In general, serverless databases are run on a machine with sufficient capacity to satisfy resource demand
without interruption for any amount of compute requested within limits set by the max vCores value.
Occasionally, load balancing automatically occurs if the machine is unable to satisfy resource demand within a
few minutes. For example, if the resource demand is 4 vCores, but only 2 vCores are available, then it may take
up to a few minutes to load balance before 4 vCores are provided. The database remains online during load
balancing except for a brief period at the end of the operation when connections are dropped.
Memory management
Memory for serverless databases is reclaimed more frequently than for provisioned compute databases. This
behavior is important to control costs in serverless and can impact performance.
Cache reclamation
Unlike provisioned compute databases, memory from the SQL cache is reclaimed from a serverless database
when CPU or active cache utilization is low.
Active cache utilization is considered low when the total size of the most recently used cache entries falls
below a threshold for a period of time.
When cache reclamation is triggered, the target cache size is reduced incrementally to a fraction of its
previous size and reclaiming only continues if usage remains low.
When cache reclamation occurs, the policy for selecting cache entries to evict is the same selection policy as
for provisioned compute databases when memory pressure is high.
The cache size is never reduced below the min memory limit, as defined by the configured min vCores.
In both serverless and provisioned compute databases, cache entries may be evicted if all available memory is
used.
When CPU utilization is low, active cache utilization can remain high depending on the usage pattern and
prevent memory reclamation. Also, there can be other delays after user activity stops before memory
reclamation occurs due to periodic background processes responding to prior user activity. For example, delete
operations and Query Store cleanup tasks generate ghost records that are marked for deletion, but are not
physically deleted until the ghost cleanup process runs. Ghost cleanup may involve reading additional data
pages into cache.
Cache hydration
The SQL cache grows as data is fetched from disk in the same way and with the same speed as for provisioned
databases. When the database is busy, the cache is allowed to grow unconstrained up to the max memory limit.

Auto-pausing and auto-resuming


Auto -pausing
Auto-pausing is triggered if all of the following conditions are true for the duration of the auto-pause delay:
Number of sessions = 0
CPU = 0 for user workload running in the user resource pool
An option is provided to disable auto-pausing if desired.
The following features do not support auto-pausing, but do support auto-scaling. If any of the following
features are used, then auto-pausing must be disabled and the database will remain online regardless of the
duration of database inactivity:
Geo-replication (active geo-replication and auto-failover groups).
Long-term backup retention (LTR).
The sync database used in SQL Data Sync. Unlike sync databases, hub and member databases support auto-
pausing.
DNS alias created for the logical server containing a serverless database.
Elastic Jobs (preview), when the job database is a serverless database. Databases targeted by elastic jobs
support auto-pausing, and will be resumed by job connections.
Auto-pausing is temporarily prevented during the deployment of some service updates which require the
database be online. In such cases, auto-pausing becomes allowed again once the service update completes.
Auto-pause troubleshooting
If auto-pausing is enabled, but a database does not auto-pause after the delay period, and the features listed
above are not used, the application or user sessions may be preventing auto-pausing. To see if there are any
application or user sessions currently connected to the database, connect to the database using any client tool,
and execute the following query:

SELECT session_id,
       host_name,
       program_name,
       client_interface_name,
       login_name,
       status,
       login_time,
       last_request_start_time,
       last_request_end_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_resource_governor_workload_groups AS wg
    ON s.group_id = wg.group_id
WHERE s.session_id <> @@SPID
    AND
    (
        (
            wg.name like 'UserPrimaryGroup.DB%'
            AND
            TRY_CAST(RIGHT(wg.name, LEN(wg.name) - LEN('UserPrimaryGroup.DB') - 2) AS int) = DB_ID()
        )
        OR
        wg.name = 'DACGroup'
    );

TIP
After running the query, make sure to disconnect from the database. Otherwise, the open session used by the query will
prevent auto-pausing.

If the result set is non-empty, it indicates that there are sessions currently preventing auto-pausing.
If the result set is empty, it is still possible that sessions were open, possibly for a short time, at some point
earlier during the auto-pause delay period. To see if such activity has occurred during the delay period, you can
use Azure SQL Auditing and examine audit data for the relevant period.
The presence of open sessions, with or without concurrent CPU utilization in the user resource pool, is the most
common reason for a serverless database to not auto-pause as expected.
Auto -resuming
Auto-resuming is triggered if any of the following conditions are true at any time:

FEATURE                                   AUTO-RESUME TRIGGER

Authentication and authorization          Login

Threat detection                          Enabling/disabling threat detection settings at the database
                                          or server level.
                                          Modifying threat detection settings at the database or
                                          server level.

Data discovery and classification         Adding, modifying, deleting, or viewing sensitivity labels

Auditing                                  Viewing auditing records.
                                          Updating or viewing auditing policy.

Data masking                              Adding, modifying, deleting, or viewing data masking rules

Transparent data encryption               Viewing state or status of transparent data encryption

Vulnerability assessment                  Ad hoc scans and periodic scans if enabled

Query (performance) data store            Modifying or viewing query store settings

Performance recommendations               Viewing or applying performance recommendations

Auto-tuning                               Application and verification of auto-tuning recommendations
                                          such as auto-indexing

Database copying                          Create database as copy.
                                          Export to a BACPAC file.

SQL data sync                             Synchronization between hub and member databases that run
                                          on a configurable schedule or are performed manually

Modifying certain database metadata       Adding new database tags.
                                          Changing max vCores, min vCores, or auto-pause delay.

SQL Server Management Studio (SSMS)       Using SSMS versions earlier than 18.1 and opening a new
                                          query window for any database in the server will resume any
                                          auto-paused database in the same server. This behavior does
                                          not occur if using SSMS version 18.1 or later.

Monitoring, management, or other solutions performing any of the operations listed above will trigger auto-
resuming.
Auto-resuming is also triggered during the deployment of some service updates that require the database be
online.
Connectivity
If a serverless database is paused, then the first login will resume the database and return an error stating that
the database is unavailable with error code 40613. Once the database is resumed, the login must be retried to
establish connectivity. Database clients with connection retry logic should not need to be modified. For
connection retry logic options that are built into the SqlClient driver, see configurable retry logic in SqlClient.
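
The following is an illustrative sketch only, not an official sample: it retries the initial connection from PowerShell while a paused database resumes. The connection string, retry count, and wait interval are arbitrary placeholder values, and it assumes the System.Data.SqlClient provider is available in your PowerShell session.

# Placeholder connection string; error 40613 is the transient "database unavailable" error returned while resuming.
$connectionString = "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=myLogin;Password=myPassword;Encrypt=True;"
$maxAttempts = 5

for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
    try {
        $connection.Open()            # succeeds once the database has resumed
        Write-Output "Connected on attempt $attempt"
        $connection.Close()
        break
    }
    catch {
        # The first attempt against a paused database typically fails with error 40613 while auto-resume runs.
        Write-Output "Attempt $attempt failed: $($_.Exception.Message)"
        Start-Sleep -Seconds 10       # wait for auto-resume to complete before retrying
    }
}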
Latency
The latency to auto-resume and auto-pause a serverless database is generally on the order of 1 minute to auto-resume
and 1-10 minutes after the expiration of the delay period to auto-pause.
Customer managed transparent data encryption (BYOK )
If using customer managed transparent data encryption (BYOK) and the serverless database is auto-paused
when key deletion or revocation occurs, then the database remains in the auto-paused state. In this case, after
the database is next resumed, the database becomes inaccessible within approximately 10 minutes. Once the
database becomes inaccessible, the recovery process is the same as for provisioned compute databases. If the
serverless database is online when key deletion or revocation occurs, then the database also becomes
inaccessible within approximately 10 minutes in the same way as with provisioned compute databases.

Onboarding into serverless compute tier


Creating a new database or moving an existing database into a serverless compute tier follows the same pattern
as creating a new database in the provisioned compute tier and involves the following two steps.
1. Specify the service objective. The service objective prescribes the service tier, hardware configuration, and
max vCores. For service objective options, see serverless resource limits.
2. Optionally, specify the min vCores and auto-pause delay to change their default values. The following
table shows the available values for these parameters.

PARAMETER          VALUE CHOICES                              DEFAULT VALUE

Min vCores         Depends on max vCores configured -         0.5 vCores
                   see resource limits.

Autopause delay    Minimum: 60 minutes (1 hour)               60 minutes
                   Maximum: 10080 minutes (7 days)
                   Increments: 10 minutes
                   Disable autopause: -1

Create a new database in the serverless compute tier


The following examples create a new database in the serverless compute tier.
Use Azure portal
See Quickstart: Create a single database in Azure SQL Database using the Azure portal.
Use PowerShell

New-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName `
    -ComputeModel Serverless -Edition GeneralPurpose -ComputeGeneration Gen5 `
    -MinVcore 0.5 -MaxVcore 2 -AutoPauseDelayInMinutes 720

Use Azure CLI

az sql db create -g $resourceGroupName -s $serverName -n $databaseName `
    -e GeneralPurpose -f Gen5 --min-capacity 0.5 -c 2 --compute-model Serverless --auto-pause-delay 720

Use Transact-SQL (T-SQL)


When using T-SQL, default values are applied for the min vCores and auto-pause delay. They can later be
changed from the portal or via other management APIs (PowerShell, Azure CLI, REST API).

CREATE DATABASE testdb
( EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_1' ) ;

For details, see CREATE DATABASE.


Move a database from the provisioned compute tier into the serverless compute tier
The following examples move a database from the provisioned compute tier into the serverless compute tier.
Use PowerShell
Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName `
    -Edition GeneralPurpose -ComputeModel Serverless -ComputeGeneration Gen5 `
    -MinVcore 1 -MaxVcore 4 -AutoPauseDelayInMinutes 1440

Use Azure CLI

az sql db update -g $resourceGroupName -s $serverName -n $databaseName `
    --edition GeneralPurpose --min-capacity 1 --capacity 4 --family Gen5 --compute-model Serverless --auto-pause-delay 1440

Use Transact-SQL (T-SQL)


When using T-SQL, default values are applied for the min vCores and auto-pause delay. They can later be
changed from the portal or via other management APIs (PowerShell, Azure CLI, REST API).

ALTER DATABASE testdb
MODIFY ( SERVICE_OBJECTIVE = 'GP_S_Gen5_1') ;

For details, see ALTER DATABASE.


Move a database from the serverless compute tier into the provisioned compute tier
A serverless database can be moved into a provisioned compute tier in the same way as moving a provisioned
compute database into a serverless compute tier.

Modifying serverless configuration


Use PowerShell
Modifying the maximum or minimum vCores, and the auto-pause delay, is performed by using the Set-AzSqlDatabase
command in PowerShell with the MaxVcore, MinVcore, and AutoPauseDelayInMinutes arguments.
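For example, a minimal sketch (with placeholder resource names and hypothetical target values) that changes only the autoscaling range and auto-pause delay of an existing serverless database might look like this:

# Widen the autoscaling range to 1-8 vCores and pause after 2 hours of inactivity (illustrative values only).
Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName `
    -MinVcore 1 -MaxVcore 8 -AutoPauseDelayInMinutes 120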
Use Azure CLI
Modifying the maximum or minimum vCores, and the auto-pause delay, is performed by using the az sql db update
command in Azure CLI with the capacity, min-capacity, and auto-pause-delay arguments.

Monitoring
Resources used and billed
The resources of a serverless database are encapsulated by app package, SQL instance, and user resource pool
entities.
App package
The app package is the outermost resource management boundary for a database, regardless of whether the
database is in a serverless or provisioned compute tier. The app package contains the SQL instance and external
services like Full-text Search that together scope all user and system resources used by a database in SQL
Database. The SQL instance generally dominates the overall resource utilization across the app package.
User resource pool
The user resource pool is an inner resource management boundary for a database, regardless of whether the
database is in a serverless or provisioned compute tier. The user resource pool scopes CPU and IO for user
workload generated by DDL queries such as CREATE and ALTER, DML queries such as INSERT, UPDATE, DELETE,
and MERGE, and SELECT queries. These queries generally represent the most substantial proportion of
utilization within the app package.
Metrics
Metrics for monitoring the resource usage of the app package and user resource pool of a serverless database
are listed in the following table:

ENTITY               METRIC               DESCRIPTION                                               UNITS

App package          app_cpu_percent      Percentage of vCores used by the app relative to max      Percentage
                                          vCores allowed for the app.

App package          app_cpu_billed       The amount of compute billed for the app during the       vCore seconds
                                          reporting period. The amount paid during this period
                                          is the product of this metric and the vCore unit price.

                                          Values of this metric are determined by aggregating
                                          over time the maximum of CPU used and memory used each
                                          second. If the amount used is less than the minimum
                                          amount provisioned as set by the min vCores and min
                                          memory, then the minimum amount provisioned is billed.
                                          In order to compare CPU with memory for billing
                                          purposes, memory is normalized into units of vCores by
                                          rescaling the amount of memory in GB by 3 GB per vCore.

App package          app_memory_percent   Percentage of memory used by the app relative to max      Percentage
                                          memory allowed for the app.

User resource pool   cpu_percent          Percentage of vCores used by user workload relative to    Percentage
                                          max vCores allowed for user workload.

User resource pool   data_IO_percent      Percentage of data IOPS used by user workload relative    Percentage
                                          to max data IOPS allowed for user workload.

User resource pool   log_IO_percent       Percentage of log MB/s used by user workload relative     Percentage
                                          to max log MB/s allowed for user workload.

User resource pool   workers_percent      Percentage of workers used by user workload relative      Percentage
                                          to max workers allowed for user workload.

User resource pool   sessions_percent     Percentage of sessions used by user workload relative     Percentage
                                          to max sessions allowed for user workload.
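
If you want to pull these metrics programmatically, one possible approach is to query Azure Monitor for the database resource. This is a sketch only: it assumes the Az.Monitor module is installed, uses placeholder resource names, and reads a single metric over an arbitrary one-hour window.

# Read the last hour of app_cpu_billed at one-minute grain for a serverless database.
$db = Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName
Get-AzMetric -ResourceId $db.ResourceId -MetricName "app_cpu_billed" `
    -TimeGrain 00:01:00 -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date)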

Pause and resume status


In the Azure portal, the database status is displayed in the overview pane of the server that lists the databases it
contains. The database status is also displayed in the overview pane for the database.
Use the following commands to query the pause and resume status of a database:
Use PowerShell

Get-AzSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername -DatabaseName $databasename `
    | Select -ExpandProperty "Status"

Use Azure CLI

az sql db show --name $databasename --resource-group $resourcegroupname --server $servername --query 'status' -o json

Resource limits
For resource limits, see serverless compute tier.

Billing
The amount of compute billed is the maximum of CPU used and memory used each second. If the amount of
CPU used and memory used is less than the minimum amount provisioned for each, then the provisioned
amount is billed. In order to compare CPU with memory for billing purposes, memory is normalized into units
of vCores by rescaling the amount of memory in GB by 3 GB per vCore.
Resource billed: CPU and memory
Amount billed: vCore unit price * max (min vCores, vCores used, min memory GB * 1/3, memory GB used * 1/3)
Billing frequency: Per second
The vCore unit price is the cost per vCore per second. Refer to the Azure SQL Database pricing page for specific
unit prices in a given region.
The amount of compute billed is exposed by the following metric:
Metric: app_cpu_billed (vCore seconds)
Definition: max (min vCores, vCores used, min memory GB * 1/3, memory GB used * 1/3)
Reporting frequency: Per minute
This quantity is calculated each second and aggregated over 1 minute.
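
To make the formula concrete, the following sketch (illustrative only, with hypothetical usage values) evaluates the per-second billed amount:

# Evaluates max (min vCores, vCores used, min memory GB * 1/3, memory GB used * 1/3) for one second of usage.
function Get-BilledVCoreSeconds {
    param (
        [double] $MinVcores,
        [double] $MinMemoryGB,
        [double] $VcoresUsed,
        [double] $MemoryGBUsed
    )
    $candidates = @($MinVcores, $VcoresUsed, ($MinMemoryGB / 3), ($MemoryGBUsed / 3))
    return ($candidates | Measure-Object -Maximum).Maximum
}

# Hypothetical sample: 1 min vCore / 3 GB min memory, with 0.25 vCores and 6 GB in use for that second.
Get-BilledVCoreSeconds -MinVcores 1 -MinMemoryGB 3 -VcoresUsed 0.25 -MemoryGBUsed 6   # returns 2 vCore seconds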
Minimum compute bill
If a serverless database is paused, then the compute bill is zero. If a serverless database is not paused, then the
minimum compute bill is no less than the amount of vCores based on max (min vCores, min memory GB * 1/3).
Examples:
Suppose a serverless database is not paused and configured with 8 max vCores and 1 min vCore
corresponding to 3.0 GB min memory. Then the minimum compute bill is based on max (1 vCore, 3.0 GB * 1
vCore / 3 GB) = 1 vCore.
Suppose a serverless database is not paused and configured with 4 max vCores and 0.5 min vCores
corresponding to 2.1 GB min memory. Then the minimum compute bill is based on max (0.5 vCores, 2.1 GB *
1 vCore / 3 GB) = 0.7 vCores.
The Azure SQL Database pricing calculator for serverless can be used to determine the min memory
configurable based on the number of max and min vCores configured. As a rule, if the min vCores configured is
greater than 0.5 vCores, then the minimum compute bill is independent of the min memory configured and
based only on the number of min vCores configured.
Example scenario
Consider a serverless database configured with 1 min vCore and 4 max vCores. This configuration corresponds
to around 3 GB min memory and 12 GB max memory. Suppose the auto-pause delay is set to 6 hours and the
database workload is active during the first 2 hours of a 24-hour period and otherwise inactive.
In this case, the database is billed for compute and storage during the first 8 hours. Even though the database is
inactive starting after the second hour, it is still billed for compute in the subsequent 6 hours based on the
minimum compute provisioned while the database is online. Only storage is billed during the remainder of the
24-hour period while the database is paused.
More precisely, the compute bill in this example is calculated as follows:

VC O RE SEC O N DS
VC O RES USED EA C H GB USED EA C H C O M P UT E B IL L ED O VER T IM E
T IM E IN T ERVA L SEC O N D SEC O N D DIM EN SIO N B IL L ED IN T ERVA L

0:00-1:00 4 9 vCores used 4 vCores * 3600


seconds = 14400
vCore seconds

1:00-2:00 1 12 Memory used 12 GB * 1/3 * 3600


seconds = 14400
vCore seconds

2:00-8:00 0 0 Min memory 3 GB * 1/3 * 21600


provisioned seconds = 21600
vCore seconds

8:00-24:00 0 0 No compute billed 0 vCore seconds


while paused

Total vCore seconds 50400 vCore


billed over 24 hours seconds

Suppose the compute unit price is $0.000145/vCore/second. Then the compute billed for this 24-hour period is
the product of the compute unit price and vCore seconds billed: $0.000145/vCore/second * 50400 vCore
seconds ~ $7.31.
Azure Hybrid Benefit and reserved capacity
Azure Hybrid Benefit (AHB) and reserved capacity discounts do not apply to the serverless compute tier.

Available regions
The serverless compute tier is available worldwide except the following regions: China East, China North,
Germany Central, Germany Northeast, and US Gov Central (Iowa).

Next steps
To get started, see Quickstart: Create a single database in Azure SQL Database using the Azure portal.
For resource limits, see Serverless compute tier resource limits.
Hyperscale service tier

APPLIES TO: Azure SQL Database


Azure SQL Database is based on SQL Server Database Engine architecture that is adjusted for the cloud
environment to ensure high availability even in cases of infrastructure failures. There are three architectural
models that are used in Azure SQL Database:
General Purpose/Standard
Hyperscale
Business Critical/Premium
The Hyperscale service tier in Azure SQL Database is the newest service tier in the vCore-based purchasing
model. This service tier is a highly scalable storage and compute performance tier that uses the Azure
architecture to scale out the storage and compute resources for an Azure SQL Database substantially beyond
the limits available for the General Purpose and Business Critical service tiers.

NOTE
For details on the General Purpose and Business Critical service tiers in the vCore-based purchasing model, see
General Purpose and Business Critical service tiers. For a comparison of the vCore-based purchasing model with the
DTU-based purchasing model, see Azure SQL Database purchasing models and resources.
The Hyperscale service tier is currently only available for Azure SQL Database, and not Azure SQL Managed Instance.

What are the Hyperscale capabilities


The Hyperscale service tier in Azure SQL Database provides the following additional capabilities:
Support for up to 100 TB of database size.
Fast database backups (based on file snapshots stored in Azure Blob storage) regardless of size with no IO
impact on compute resources.
Fast database restores (based on file snapshots) in minutes rather than hours or days (not a size of data
operation).
Higher overall performance due to higher transaction log throughput and faster transaction commit times
regardless of data volumes.
Rapid scale out - you can provision one or more read-only replicas for offloading your read workload and for
use as hot-standbys.
Rapid Scale up - you can, in constant time, scale up your compute resources to accommodate heavy
workloads when needed, and then scale the compute resources back down when not needed.
The Hyperscale service tier removes many of the practical limits traditionally seen in cloud databases. Where
most other databases are limited by the resources available in a single node, databases in the Hyperscale service
tier have no such limits. With its flexible storage architecture, storage grows as needed. In fact, Hyperscale
databases aren't created with a defined max size. A Hyperscale database grows as needed - and you're billed
only for the capacity you use. For read-intensive workloads, the Hyperscale service tier provides rapid scale-out
by provisioning additional replicas as needed for offloading read workloads.
Additionally, the time required to create database backups or to scale up or down is no longer tied to the volume
of data in the database. Hyperscale databases can be backed up virtually instantaneously. You can also scale a
database in the tens of terabytes up or down in minutes. This capability frees you from concerns about being
boxed in by your initial configuration choices.
For more information about the compute sizes for the Hyperscale service tier, see Service tier characteristics.

Who should consider the Hyperscale service tier


The Hyperscale service tier is intended for all customers who require higher performance and availability, fast
backup and restore, and/or fast storage and compute scalability. This includes customers who are moving to the
cloud to modernize their applications as well as customers who are already using other service tiers in Azure
SQL Database. The Hyperscale service tier supports a broad range of database workloads, from pure OLTP to
pure analytics. It is optimized for OLTP and hybrid transaction and analytical processing (HTAP) workloads.

IMPORTANT
Elastic pools do not support the Hyperscale service tier.

Hyperscale pricing model


The Hyperscale service tier is only available in the vCore purchasing model. To align with the new architecture, the
pricing model is slightly different from the General Purpose or Business Critical service tiers:
Compute:
The Hyperscale compute unit price is per replica. The Azure Hybrid Benefit price is applied to high-availability
and named replicas automatically. Users may adjust the total number of high-availability secondary replicas
from 0 to 4, depending on availability and scalability requirements, and create up to 30 named replicas to
support a variety of read scale-out workloads.
Storage:
You don't need to specify the max data size when configuring a Hyperscale database. In the Hyperscale
tier, you're charged for storage for your database based on actual allocation. Storage is automatically
allocated between 40 GB and 100 TB, in 10-GB increments. Multiple data files can grow at the same time
if needed. A Hyperscale database is created with a starting size of 10 GB and it starts growing by 10 GB
every 10 minutes, until it reaches the size of 40 GB.
For more information about Hyperscale pricing, see Azure SQL Database Pricing.

Compare resource limits


The vCore-based service tiers are differentiated based on database availability and storage type, performance,
and maximum storage size, as described in the following table:

                GENERAL PURPOSE                 HYPERSCALE                      BUSINESS CRITICAL

Best for        Offers budget oriented          Most business workloads.        OLTP applications with high
                balanced compute and storage    Autoscaling storage size up     transaction rate and low IO
                options.                        to 100 TB, fast vertical and    latency. Offers highest
                                                horizontal compute scaling,     resilience to failures and fast
                                                fast database restore.          failovers using multiple
                                                                                synchronously updated replicas.

Compute size    1 to 80 vCores                  1 to 80 vCores 1                1 to 80 vCores

Storage type    Premium remote storage          De-coupled storage with         Super-fast local SSD storage
                (per instance)                  local SSD cache                 (per instance)
                                                (per instance)

Storage size 1  5 GB – 4 TB                     Up to 100 TB                    5 GB – 4 TB

IOPS            500 IOPS per vCore with         Hyperscale is a multi-tiered    5,000 IOPS with 200,000
                7,000 maximum IOPS              architecture with caching at    maximum IOPS
                                                multiple levels. Effective
                                                IOPS will depend on the
                                                workload.

Availability    1 replica, no Read Scale-out,   Multiple replicas, up to 4      3 replicas, 1 Read Scale-out,
                zone-redundant HA (preview),    Read Scale-out, zone-           zone-redundant HA, full
                no local cache                  redundant HA (preview),         local storage
                                                partial local cache

Backups         A choice of geo-redundant,      A choice of geo-redundant,      A choice of geo-redundant,
                zone-redundant, or locally      zone-redundant, or locally      zone-redundant, or locally
                redundant backup storage,       redundant backup storage,       redundant backup storage,
                1-35 day retention (default     1-35 day retention (default     1-35 day retention (default
                7 days)                         7 days)                         7 days)

1 Elastic pools aren't supported in the Hyperscale service tier.

NOTE
Short-term backup retention for 1-35 days for Hyperscale databases is now in preview.

Distributed functions architecture


Hyperscale separates the query processing engine from the components that provide long-term storage and
durability for the data. This architecture provides the ability to smoothly scale storage capacity as far as needed
(initial target is 100 TB), and the ability to scale compute resources rapidly.
The following diagram illustrates the different types of nodes in a Hyperscale database:
Learn more about the Hyperscale distributed functions architecture.

Scale and performance advantages


With the ability to rapidly spin up or down additional read-only compute nodes, the Hyperscale architecture allows
significant read scale capabilities and can also free up the primary compute node for serving more write
requests. The compute nodes can also be scaled up or down rapidly due to the shared-storage design of the
Hyperscale architecture.

Create and manage Hyperscale databases


You can create and manage Hyperscale databases using the Azure portal, Transact-SQL, PowerShell, and the
Azure CLI. Refer to Quickstart: Create a Hyperscale database.

OPERATION                          DETAILS                                    LEARN MORE

Create a Hyperscale database       Hyperscale databases are available         Find examples to create a Hyperscale
                                   only using the vCore-based                 database in Quickstart: Create a
                                   purchasing model.                          Hyperscale database in Azure SQL
                                                                              Database.

Upgrade an existing database       Migrating an existing database in          Learn how to migrate an existing
to Hyperscale                      Azure SQL Database to the Hyperscale       database to Hyperscale.
                                   tier is a size of data operation.

Reverse migrate a Hyperscale       If you previously migrated an existing     Learn how to reverse migrate from
database to the General Purpose    Azure SQL Database to the Hyperscale       Hyperscale, including the limitations
service tier (preview)             service tier, you can reverse migrate      for reverse migration.
                                   the database to the General Purpose
                                   service tier within 45 days of the
                                   original migration to Hyperscale.

                                   If you wish to migrate the database to
                                   another service tier, such as Business
                                   Critical, first reverse migrate to the
                                   General Purpose service tier, then
                                   change the service tier.
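
For instance, a minimal PowerShell sketch for the first operation in the table might look like the following. The resource names and sizing values are placeholders, and the parameters assume a recent Az.Sql module; the quickstart linked above is the authoritative reference.

# Create a 2-vCore Hyperscale database on an existing logical server.
New-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName `
    -DatabaseName "myHyperscaleDatabase" -Edition "Hyperscale" -ComputeGeneration "Gen5" -Vcore 2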
Database high availability in Hyperscale
As in all other service tiers, Hyperscale guarantees data durability for committed transactions regardless of
compute replica availability. The extent of downtime due to the primary replica becoming unavailable depends
on the type of failover (planned vs. unplanned), whether zone redundancy is configured, and on the presence of
at least one high-availability replica. In a planned failover (i.e. a maintenance event), the system either creates
the new primary replica before initiating a failover, or uses an existing high-availability replica as the failover
target. In an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a high-
availability replica as a failover target if one exists, or creates a new primary replica from the pool of available
compute capacity. In the latter case, downtime duration is longer due to extra steps required to create the new
primary replica.
For Hyperscale SLA, see SLA for Azure SQL Database.

Back up and restore


Back up and restore operations for Hyperscale databases are file-snapshot based. This enables these operations
to be nearly instantaneous. Since Hyperscale architecture utilizes the storage layer for backup and restore,
processing burden and performance impact to compute replicas are significantly reduced. Learn more in
Hyperscale backups and storage redundancy.

Disaster recovery for Hyperscale databases


If you need to restore a Hyperscale database in Azure SQL Database to a region other than the one it's currently
hosted in, as part of a disaster recovery operation or drill, relocation, or any other reason, the primary method is
to do a geo-restore of the database. Geo-restore is only available when geo-redundant storage (RA-GRS) has
been chosen for storage redundancy.
Learn more in restoring a Hyperscale database to a different region.
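
A hedged PowerShell sketch of such a geo-restore follows. The source and target resource names are placeholders, and it assumes geo-redundant backup storage was selected for the source database so that a geo-replicated backup exists.

# Locate the latest geo-replicated backup of the source Hyperscale database...
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "sourceRG" -ServerName "sourceserver" -DatabaseName "myHyperscaleDatabase"

# ...and restore it to a server in another region.
Restore-AzSqlDatabase -FromGeoBackup -ResourceGroupName "targetRG" -ServerName "targetserver" `
    -TargetDatabaseName "myHyperscaleDatabase_recovered" -ResourceId $geoBackup.ResourceId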

Available regions
The Azure SQL Database Hyperscale tier is enabled in the vast majority of Azure regions. If you want to create a
Hyperscale database in a region where Hyperscale isn't enabled by default, you can send an onboarding request
via Azure portal. For instructions, see Request quota increases for Azure SQL Database. When submitting your
request, use the following guidelines:
Use the Region access SQL Database quota type.
In the description, add the compute SKU/total cores including high-availability and named replicas, and
indicate that you're requesting Hyperscale capacity.
Also specify a projection of the total size of all databases over time in TB.

Known limitations
These are the current limitations of the Hyperscale service tier. We're actively working to remove as many of
these limitations as possible.

ISSUE                                          DESCRIPTION

Short-term backup retention                    Short-term backup retention for 1-35 days for Hyperscale
                                               databases is now in preview. A non-Hyperscale database can't be
                                               restored as a Hyperscale database, and a Hyperscale database
                                               can't be restored as a non-Hyperscale database.

                                               For databases migrated to Hyperscale from other Azure SQL
                                               Database service tiers, pre-migration backups are kept for the
                                               duration of the backup retention period of the source database,
                                               including long-term retention policies. Restoring a
                                               pre-migration backup within the backup retention period of the
                                               database is supported programmatically. You can restore these
                                               backups to any non-Hyperscale service tier.

Long-term backup retention                     Long-term backup retention isn't supported yet. Hyperscale has
                                               a different, snapshot-based backup architecture than other
                                               service tiers.

Service tier change from Hyperscale to         Reverse migration to the General Purpose service tier allows
another tier isn't supported directly          customers who have recently migrated an existing database in
                                               Azure SQL Database to the Hyperscale service tier to move back
                                               in an emergency, should Hyperscale not meet their needs. While
                                               reverse migration is initiated by a service tier change, it's
                                               essentially a size-of-data move between different
                                               architectures. Databases created in the Hyperscale service tier
                                               aren't eligible for reverse migration. Learn the limitations
                                               for reverse migration.

                                               For databases that don't qualify for reverse migration, the
                                               only way to migrate from Hyperscale to a non-Hyperscale service
                                               tier is to export/import using a bacpac file or other data
                                               movement technologies (Bulk Copy, Azure Data Factory, Azure
                                               Databricks, SSIS, etc.). Bacpac export/import from the Azure
                                               portal, from PowerShell using New-AzSqlDatabaseExport or
                                               New-AzSqlDatabaseImport, from Azure CLI using az sql db export
                                               and az sql db import, and from the REST API isn't supported.
                                               Bacpac import/export for smaller Hyperscale databases (up to
                                               200 GB) is supported using SSMS and SqlPackage version 18.4 and
                                               later. For larger databases, bacpac export/import may take a
                                               long time, and may fail for various reasons.

When changing Azure SQL Database service       In some cases, it may be possible to work around this issue by
tier to Hyperscale, the operation fails if     shrinking the large files to be less than 1 TB before
the database has any data files larger         attempting to change the service tier to Hyperscale. Use the
than 1 TB                                      following query to determine the current size of database
                                               files.

                                               SELECT file_id, name AS file_name,
                                                   size * 8. / 1024 / 1024 AS file_size_GB
                                               FROM sys.database_files
                                               WHERE type_desc = 'ROWS';

SQL Managed Instance                           Azure SQL Managed Instance isn't currently supported with
                                               Hyperscale databases.

Elastic Pools                                  Elastic Pools aren't currently supported with Hyperscale.

Migration of databases with In-Memory          Hyperscale supports a subset of In-Memory OLTP objects,
OLTP objects                                   including memory-optimized table types, table variables, and
                                               natively compiled modules. However, when any In-Memory OLTP
                                               objects are present in the database being migrated, migration
                                               from Premium and Business Critical service tiers to Hyperscale
                                               isn't supported. To migrate such a database to Hyperscale, all
                                               In-Memory OLTP objects and their dependencies must be dropped.
                                               After the database is migrated, these objects can be recreated.
                                               Durable and non-durable memory-optimized tables aren't
                                               currently supported in Hyperscale, and must be changed to disk
                                               tables.

Query Performance Insights                     Query Performance Insights is currently not supported for
                                               Hyperscale databases.

Shrink Database                                DBCC SHRINKDATABASE or DBCC SHRINKFILE isn't currently
                                               supported for Hyperscale databases.

Database integrity check                       DBCC CHECKDB isn't currently supported for Hyperscale
                                               databases. DBCC CHECKTABLE ('TableName') WITH TABLOCK and DBCC
                                               CHECKFILEGROUP WITH TABLOCK may be used as a workaround. See
                                               Data Integrity in Azure SQL Database for details on data
                                               integrity management in Azure SQL Database.

Elastic Jobs                                   Using a Hyperscale database as the Job database isn't
                                               supported. However, elastic jobs can target Hyperscale
                                               databases in the same way as any other database in Azure SQL
                                               Database.

Data Sync                                      Using a Hyperscale database as a Hub or Sync Metadata database
                                               isn't supported. However, a Hyperscale database can be a member
                                               database in a Data Sync topology.

Import Export                                  Import-Export service is currently not supported for Hyperscale
                                               databases.

Next steps
Learn more about Hyperscale in Azure SQL Database in the following articles:
For an FAQ on Hyperscale, see Frequently asked questions about Hyperscale.
For information about service tiers, see Service tiers.
See Overview of resource limits on a server for information about limits at the server and subscription
levels.
For purchasing model limits for a single database, see Azure SQL Database vCore-based purchasing model
limits for a single database.
For a features and comparison list, see SQL common features.
Learn about the Hyperscale distributed functions architecture.
Learn How to manage a Hyperscale database.
Hyperscale distributed functions architecture

APPLIES TO: Azure SQL Database


The Hyperscale service tier utilizes an architecture with highly scalable storage and compute performance tiers.
This article describes the components that enable customers to quickly scale Hyperscale databases while
benefiting from nearly instantaneous backups and highly scalable transaction logging.

Hyperscale architecture overview


Traditional database engines centralize data management functions in a single process: even so-called
distributed databases in production today have multiple copies of a monolithic data engine.
Hyperscale databases follow a different approach. Hyperscale separates the query processing engine, where the
semantics of various data engines diverge, from the components that provide long-term storage and durability
for the data. In this way, storage capacity can be smoothly scaled out as far as needed. The initially supported
storage limit is 100 TB.
High availability and named replicas share the same storage components, so no data copy is required to spin up
a new replica.
The following diagram illustrates the different types of nodes in a Hyperscale database:

A Hyperscale database contains the following types of components: compute nodes, page servers, the log
service, and Azure storage.

Compute
The compute node is where the relational engine lives. The compute node is where language, query, and
transaction processing occur. All user interactions with a Hyperscale database happen through compute nodes.
Compute nodes have SSD-based caches called Resilient Buffer Pool Extension (RBPEX Data Cache). RBPEX Data
Cache is a non-covering data cache that minimizes the number of network round trips required to fetch a page
of data.
Hyperscale databases have one primary compute node where the read-write workload and transactions are
processed. One or more secondary compute nodes act as hot standby nodes for failover purposes. Secondary
compute nodes can serve as read-only compute nodes to offload read workloads when desired. Named replicas
are secondary compute nodes designed to enable massive OLTP read-scale out scenarios and to improve
Hybrid Transactional and Analytical Processing (HTAP) workloads.
The database engine running on Hyperscale compute nodes is the same as in other Azure SQL Database service
tiers. When users interact with the database engine on Hyperscale compute nodes, the supported surface area
and engine behavior are the same as in other service tiers, with the exception of known limitations.

Page server
Page servers are systems representing a scaled-out storage engine. Each page server is responsible for a subset
of the pages in the database. Nominally, each page server controls either up to 128 GB or up to 1 TB of data.
Each page server also has a replica that is kept for redundancy and availability.
The job of a page server is to serve database pages out to the compute nodes on demand, and to keep the
pages updated as transactions update data. Page servers are kept up to date by playing transaction log records
from the log service.
Page servers also maintain covering SSD-based caches to enhance performance. Long-term storage of data
pages is kept in Azure Storage for durability.

Log service
The log service accepts transaction log records that correspond to data changes from the primary compute
replica. Page servers then receive the log records from the log service and apply the changes to their respective
slices of data. Additionally, compute secondary replicas receive log records from the log service and replay only
the changes to pages already in their buffer pool or local RBPEX cache. All data changes from the primary
compute replica are propagated through the log service to all the secondary compute replicas and page servers.
Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite
storage repository. This mechanism removes the need for frequent log truncation. The log service has local
memory and SSD caches to speed up access to log records.
The log on Hyperscale is practically infinite, with the restriction that a single transaction cannot generate more
than 1 TB of log. Additionally, if using Change Data Capture, at most 1 TB of log can be generated since the start
of the oldest active transaction. Avoid unnecessarily large transactions to stay below this limit.

Azure storage
Azure Storage contains all data files in a database. Page servers keep data files in Azure Storage up to date. This
storage is also used for backup purposes and may be replicated between regions based on choice of storage
redundancy.
Backups are implemented using storage snapshots of data files. Restore operations using snapshots are fast
regardless of data size. A database can be restored to any point in time within its backup retention period.
Hyperscale supports configurable storage redundancy. When creating a Hyperscale database, you can choose
read-access geo-redundant storage (RA-GRS), zone-redundant storage (ZRS) (preview), or locally redundant
storage (LRS) (preview) Azure standard storage. The selected storage redundancy option will be used for the
lifetime of the database for both data storage redundancy and backup storage redundancy.

Next steps
Learn more about Hyperscale in the following articles:
Hyperscale service tier
Azure SQL Database Hyperscale FAQ
Quickstart: Create a Hyperscale database in Azure SQL Database
Azure SQL Database Hyperscale named replicas FAQ
Hyperscale secondary replicas

APPLIES TO: Azure SQL Database


As described in Distributed functions architecture, Azure SQL Database Hyperscale has two different types of
compute nodes, also referred to as replicas:
Primary: serves read and write operations
Secondary: provides read scale-out, high availability, and geo-replication
Secondary replicas are always read-only, and can be of three different types:
High Availability replica
Geo-replica
Named replica
Each type has a different architecture, feature set, purpose, and cost. Based on the features you need, you may
use just one or even all of the three together.

High Availability replica


A High Availability (HA) replica uses the same page servers as the primary replica, so no data copy is required to
add an HA replica. HA replicas are mainly used to increase database availability; they act as hot standbys for
failover purposes. If the primary replica becomes unavailable, failover to one of the existing HA replicas is
automatic and quick. The connection string doesn't need to change; during failover, applications may experience
minimal downtime due to active connections being dropped. As usual for this scenario, proper retry logic is
recommended. Several drivers already provide some degree of automatic retry logic. If you are using .NET, the
latest Microsoft.Data.SqlClient library provides full native support for configurable automatic retry logic.
HA replicas use the same server and database name as the primary replica. Their Service Level Objective is also
always the same as for the primary replica. HA replicas are not visible or manageable as a stand-alone resource
from the portal or from any API.
There can be zero to four HA replicas. Their number can be changed during the creation of a database or after
the database has been created, via the common management endpoints and tools (for example: PowerShell, AZ
CLI, Portal, REST API). Creating or removing HA replicas does not affect active connections on the primary
replica.
Connecting to an HA replica
In Hyperscale databases, the ApplicationIntent argument in the connection string used by the client dictates
whether the connection is routed to the read-write primary replica or to a read-only HA replica. If
ApplicationIntent is set to ReadOnly and the database doesn't have a secondary replica, connection will be
routed to the primary replica and will default to the ReadWrite behavior.

-- Connection string with application intent
Server=tcp:<myserver>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadOnly;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False;Encrypt=True;

All HA replicas are identical in their resource capacity. If more than one HA replica is present, the read-intent
workload is distributed arbitrarily across all available HA replicas. When there are multiple HA replicas, keep in
mind that each one could have different data latency with respect to data changes made on the primary. Each
HA replica uses the same data as the primary on the same set of page servers. However, local data caches on
each HA replica reflect the changes made on the primary via the transaction log service, which forwards log
records from the primary replica to HA replicas. As the result, depending on the workload being processed by
an HA replica, application of log records may happen at different speeds, and thus different replicas could have
different data latency relative to the primary replica.

Named replica
A named replica, just like an HA replica, uses the same page servers as the primary replica. Similar to HA
replicas, there is no data copy needed to add a named replica.
The difference from HA replicas is that named replicas:
appear as regular (read-only) Azure SQL databases in the portal and in API (AZ CLI, PowerShell, T-SQL) calls;
can have a database name different from the primary replica, and optionally be located on a different logical
server (as long as it is in the same region as the primary replica);
have their own Service Level Objective that can be set and changed independently from the primary replica;
can number up to 30 for each primary replica;
support different authentication for each named replica by creating different logins on logical servers
hosting named replicas.
As a result, named replicas offer several benefits over HA replicas for read-only workloads:
users connected to a named replica suffer no disconnection if the primary replica is scaled up or down; at
the same time, users connected to the primary replica are unaffected by named replicas scaling up or down
workloads running on any replica, primary or named, are unaffected by long-running queries running on
other replicas
The main goal of named replicas is to enable a broad variety of read scale-out scenarios, and to improve Hybrid
Transactional and Analytical Processing (HTAP) workloads. Examples of how to create such solutions are
available here:
OLTP scale-out sample
Aside from the main scenarios listed above, named replicas offer flexibility and elasticity to also satisfy many
other use cases:
Access Isolation: you can grant access to a specific named replica, but not the primary replica or other named
replicas.
Workload-dependent service level objective: as a named replica can have its own service level objective, it is
possible to use different named replicas for different workloads and use cases. For example, one named
replica could be used to serve Power BI requests, while another can be used to serve data to Apache Spark
for Data Science tasks. Each one can have an independent service level objective and scale independently.
Workload-dependent routing: with up to 30 named replicas, it is possible to use named replicas in groups so
that one application can be isolated from another. For example, a group of four named replicas could be used
to serve requests coming from mobile applications, while another group of two named replicas can be used to
serve requests coming from a web application. This approach would allow a fine-grained tuning of
performance and costs for each group.
The following example creates a named replica WideWorldImporters_NamedReplica for the database
WideWorldImporters . The primary replica uses service level objective HS_Gen5_4, while the named replica uses
HS_Gen5_2. Both use the same logical server contosoeast (a T-SQL sketch of this operation follows the portal steps
below). If you prefer to use the REST API directly, see Databases - Create A Database As Named Replica Secondary.
Portal
T-SQL
PowerShell
Azure CLI

1. In the Azure portal, browse to the database for which you want to create the named replica.
2. On the SQL Database page, select your database, scroll to Data management, select Replicas, and then
select Create replica.

3. Choose Named replica under Replica configuration, select or create the server for the named replica,
enter the named replica database name, and configure the Compute + storage options if necessary.
4. Click Review + create, review the information, and then click Create.
5. The named replica deployment process begins.

6. When the deployment is complete, the named replica displays its status.
7. Return to the primary database page, and then select Replicas. Your named replica is listed under
Named replicas.
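For reference, on the T-SQL tab the same operation corresponds roughly to the following statement; this is a
sketch based on the documented ALTER DATABASE ... ADD SECONDARY syntax and the example names above:

-- Typically run in the master database of the logical server hosting the primary (contosoeast in this example).
ALTER DATABASE [WideWorldImporters]
    ADD SECONDARY ON SERVER [contosoeast]
    WITH (SERVICE_OBJECTIVE = 'HS_Gen5_2',
          SECONDARY_TYPE = Named,
          DATABASE_NAME = [WideWorldImporters_NamedReplica]);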

As there is no data movement involved, in most cases a named replica will be created in about a minute. Once
the named replica is available, it will be visible from the portal or any command-line tool like AZ CLI or
PowerShell. A named replica is usable as a regular read-only database.

NOTE
For frequently asked questions on Hyperscale named replicas, see Azure SQL Database Hyperscale named replicas FAQ.

Connecting to a named replica


To connect to a named replica, you must use the connection string for that named replica, referencing its server
and database names. There is no need to specify the option "ApplicationIntent=ReadOnly" as named replicas are
always read-only.
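For example, a connection string for the named replica created earlier might look like the following (the server
name and credentials are placeholders):

Server=tcp:contosoeast.database.windows.net;Database=WideWorldImporters_NamedReplica;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False;Encrypt=True;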
Just like for HA replicas, even though the primary, HA, and named replicas share the same data on the same set
of page servers, data caches on each named replica are kept in sync with the primary via the transaction log
service, which forwards log records from the primary to named replicas. As a result, depending on the
workload being processed by a named replica, application of the log records may happen at different speeds,
and thus different replicas could have different data latency relative to the primary replica.
Modifying a named replica
You can define the service level objective of a named replica when you create it, via the ALTER DATABASE
command or in any other supported way (Portal, AZ CLI, PowerShell, REST API). If you need to change the
service level objective after the named replica has been created, you can do it using the
ALTER DATABASE ... MODIFY command on the named replica itself. For example, if
WideWorldImporters_NamedReplica is the named replica of WideWorldImporters database, you can do it as shown
below.
Portal
T-SQL
PowerShell
Azure CLI

Open the named replica database page, and then select Compute + storage. Update the vCores.
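On the T-SQL tab, the equivalent change is a service level objective update on the named replica itself; a sketch
using the example names from this article:

-- Change the service level objective of the named replica.
ALTER DATABASE [WideWorldImporters_NamedReplica]
    MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_4');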
Removing a named replica
To remove a named replica, you drop it just like you would a regular database.

Portal
T-SQL
PowerShell
Azure CLI

Open the named replica database page, and choose the Delete option.
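On the T-SQL tab, this is a regular DROP DATABASE, typically run in the master database of the server hosting the
named replica (a sketch using the example name from this article):

DROP DATABASE [WideWorldImporters_NamedReplica];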

IMPORTANT
Named replicas will be automatically removed when the primary replica from which they have been created is deleted.

Known issues
Partially incorrect data returned from sys.databases
Row values returned from sys.databases , for named replicas, in columns other than name and database_id ,
may be inconsistent and incorrect. For example, the compatibility_level column for a named replica could be
reported as 140 even if the primary database from which the named replica has been created is set to 150. A
workaround, when possible, is to get the same data using the DATABASEPROPERTYEX() function, which will return
correct data.
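For example, instead of relying on sys.databases, properties such as the edition, service level objective, or
updateability of a named replica can be read with DATABASEPROPERTYEX (a minimal sketch using documented property
names):

SELECT DATABASEPROPERTYEX(N'WideWorldImporters_NamedReplica', 'Edition')          AS Edition,
       DATABASEPROPERTYEX(N'WideWorldImporters_NamedReplica', 'ServiceObjective') AS ServiceObjective,
       DATABASEPROPERTYEX(N'WideWorldImporters_NamedReplica', 'Updateability')    AS Updateability;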

Geo-replica
With active geo-replication, you can create a readable secondary replica of the primary Hyperscale database in
the same or in a different Azure region. Geo-replicas must be created on a different logical server. The database
name of a geo-replica always matches the database name of the primary.
When creating a geo-replica, all data is copied from the primary to a different set of page servers. A geo-replica
does not share page servers with the primary, even if they are in the same region. This architecture provides the
necessary redundancy for geo-failovers.
Geo-replicas are used to maintain a transactionally consistent copy of the database via asynchronous
replication. If a geo-replica is in a different Azure region, it can be used for disaster recovery in case of a disaster
or outage in the primary region. Geo-replicas can also be used for geographic read scale-out scenarios.
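Like other Azure SQL Database geo-replication scenarios, a geo-replica can be created from the portal, PowerShell,
the Azure CLI, the REST API, or T-SQL. The following is only a sketch: contosodr is a hypothetical logical server
(typically in another region), and the statement assumes the standard ALTER DATABASE ... ADD SECONDARY syntax for
active geo-replication:

-- Run in the master database of the logical server hosting the primary database.
ALTER DATABASE [WideWorldImporters]
    ADD SECONDARY ON SERVER [contosodr]
    WITH (ALLOW_CONNECTIONS = ALL);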
Geo-replication for Hyperscale databases has the following current limitations:
Only one geo-replica can be created (in the same or different region).
Point in time restore of the geo-replica is not supported.
Creating a database copy of the geo-replica is not supported.
Secondary of a secondary (also known as "geo-replica chaining") is not supported.

Next steps
Hyperscale service tier
Active geo-replication
Configure Security to allow isolated access to Azure SQL Database Hyperscale Named Replicas
Azure SQL Database Hyperscale named replicas FAQ
Compare vCore and DTU-based purchasing models
of Azure SQL Database
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database


Azure SQL Database lets you easily purchase a fully managed platform as a service (PaaS) database engine that
fits your performance and cost needs. Depending on the deployment model you've chosen for Azure SQL
Database, you can select the purchasing model that works for you:
Virtual core (vCore)-based purchasing model (recommended). This purchasing model provides a choice
between a provisioned compute tier and a serverless compute tier. With the provisioned compute tier, you
choose the exact amount of compute resources that are always provisioned for your workload. With the
serverless compute tier, you specify the autoscaling of the compute resources over a configurable compute
range. The serverless compute tier automatically pauses databases during inactive periods when only
storage is billed and automatically resumes databases when activity returns. The vCore unit price per unit of
time is lower in the provisioned compute tier than it is in the serverless compute tier. The Hyperscale service
tier is available for single databases that are using the vCore-based purchasing model.
Database transaction unit (DTU)-based purchasing model. This purchasing model provides bundled compute
and storage packages balanced for common workloads.

Purchasing models
There are two purchasing models:
vCore-based purchasing model is available for both Azure SQL Database and Azure SQL Managed Instance.
The Hyperscale service tier is available for single databases that are using the vCore-based purchasing
model.
DTU-based purchasing model is available for Azure SQL Database.
The following table and chart compares and contrasts the vCore-based and the DTU-based purchasing models:

DTU-based
- Description: This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes
are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools.
For more information about DTUs and eDTUs, see What are DTUs and eDTUs?.
- Best for: Customers who want simple, preconfigured resource options.

vCore-based
- Description: This model allows you to independently choose compute and storage resources. The vCore-based
purchasing model also allows you to use Azure Hybrid Benefit for SQL Server to save costs.
- Best for: Customers who value flexibility, control, and transparency.
vCore purchasing model
A virtual core (vCore) represents a logical CPU and offers you the option to choose between generations of
hardware and the physical characteristics of the hardware (for example, the number of cores, the memory, and
the storage size). The vCore-based purchasing model gives you flexibility, control, transparency of individual
resource consumption, and a straightforward way to translate on-premises workload requirements to the cloud.
This model allows you to choose compute, memory, and storage resources based on your workload needs.
In the vCore-based purchasing model for SQL Database, you can choose between the General Purpose and
Business Critical service tiers. Review service tiers to learn more. For single databases, you can also choose the
Hyperscale service tier.
In the vCore-based purchasing model, your costs depend on the choice and usage of:
Service tier
Hardware configuration
Compute resources (the number of vCores and the amount of memory)
Reserved database storage
Actual backup storage

DTU purchasing model


The DTU-based purchasing model uses a database transaction unit (DTU) to calculate and bundle compute costs.
A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. The DTU-
based purchasing model offers a set of preconfigured bundles of compute resources and included storage to
drive different levels of application performance. If you prefer the simplicity of a preconfigured bundle and fixed
payments each month, the DTU-based model might be more suitable for your needs.
In the DTU-based purchasing model, you can choose between the basic, standard, and premium service tiers for
Azure SQL Database. Review DTU service tiers to learn more.
To convert from the DTU-based purchasing model to the vCore-based purchasing model, see Migrate from DTU
to vCore.

Compute costs
Compute costs are calculated differently based on each purchasing model.
DTU compute costs
In the DTU purchasing model, DTUs are offered in preconfigured bundles of compute resources and included
storage to drive different levels of application performance. You are billed by the number of DTUs you allocate to
your database for your application.
vCore compute costs
In the vCore-based purchasing model, choose between the provisioned compute tier, or the serverless compute
tier. In the provisioned compute tier, the compute cost reflects the total compute capacity that is provisioned for
the application. In the serverless compute tier, compute resources are auto-scaled based on workload capacity
and billed for the amount of compute used, per second.
For single databases, compute resources, I/O, and data and log storage are charged per database. For elastic
pools, these resources are charged per pool. However, backup storage is always charged per database.
Since three additional replicas are automatically allocated in the Business Critical service tier, the price is
approximately 2.7 times higher than it is in the General Purpose service tier. Likewise, the higher storage price
per GB in the Business Critical service tier reflects the higher IO limits and lower latency of the local SSD storage.

Storage costs
Storage costs are calculated differently based on each purchasing model.
DTU storage costs
Storage is included in the price of the DTU. It's possible to add extra storage in the standard and premium tiers.
See the Azure SQL Database pricing options for details on provisioning extra storage. Long-term backup
retention is not included, and is billed separately.

vCore storage costs


Different types of storage are billed differently. For data storage, you're charged for the provisioned storage
based upon the maximum database or pool size you select. The cost doesn't change unless you reduce or
increase that maximum. Backup storage is associated with automated backups of your databases and is
allocated dynamically. Increasing your backup retention period may increase the backup storage that's
consumed by your databases.
By default, seven days of automated backups of your databases are copied to a storage account. This storage is
used by full backups, differential backups, and transaction log backups. The size of differential and transaction
log backups depends on the rate of change of the database. A minimum storage amount equal to 100 percent of
the maximum data size for the database is provided at no extra charge. Additional consumption of backup
storage is charged in GB per month.
The cost of backup storage is the same for the Business Critical service tier and the General Purpose service tier
because both tiers use standard storage for backups.
For more information about storage prices, see the pricing page.

Frequently asked questions (FAQs)


Do I need to take my application offline to convert from a DTU-based service tier to a vCore-based service
tier?
No. You don't need to take the application offline. The new service tiers offer a simple online-conversion method
that's similar to the existing process of upgrading databases from the standard to the premium service tier and
the other way around. You can start this conversion by using the Azure portal, PowerShell, the Azure CLI, T-SQL,
or the REST API. See Manage single databases and Manage elastic pools.
Can I convert a database from a service tier in the vCore-based purchasing model to a service tier in the
DTU-based purchasing model?
Yes, you can easily convert your database to any supported performance objective by using the Azure portal,
PowerShell, the Azure CLI, T-SQL, or the REST API. See Manage single databases and Manage elastic pools.
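As a sketch, a conversion in either direction via T-SQL is just a service objective change on the database; the
service objective names below are illustrative:

-- Move a database from a DTU-based objective (for example, S3) to a vCore-based one.
ALTER DATABASE [mydatabase]
    MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_2');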
Next steps
For more information about the vCore-based purchasing model, see vCore-based purchasing model.
For more information about the DTU-based purchasing model, see DTU-based purchasing model.
vCore purchasing model - Azure SQL Database
7/12/2022 • 10 minutes to read

APPLIES TO: Azure SQL Database


This article reviews the vCore purchasing model for Azure SQL Database. For help choosing between the vCore
and DTU purchasing models, see the differences between the vCore and DTU purchasing models.

Overview
A virtual core (vCore) represents a logical CPU and offers you the option to choose the physical characteristics
of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based
purchasing model gives you flexibility, control, transparency of individual resource consumption, and a
straightforward way to translate on-premises workload requirements to the cloud. This model optimizes price,
and allows you to choose compute, memory, and storage resources based on your workload needs.
In the vCore-based purchasing model, your costs depend on the choice and usage of:
Service tier
Hardware configuration
Compute resources (the number of vCores and the amount of memory)
Reserved database storage
Actual backup storage

IMPORTANT
Compute resources, I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged per
each database.

The vCore purchasing model used by Azure SQL Database provides several benefits over the DTU purchasing
model:
Higher compute, memory, I/O, and storage limits.
Choice of hardware configuration to better match compute and memory requirements of the workload.
Pricing discounts for Azure Hybrid Benefit (AHB).
Greater transparency in the hardware details that power the compute, which facilitates planning for migrations
from on-premises deployments.
Reserved instance pricing is only available for vCore purchasing model.
Higher scaling granularity with multiple compute sizes available.

Service tiers
Service tier options in the vCore purchasing model include General Purpose, Business Critical, and Hyperscale.
The service tier generally defines the hardware, storage type and IOPS, high availability and disaster
recovery options, and other features like memory-optimized object types.
For greater details, review resource limits for logical server, single databases, and pooled databases.
Best for
- General Purpose: Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options.
- Business Critical: Offers business applications the highest resilience to failures by using several isolated
replicas, and provides the highest I/O performance per database replica.
- Hyperscale: Most business workloads with highly scalable storage and read-scale requirements. Offers higher
resilience to failures by allowing configuration of more than one isolated database replica.

Availability
- General Purpose: 1 replica, no read-scale replicas, zone-redundant high availability (HA)
- Business Critical: 3 replicas, 1 read-scale replica, zone-redundant high availability (HA)
- Hyperscale: zone-redundant high availability (HA) (preview)

Pricing/billing
- General Purpose: vCore, reserved storage, and backup storage are charged. IOPS is not charged.
- Business Critical: vCore, reserved storage, and backup storage are charged. IOPS is not charged.
- Hyperscale: vCore for each replica and used storage are charged. IOPS is not yet charged.

Discount models
- General Purpose: Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions
- Business Critical: Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions
- Hyperscale: Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions

NOTE
For more information on the Service Level Agreement (SLA), see SLA for Azure SQL Database

Choosing a service tier


For information on selecting a service tier for your particular workload, see the following articles:
When to choose the General Purpose service tier
When to choose the Business Critical service tier
When to choose the Hyperscale service tier

Resource limits
For vCore resource limits, see logical servers, single databases, pooled databases.

Compute tiers
Compute tier options in the vCore model include the provisioned and serverless compute tiers.
While the provisioned compute tier provides a specific amount of compute resources that are
continuously provisioned independent of workload activity, the serverless compute tier auto-scales
compute resources based on workload activity.
While the provisioned compute tier bills for the amount of compute provisioned at a fixed price per hour,
the serverless compute tier bills for the amount of compute used, per second.

Hardware configuration
Common hardware configurations in the vCore model include standard-series (Gen5), Fsv2-series, and DC-
series. Hardware configuration defines compute and memory limits and other characteristics that impact
workload performance.
Certain hardware configurations such as Gen5 may use more than one type of processor (CPU), as described in
Compute resources (CPU and memory). While a given database or elastic pool tends to stay on the hardware
with the same CPU type for a long time (commonly for multiple months), there are certain events that can cause
a database or pool to be moved to hardware that uses a different CPU type. For example, a database or pool can
be moved if it is scaled up or down to a different service objective, or if the current infrastructure in a datacenter
is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of
life.
For some workloads, a move to a different CPU type can change performance. SQL Database configures
hardware with the goal to provide predictable workload performance even if CPU type changes, keeping
performance changes within a narrow band. However, across the wide spectrum of customer workloads running
in SQL Database, and as new types of CPUs become available, it is possible to occasionally see more noticeable
changes in performance if a database or pool moves to a different CPU type.
Regardless of CPU type used, resource limits for a database or elastic pool remain the same as long as the
database stays on the same service objective.
Gen4/Gen5
Gen4/Gen5 hardware provides balanced compute and memory resources, and is suitable for most database
workloads that do not have higher memory, higher vCore, or faster single vCore requirements as provided
by Fsv2-series or M-series.
For regions where Gen4/Gen5 is available, see Gen4/Gen5 availability.
Fsv2-series
Fsv2-series is a compute optimized hardware configuration delivering low CPU latency and high clock speed
for the most CPU demanding workloads.
Depending on the workload, Fsv2-series can deliver more CPU performance per vCore than other types of
hardware. For example, the 72 vCore Fsv2 compute size can provide more CPU performance than 80 vCores
on Gen5, at lower cost.
Fsv2 provides less memory and tempdb per vCore than other hardware, so workloads sensitive to those
limits may perform better on standard-series (Gen5).
Fsv2-series is only supported in the General Purpose tier. For regions where Fsv2-series is available, see Fsv2-
series availability.
M -series
M-series is a memory optimized hardware configuration for workloads demanding more memory and
higher compute limits than provided by other types of hardware.
M-series provides 29 GB per vCore and up to 128 vCores, which increases the memory limit relative to Gen5
by 8x to nearly 4 TB.
M-series is only supported in the Business Critical tier and does not support zone redundancy. For regions
where M-series is available, see M-series availability.
Azure offer types supported by M-series
For regions where M-series is available, see M-series availability.
There are two subscription requirements for M-series hardware:
1. To create databases or elastic pools on M-series hardware, the subscription must be a paid offer type
including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported
by M-series, see current offers without spending limits.
2. To enable M-series hardware for a subscription and region, a support request must be opened. In the
Azure portal, create a New Support Request to Request a quota increase for your subscription. Use the
"M-series region access" quota type request to indicate access to M-series hardware.
DC -series
DC-series hardware uses Intel processors with Software Guard Extensions (Intel SGX) technology.
DC-series is required for Always Encrypted with secure enclaves, which is not supported with other hardware
configurations.
DC-series is designed for workloads that process sensitive data and demand confidential query processing
capabilities, provided by Always Encrypted with secure enclaves.
DC-series hardware provides balanced compute and memory resources.
DC-series is only supported for Provisioned compute (Serverless is not supported) and does not support zone
redundancy. For regions where DC-series is available, see DC-series availability.
Azure offer types supported by DC-series
To create databases or elastic pools on DC-series hardware, the subscription must be a paid offer type including
Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by DC-series,
see current offers without spending limits.
Selecting hardware configuration
You can select hardware configuration for a database or elastic pool in SQL Database at the time of creation. You
can also change hardware configuration of an existing database or elastic pool.
To select a hardware configuration when creating a SQL Database or pool
For detailed information, see Create a SQL Database.
On the Basics tab, select the Configure database link in the Compute + storage section, and then select the
Change configuration link:

Select the desired hardware configuration:


To change hardware configuration of an existing SQL Database or pool
For a database, on the Overview page, select the Pricing tier link:

For a pool, on the Overview page, select Configure .


Follow the steps to change configuration, and select hardware configuration as described in the previous steps.
Hardware availability
Gen4/Gen5

IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.

Gen5 hardware is available in all public regions worldwide.


Fsv2-series
Fsv2-series is available in the following regions: Australia Central, Australia Central 2, Australia East, Australia
Southeast, Brazil South, Canada Central, East Asia, East US, France Central, India Central, Korea Central, Korea
South, North Europe, South Africa North, Southeast Asia, UK South, UK West, West Europe, West US 2.
M-series
To enable M-series hardware for a subscription and region, a support request must be opened. In the Azure
portal, create a New Support Request to Request a quota increase for your subscription. Use the "M-series
region access" quota type request to indicate access to M-series hardware.
With approved access, M-series is available in the following regions: East US, North Europe, West Europe, West
US 2.
DC-series
DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South,
West Europe, West US.
If you need DC-series in a currently unsupported region, submit a support ticket. On the Basics page, provide
the following:
1. For Issue type, select Technical.
2. For Service type, select SQL Database.
3. For Problem type, select Security, Private and Compliance.
4. For Problem subtype, select Always Encrypted.

Compute resources (CPU and memory)


The following table compares compute resources in different hardware configurations and compute tiers:

Gen4
- CPU: Intel® E5-2673 v3 (Haswell) 2.4-GHz processors; provision up to 24 vCores (physical)
- Memory: 7 GB per vCore; provision up to 168 GB

Gen5 (provisioned compute)
- CPU: Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake)*, Intel® 8272CL (Cascade Lake) 2.5 GHz*,
and Intel® Xeon Platinum 8307C (Ice Lake)* processors; provision up to 80 vCores (hyper-threaded)
- Memory: 5.1 GB per vCore; provision up to 408 GB

Gen5 (serverless compute)
- CPU: Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake)*, Intel® 8272CL (Cascade Lake) 2.5 GHz*,
and Intel® Xeon Platinum 8307C (Ice Lake)* processors; auto-scale up to 40 vCores (hyper-threaded)
- Memory: auto-scale up to 24 GB per vCore; auto-scale up to 120 GB max

Fsv2-series
- CPU: Intel® 8168 (Skylake) processors, featuring a sustained all core turbo clock speed of 3.4 GHz and a
maximum single core turbo clock speed of 3.7 GHz; provision up to 72 vCores (hyper-threaded)
- Memory: 1.9 GB per vCore; provision up to 136 GB

M-series
- CPU: Intel® E7-8890 v3 2.5 GHz and Intel® 8280M 2.7 GHz (Cascade Lake) processors; provision up to 128 vCores
(hyper-threaded)
- Memory: 29 GB per vCore; provision up to 3.7 TB

DC-series
- CPU: Intel® XEON E-2288G processors, featuring Intel Software Guard Extensions (Intel SGX); provision up to
8 vCores (physical)
- Memory: 4.5 GB per vCore

* In the sys.dm_user_db_resource_governance dynamic management view, hardware generation for databases


using Intel® SP-8160 (Skylake) processors appears as Gen6, hardware generation for databases using Intel®
8272CL (Cascade Lake) appears as Gen7, and hardware generation for databases using Intel Xeon® Platinum
8307C (Ice Lake) appears as Gen8. For a given compute size and hardware configuration, resource limits are the
same regardless of CPU type (Broadwell, Skylake, Ice Lake, or Cascade Lake).
For more information see resource limits for single databases and elastic pools.

Next steps
To get started, see Creating a SQL Database using the Azure portal
For pricing details, see the Azure SQL Database pricing page
For details about the specific compute and storage sizes available, see:
vCore-based resource limits for Azure SQL Database
vCore-based resource limits for pooled Azure SQL Database
DTU-based purchasing model overview
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database


In this article, learn about the DTU-based purchasing model for Azure SQL Database.
To learn more, review vCore-based purchasing model and compare purchasing models.

Database transaction units (DTUs)


A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. Service
tiers in the DTU-based purchasing model are differentiated by a range of compute sizes with a fixed amount of
included storage, fixed retention period for backups, and fixed price. All service tiers in the DTU-based
purchasing model provide flexibility of changing compute sizes with minimal downtime; however, there is a
switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated
using retry logic. Single databases and elastic pools are billed hourly based on service tier and compute size.
For a single database at a specific compute size within a service tier, Azure SQL Database guarantees a certain
level of resources for that database (independent of any other database). This guarantee provides a predictable
level of performance. The amount of resources allocated for a database is calculated as a number of DTUs and is
a bundled measure of compute, storage, and I/O resources.
The ratio among these resources is originally determined by an online transaction processing (OLTP) benchmark
workload designed to be typical of real-world OLTP workloads. When your workload exceeds the amount of any
of these resources, your throughput is throttled, resulting in slower performance and time-outs.
For single databases, the resources used by your workload don't impact the resources available to other
databases in the Azure cloud. Likewise, the resources used by other workloads don't impact the resources
available to your database.

DTUs are most useful for understanding the relative resources that are allocated for databases at different
compute sizes and service tiers. For example:
Doubling the DTUs by increasing the compute size of a database equates to doubling the set of resources
available to that database.
A premium service tier P11 database with 1750 DTUs provides 350 times more DTU compute power than a
basic service tier database with 5 DTUs.
To gain deeper insight into the resource (DTU) consumption of your workload, use query-performance insights
to:
Identify the top queries by CPU/duration/execution count that can potentially be tuned for improved
performance. For example, an I/O-intensive query might benefit from in-memory optimization techniques to
make better use of the available memory at a certain service tier and compute size.
Drill down into the details of a query to view its text and its history of resource usage.
Access performance-tuning recommendations that show actions taken by SQL Database Advisor.
Elastic database transaction units (eDTUs)
Rather than provide a dedicated set of resources (DTUs) that might not always be needed, you can place these
databases into an elastic pool. The databases in an elastic pool use a single instance of the database engine and
share the same pool of resources.
The shared resources in an elastic pool are measured by elastic database transaction units (eDTUs). Elastic pools
provide a simple, cost-effective solution to manage performance goals for multiple databases that have widely
varying and unpredictable usage patterns. An elastic pool guarantees that all the resources can't be consumed
by one database in the pool, while ensuring that each database in the pool always has a minimum amount of
necessary resources available.
A pool is given a set number of eDTUs for a set price. In the elastic pool, individual databases can autoscale
within the configured boundaries. A database under a heavier load will consume more eDTUs to meet demand.
Databases under lighter loads will consume fewer eDTUs. Databases with no load will consume no eDTUs.
Because resources are provisioned for the entire pool, rather than per database, elastic pools simplify your
management tasks and provide a predictable budget for the pool.
You can add additional eDTUs to an existing pool with minimal database downtime. Similarly, if you no longer
need extra eDTUs, remove them from an existing pool at any time. You can also add databases to or remove
databases from a pool at any time. To reserve eDTUs for other databases, limit the number of eDTUs databases
can use under a heavy load. If a database has consistently high resource utilization that impacts other databases
in the pool, move it out of the pool and configure it as a single database with a predictable amount of required
resources.
Workloads that benefit from an elastic pool of resources
Pools are well suited for databases with a low resource-utilization average and relatively infrequent utilization
spikes. For more information, see When should you consider a SQL Database elastic pool?.

Determine the number of DTUs needed by a workload


If you want to migrate an existing on-premises or SQL Server virtual machine workload to SQL Database, see
SKU recommendations to approximate the number of DTUs needed. For an existing SQL Database workload, use
query-performance insights to understand your database-resource consumption (DTUs) and gain deeper
insights for optimizing your workload. The sys.dm_db_resource_stats dynamic management view (DMV) lets
you view resource consumption for the last hour. The sys.resource_stats catalog view displays resource
consumption for the last 14 days, but at a lower fidelity of five-minute averages.

Determine DTU utilization


To determine the average percentage of DTU/eDTU utilization relative to the DTU/eDTU limit of a database or an
elastic pool, use the following formula:
avg_dtu_percent = MAX(avg_cpu_percent, avg_data_io_percent, avg_log_write_percent)

The input values for this formula can be obtained from sys.dm_db_resource_stats, sys.resource_stats, and
sys.elastic_pool_resource_stats DMVs. In other words, to determine the percentage of DTU/eDTU utilization
toward the DTU/eDTU limit of a database or an elastic pool, pick the largest percentage value from the following:
avg_cpu_percent , avg_data_io_percent , and avg_log_write_percent at a given point in time.
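As a sketch, the same calculation can be run directly against the sys.dm_db_resource_stats view on the database
(for an elastic pool, use sys.elastic_pool_resource_stats in the master database instead):

SELECT end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS resource_usage(v)) AS avg_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;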

NOTE
The DTU limit of a database is determined by CPU, reads, writes, and memory available to the database. However,
because the SQL Database engine typically uses all available memory for its data cache to improve performance, the
avg_memory_usage_percent value will usually be close to 100 percent, regardless of current database load. Therefore,
even though memory does indirectly influence the DTU limit, it is not used in the DTU utilization formula.

Hardware configuration
In the DTU-based purchasing model, customers cannot choose the hardware configuration used for their
databases. While a given database usually stays on a specific type of hardware for a long time (commonly for
multiple months), there are certain events that can cause a database to be moved to different hardware.
For example, a database can be moved to different hardware if it's scaled up or down to a different service
objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used
hardware is being decommissioned due to its end of life.
If a database is moved to different hardware, workload performance can change. The DTU model guarantees
that the throughput and response time of the DTU benchmark workload will remain substantially identical as the
database moves to a different hardware type, as long as its service objective (the number of DTUs) stays the
same.
However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using
different hardware for the same service objective can be more pronounced. Different workloads may benefit
from different hardware configurations and features. Therefore, for workloads other than the DTU benchmark,
it's possible to see performance differences if the database moves from one type of hardware to another.
Customers can use the vCore model to choose their preferred hardware configuration during database creation
and scaling. In the vCore model, detailed resource limits of each service objective in each hardware
configuration are documented for single databases and elastic pools. For more information about hardware in
the vCore model, see Hardware configuration for SQL Database or Hardware configuration for SQL Managed
Instance.

Compare service tiers


Choosing a service tier depends primarily on business continuity, storage, and performance requirements.

                        BASIC                        STANDARD                     PREMIUM

Target workload         Development and production   Development and production   Development and production

Uptime SLA              99.99%                       99.99%                       99.99%

Maximum backup
retention               7 days                       35 days                      35 days

CPU                     Low                          Low, Medium, High            Medium, High

IOPS (approximate)*     1-4 IOPS per DTU             1-4 IOPS per DTU             >25 IOPS per DTU

IO latency
(approximate)           5 ms (read), 10 ms (write)   5 ms (read), 10 ms (write)   2 ms (read/write)

Columnstore indexing    N/A                          S3 and above                 Supported

In-memory OLTP          N/A                          N/A                          Supported

* All read and write IOPS against data files, including background IO (checkpoint and lazy writer)

IMPORTANT
The Basic, S0, S1 and S2 service objectives provide less than one vCore (CPU). For CPU-intensive workloads, a service
objective of S3 or greater is recommended.
In the Basic, S0, and S1 service objectives, database files are stored in Azure Standard Storage, which uses hard disk drive
(HDD)-based storage media. These service objectives are best suited for development, testing, and other infrequently
accessed workloads that are less sensitive to performance variability.

TIP
To see actual resource governance limits for a database or elastic pool, query the sys.dm_user_db_resource_governance
view.
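For example, a minimal query against that view might look like the following; slo_name, dtu_limit, and cpu_limit
are documented columns, but the full column set varies by version, so treat this as a sketch:

SELECT slo_name, dtu_limit, cpu_limit
FROM sys.dm_user_db_resource_governance;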

NOTE
You can get a free database in Azure SQL Database at the Basic service tier in conjunction with an Azure free account to
explore Azure. For information, see Create a managed cloud database with your Azure free account.

Resource limits
Resource limits differ for single and pooled databases.
Single database storage limits
Compute sizes are expressed in terms of Database Transaction Units (DTUs) for single databases and elastic
Database Transaction Units (eDTUs) for elastic pools. To learn more, review Resource limits for single databases.

                        BASIC     STANDARD     PREMIUM

Maximum storage size    2 GB      1 TB         4 TB

Maximum DTUs            5         3000         4000

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

Elastic pool limits


To learn more, review Resource limits for pooled databases.
                                        BASIC     STANDARD     PREMIUM

Maximum storage size per database       2 GB      1 TB         1 TB

Maximum storage size per pool           156 GB    4 TB         4 TB

Maximum eDTUs per database              5         3000         4000

Maximum eDTUs per pool                  1600      3000         4000

Maximum number of databases per pool    500       500          100

IMPORTANT
More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North,
Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For
more information, see P11-P15 current limitations.

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
manage file space in Azure SQL Database.

DTU Benchmark
Physical characteristics (CPU, memory, IO) associated with each DTU measure are calibrated using a benchmark
that simulates real-world database workload.
Learn about the schema, transaction types used, workload mix, users and pacing, scaling rules, and metrics
associated with the DTU benchmark.

Compare DTU-based and vCore purchasing models


While the DTU-based purchasing model is based on a bundled measure of compute, storage, and I/O resources,
by comparison the vCore purchasing model for Azure SQL Database allows you to independently choose and
scale compute and storage resources.
The vCore-based purchasing model also allows you to use Azure Hybrid Benefit for SQL Server to save costs,
and offers Serverless and Hyperscale options for Azure SQL Database that are not available in the DTU-based
purchasing model.
Learn more in Compare vCore and DTU-based purchasing models of Azure SQL Database.

Next steps
Learn more about purchasing models and related concepts in the following articles:
For details on specific compute sizes and storage size choices available for single databases, see SQL
Database DTU-based resource limits for single databases.
For details on specific compute sizes and storage size choices available for elastic pools, see SQL Database
DTU-based resource limits.
For information on the benchmark associated with the DTU-based purchasing model, see DTU benchmark.
Compare vCore and DTU-based purchasing models of Azure SQL Database.
Azure SQL Database and Azure Synapse Analytics
connectivity architecture
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics


This article explains architecture of various components that direct network traffic to a server in Azure SQL
Database or Azure Synapse Analytics. It also explains different connection policies and how it impacts clients
connecting from within Azure and clients connecting from outside of Azure.
This article does not apply to Azure SQL Managed Instance . Refer to Connectivity architecture for a managed
instance.

Connectivity architecture
The following diagram provides a high-level overview of the connectivity architecture.

The following steps describe how a connection is established to Azure SQL Database:
Clients connect to the gateway, which has a public IP address and listens on port 1433.
The gateway, depending on the effective connection policy, redirects or proxies the traffic to the right
database cluster.
Inside the database cluster, traffic is forwarded to the appropriate database.

Connection policy
Servers in SQL Database and Azure Synapse support the following three options for the server's connection
policy setting:
Redirect (recommended): Clients establish connections directly to the node hosting the database,
leading to reduced latency and improved throughput. For connections to use this mode, clients need to:
Allow outbound communication from the client to all Azure SQL IP addresses in the region on ports in
the range of 11000 to 11999. Use the Service Tags for SQL to make this easier to manage.
Allow outbound communication from the client to Azure SQL Database gateway IP addresses on port
1433.
Proxy: In this mode, all connections are proxied via the Azure SQL Database gateways, leading to
increased latency and reduced throughput. For connections to use this mode, clients need to allow
outbound communication from the client to Azure SQL Database gateway IP addresses on port 1433.
Default: This is the connection policy in effect on all servers after creation unless you explicitly alter the
connection policy to either Proxy or Redirect . The default policy is Redirect for all client connections
originating inside of Azure (for example, from an Azure Virtual Machine) and Proxy for all client
connections originating outside (for example, connections from your local workstation).
We highly recommend the Redirect connection policy over the Proxy connection policy for the lowest latency
and highest throughput. However, you will need to meet the additional requirements for allowing network traffic
as outlined above. If the client is an Azure Virtual Machine, you can accomplish this using Network Security
Groups (NSG) with service tags. If the client is connecting from a workstation on-premises then you may need
to work with your network admin to allow network traffic through your corporate firewall.

Connectivity from within Azure


If you are connecting from within Azure your connections have a connection policy of Redirect by default. A
policy of Redirect means that after the TCP session is established to Azure SQL Database, the client session is
then redirected to the right database cluster with a change to the destination virtual IP from that of the Azure
SQL Database gateway to that of the cluster. Thereafter, all subsequent packets flow directly to the cluster,
bypassing the Azure SQL Database gateway. The following diagram illustrates this traffic flow.

Connectivity from outside of Azure


If you are connecting from outside Azure, your connections have a connection policy of Proxy by default. A
policy of Proxy means that the TCP session is established via the Azure SQL Database gateway and all
subsequent packets flow via the gateway. The following diagram illustrates this traffic flow.

IMPORTANT
Additionally open TCP ports 1434 and 14000-14999 to enable Connecting with DAC

Gateway IP addresses
The table below lists the individual Gateway IP addresses and also Gateway IP address ranges per region.
Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways as per the
process outlined at Azure SQL Database traffic migration to newer Gateways. We strongly encourage customers
to use the Gateway IP address subnets in order to not be impacted by this activity in a region.

IMPORTANT
Logins for SQL Database or Azure Synapse can land on any of the Gateways in a region . For consistent connectivity
to SQL Database or Azure Synapse, allow network traffic to and from ALL Gateway IP addresses and Gateway IP address
subnets for the region.

REGION NAME    GATEWAY IP ADDRESSES    GATEWAY IP ADDRESS SUBNETS

Australia Central 20.36.105.0, 20.36.104.6, 20.36.104.7 20.36.105.32/29

Australia Central 2 20.36.113.0, 20.36.112.6 20.36.113.32/29

Australia East 13.75.149.87, 40.79.161.1, 13.70.112.32/29, 40.79.160.32/29,


13.70.112.9 40.79.168.32/29

Australia Southeast 191.239.192.109, 13.73.109.251, 13.77.49.32/29


13.77.48.10, 13.77.49.32

Brazil South 191.233.200.14, 191.234.144.16, 191.233.200.32/29,


191.234.152.3 191.234.144.32/29

Canada Central 40.85.224.249, 52.246.152.0, 13.71.168.32/29, 20.38.144.32/29,


20.38.144.1 52.246.152.32/29

Canada East 40.86.226.166, 52.242.30.154, 40.69.105.32/29


40.69.105.9 , 40.69.105.10

Central US 13.67.215.62, 52.182.137.15, 104.208.21.192/29, 13.89.168.192/29,


104.208.21.1, 13.89.169.20 52.182.136.192/29

China East 139.219.130.35 52.130.112.136/29

China East 2 40.73.82.1 52.130.120.88/29

China North 139.219.15.17 52.130.128.88/29

China North 2 40.73.50.0 52.130.40.64/29

East Asia 52.175.33.150, 13.75.32.4, 13.75.32.192/29, 13.75.33.192/29


13.75.32.14, 20.205.77.200,
20.205.83.224

East US 40.121.158.30, 40.79.153.12, 20.42.65.64/29, 20.42.73.0/29,


40.78.225.32 52.168.116.64/29

East US 2 40.79.84.180, 52.177.185.181, 104.208.150.192/29,


52.167.104.0, 191.239.224.107, 40.70.144.192/29, 52.167.104.192/29
104.208.150.3, 40.70.144.193

France Central 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.136.32/29, 40.79.144.32/29


40.79.145.12

France South 40.79.177.0, 40.79.177.10 40.79.176.40/29, 40.79.177.32/29


,40.79.177.12

Germany West Central 51.116.240.0, 51.116.248.0, 51.116.152.32/29, 51.116.240.32/29,


51.116.152.0 51.116.248.32/29

Central India 104.211.96.159, 104.211.86.30 , 104.211.86.32/29, 20.192.96.32/29


104.211.86.31, 40.80.48.32,
20.192.96.32

South India 104.211.224.146 40.78.192.32/29, 40.78.193.32/29

West India 104.211.160.80, 104.211.144.4 104.211.144.32/29,


104.211.145.32/29

Japan East 13.78.61.196, 40.79.184.8, 13.78.104.32/29, 40.79.184.32/29,


13.78.106.224, 40.79.192.5, 40.79.192.32/29
13.78.104.32, 40.79.184.32

Japan West 104.214.148.156, 40.74.100.192, 40.74.96.32/29


40.74.97.10

Korea Central 52.231.32.42, 52.231.17.22 20.194.64.32/29,20.44.24.32/29,


,52.231.17.23, 20.44.24.32, 52.231.16.32/29
20.194.64.33

Korea South 52.231.200.86, 52.231.151.96

North Central US 23.96.178.199, 23.98.55.75, 52.162.105.192/29


52.162.104.33, 52.162.105.9

North Europe 40.113.93.91, 52.138.224.1, 13.69.233.136/29, 13.74.105.192/29,


13.74.104.113 52.138.229.72/29

Norway East 51.120.96.0, 51.120.96.33, 51.120.96.32/29


51.120.104.32, 51.120.208.32

Norway West 51.120.216.0 51.120.217.32/29

South Africa North 102.133.152.0, 102.133.120.2, 102.133.120.32/29,


102.133.152.32 102.133.152.32/29,
102.133.248.32/29

South Africa West 102.133.24.0 102.133.25.32/29

South Central US 13.66.62.124, 104.214.16.32, 20.45.121.32/29, 20.49.88.32/29,


20.45.121.1, 20.49.88.1 20.49.89.32/29, 40.124.64.136/29

South East Asia 104.43.15.0, 40.78.232.3, 13.67.16.192/29, 23.98.80.192/29,


13.67.16.193 40.78.232.192/29

Switzerland North 51.107.56.0, 51.107.57.0 51.107.56.32/29, 51.103.203.192/29,


20.208.19.192/29, 51.107.242.32/27

Switzerland West 51.107.152.0, 51.107.153.0 51.107.153.32/29

UAE Central 20.37.72.64 20.37.72.96/29, 20.37.73.96/29

UAE North 65.52.248.0 40.120.72.32/29, 65.52.248.32/29

UK South 51.140.184.11, 51.105.64.0, 51.105.64.32/29, 51.105.72.32/29,


51.140.144.36, 51.105.72.32 51.140.144.32/29

UK West 51.141.8.11, 51.140.208.96, 51.140.208.96/29, 51.140.209.32/29


51.140.208.97

West Central US 13.78.145.25, 13.78.248.43, 13.71.193.32/29


13.71.193.32, 13.71.193.33

West Europe 40.68.37.158, 104.40.168.105, 104.40.169.32/29, 13.69.112.168/29,


52.236.184.163 52.236.184.32/29

West US 104.42.238.205, 13.86.216.196 13.86.217.224/29

West US 2 13.66.226.202, 40.78.240.8, 13.66.136.192/29, 40.78.240.192/29,


40.78.248.10 40.78.248.192/29

West US 3 20.150.168.0, 20.150.184.2 20.150.168.32/29, 20.150.176.32/29,


20.150.184.32/29

Next steps
For information on how to change the Azure SQL Database connection policy for a server, see conn-policy.
For information about Azure SQL Database connection behavior for clients that use ADO.NET 4.5 or a later
version, see Ports beyond 1433 for ADO.NET 4.5.
For general application development overview information, see SQL Database Application Development
Overview.
Azure SQL connectivity settings
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW)
only)
This article introduces settings that control connectivity to the server for Azure SQL Database and dedicated
SQL pool (formerly SQL DW) in Azure Synapse Analytics. These settings apply to all SQL Database and
dedicated SQL pool (formerly SQL DW) databases associated with the server.

IMPORTANT
This article doesn't apply to Azure SQL Managed Instance. This article also does not apply to dedicated SQL pools in
Azure Synapse Analytics workspaces. See Azure Synapse Analytics IP firewall rules for guidance on how to configure IP
firewall rules for Azure Synapse Analytics with workspaces.

The connectivity settings are accessible from the Firewalls and virtual networks screen.

NOTE
These settings take effect immediately after they're applied. Your customers might experience connection loss if they don't
meet the requirements for each setting.

Deny public network access


The default for this setting is No so that customers can connect by using either public endpoints (with IP-based
server-level firewall rules or with virtual-network firewall rules) or private endpoints (by using Azure Private
Link), as outlined in the network access overview.
When Deny public network access is set to Yes , only connections via private endpoints are allowed. All
connections via public endpoints will be denied with an error message similar to:
Error 47073
An instance-specific error occurred while establishing a connection to SQL Server.
The public network interface on this server is not accessible.
To connect to this server, use the Private Endpoint from inside your virtual network.

When Deny public network access is set to Yes , any attempts to add, remove or edit any firewall rules will be
denied with an error message similar to:

Error 42101
Unable to create or modify firewall rules when public network interface for the server is disabled.
To manage server or database level firewall rules, please enable the public network interface.

Ensure that Deny public network access is set to No to be able to add, remove, or edit any firewall rules for
Azure SQL.

Change public network access via PowerShell


IMPORTANT
Azure SQL Database still supports the PowerShell Azure Resource Manager module, but all future development is for the
Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical. The following script requires the Azure PowerShell module.

The following PowerShell script shows how to Get and Set the Public Network Access property at the
server level:

# Get the Public Network Access property
(Get-AzSqlServer -ServerName sql-server-name -ResourceGroupName sql-server-group).PublicNetworkAccess

# Update Public Network Access to Disabled
$SecureString = ConvertTo-SecureString "password" -AsPlainText -Force
Set-AzSqlServer -ServerName sql-server-name -ResourceGroupName sql-server-group -SqlAdministratorPassword $SecureString -PublicNetworkAccess "Disabled"

Change public network access via CLI


IMPORTANT
All scripts in this section require the Azure CLI.

Azure CLI in a Bash shell


The following CLI script shows how to change the Public Network Access setting in a Bash shell:

# Get current setting for Public Network Access


az sql server show -n sql-server-name -g sql-server-group --query "publicNetworkAccess"

# Update setting for Public Network Access


az sql server update -n sql-server-name -g sql-server-group --set publicNetworkAccess="Disabled"

Minimal TLS version


The minimal Transport Layer Security (TLS) version setting allows customers to choose which version of TLS
their SQL database uses.
Currently, we support TLS 1.0, 1.1, and 1.2. Setting a minimal TLS version ensures that only that version and newer
TLS versions are accepted. For example, choosing a minimal TLS version of 1.1 means only connections with TLS 1.1
and 1.2 are accepted, and connections with TLS 1.0 are rejected. After you test to confirm that your applications
support it, we recommend setting the minimal TLS version to 1.2. This version includes fixes for vulnerabilities in
previous versions and is the highest version of TLS that's supported in Azure SQL Database.

IMPORTANT
The default for the minimal TLS version is to allow all versions. After you enforce a version of TLS, it's not possible to
revert to the default.

For customers with applications that rely on older versions of TLS, we recommend setting the minimal TLS
version according to the requirements of your applications. For customers that rely on applications to connect
by using an unencrypted connection, we recommend not setting any minimal TLS version.
For more information, see TLS considerations for SQL Database connectivity.
After you set the minimal TLS version, login attempts from customers who are using a TLS version lower than
the minimal TLS version of the server will fail with the following error:

Error 47072
Login failed with invalid TLS version

Set the minimal TLS version in Azure portal


In the Azure portal, go to your SQL server resource. Under the Security settings, select Firewalls and
virtual networks. Select the Minimum TLS Version desired for all SQL databases associated with the server,
and select Save.

Set the minimal TLS version via PowerShell


IMPORTANT
Azure SQL Database still supports the PowerShell Azure Resource Manager module, but all future development is for the
Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical. The following script requires the Azure PowerShell module.

The following PowerShell script shows how to Get and Set the Minimal TLS Version property at the logical
server level:

# Get the Minimal TLS Version property
(Get-AzSqlServer -ServerName sql-server-name -ResourceGroupName sql-server-group).MinimalTlsVersion

# Update Minimal TLS Version to 1.2
$SecureString = ConvertTo-SecureString "password" -AsPlainText -Force
Set-AzSqlServer -ServerName sql-server-name -ResourceGroupName sql-server-group -SqlAdministratorPassword $SecureString -MinimalTlsVersion "1.2"

Set the minimal TLS version via the Azure CLI


IMPORTANT
All scripts in this section require the Azure CLI.

Azure CLI in a Bash shell


The following CLI script shows how to change the Minimal TLS Version setting in a Bash shell:

# Get current setting for Minimal TLS Version
az sql server show -n sql-server-name -g sql-server-group --query "minimalTlsVersion"

# Update setting for Minimal TLS Version
az sql server update -n sql-server-name -g sql-server-group --set minimalTlsVersion="1.2"

Change the connection policy


Connection policy determines how customers connect to Azure SQL Database.

Change the connection policy via PowerShell


IMPORTANT
Azure SQL Database still supports the PowerShell Azure Resource Manager module, but all future development is for the
Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical. The following script requires the Azure PowerShell module.

The following PowerShell script shows how to change the connection policy by using PowerShell:

# Get SQL Server ID
$sqlserverid=(Get-AzSqlServer -ServerName sql-server-name -ResourceGroupName sql-server-group).ResourceId

# Set URI
$id="$sqlserverid/connectionPolicies/Default"

# Get current connection policy
(Get-AzResource -ResourceId $id -ApiVersion 2014-04-01 -Verbose).Properties.ConnectionType

# Update connection policy
Set-AzResource -ResourceId $id -Properties @{"connectionType" = "Proxy"} -Force

Change the connection policy via the Azure CLI


IMPORTANT
All scripts in this section require the Azure CLI.

Azure CLI in a Bash shell


The following CLI script shows how to change the connection policy in a Bash shell:
# Get SQL Server ID
sqlserverid=$(az sql server show -n sql-server-name -g sql-server-group --query 'id' -o tsv)

# Set URI
ids="$sqlserverid/connectionPolicies/Default"

# Get current connection policy
az resource show --ids $ids

# Update connection policy
az resource update --ids $ids --set properties.connectionType=Proxy

Azure CLI from a Windows command prompt


The following CLI script shows how to change the connection policy from a Windows command prompt (with
the Azure CLI installed):

# Get SQL Server ID and set URI
FOR /F "tokens=*" %g IN ('az sql server show --resource-group myResourceGroup-571418053 --name server-538465606 --query "id" -o tsv') do (SET sqlserverid=%g/connectionPolicies/Default)

# Get current connection policy
az resource show --ids %sqlserverid%

# Update connection policy
az resource update --ids %sqlserverid% --set properties.connectionType=Proxy

Next steps
For an overview of how connectivity works in Azure SQL Database, refer to Connectivity architecture.
For information on how to change the connection policy for a server, see Connection policy.
What is the local development experience for Azure
SQL Database?

APPLIES TO: Azure SQL Database


This article provides an overview of the local development experience for Azure SQL Database.
To get started, see how to set up a dev environment and the Quickstart.

Overview
The Azure SQL Database local development experience is a combination of tools and procedures that empowers
application developers and database professionals to design, edit, build/validate, publish, and run database
schemas for databases while working offline.
The Azure SQL Database local development experience consists of extensions for Visual Studio Code and Azure
Data Studio and an Azure SQL Database emulator (preview). The extensions allow users to create, build, and
source-control Database Projects while working offline against the Azure SQL Database emulator, a containerized
database with close fidelity to the Azure SQL Database public service.
The local development experience uses the emulator as a runtime host for Database Projects that can be
published and tested locally as part of a developer's inner loop.
A common example would be to push a project to a GitHub repository that leverages GitHub Actions to
automate database creation or apply schema changes to a database in Azure SQL Database. The Azure SQL
Database emulator itself can also be used as part of Continuous Integration and Continuous Deployment
(CI/CD) processes to automate database validation and testing.

NOTE
To learn more about upcoming use cases and support for new scenarios, review the Devs' Corner blog.
Visual Studio Code and Azure Data Studio extensions
To use the Azure SQL Database local development experience, install the appropriate extension depending on
whether you are using Visual Studio Code or Azure Data Studio.

The mssql extension for Visual Studio Code
  Description: Enables you to connect and run queries and test scripts against a database. The database may be running in the Azure SQL Database emulator locally, or it may be a database in the global Azure SQL Database service.
  Visual Studio Code: Install the mssql extension.
  Azure Data Studio: There is no need to install the mssql extension because this functionality is provided natively by Azure Data Studio.

SQL Database Projects extension (Preview)
  Description: Enables you to capture an existing database schema and/or design new database objects using a declarative database design model. You can commit a database schema to version control. You can also publish a database schema to a database running in the Azure SQL Database emulator, or to a database running in the global Azure SQL Database service. You may publish an entire database, or incremental changes to a database.
  Visual Studio Code: The SQL Database Projects extension is bundled into the mssql extension for Visual Studio Code and is installed or updated automatically when the mssql extension is updated or installed.
  Azure Data Studio: Install the SQL Database Projects extension.

To learn how to install the extensions, review Set up a local development environment.

Azure SQL Database emulator


The Azure SQL Database emulator (preview) is a containerized database with close fidelity to the Azure SQL
Database public service. Application developers and database professionals can pull the Azure SQL Database
emulator from an image in the Microsoft Container registry and run it on their own workstation. The Azure SQL
Database emulator enables faster local and offline development workflows for Azure SQL Database.
You can also use the Azure SQL Database emulator as part of local or hosted CI/CD pipelines to support unit
and integration testing, without the need to use the global Azure SQL Database cloud service.
Learn more in Azure SQL Database emulator.

Next steps
Learn more about the local development experience for Azure SQL Database:
Set up a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Publish a Database Project for Azure SQL Database to the local emulator
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator (preview)
Introducing the Azure SQL Database emulator
(preview)

APPLIES TO: Azure SQL Database


This article introduces the Azure SQL Database emulator (preview), which provides the ability to locally validate
database and query design together with client application code in a simple and frictionless model as part of the
application development process. The Azure SQL Database emulator is a critical component that speeds up the
overall workflow for application developers and database professionals. You can use the Azure SQL Database
emulator as part of the local development experience for Azure SQL Database.
To get started, see how to set up a dev environment and the Quickstart.

What is the Azure SQL Database emulator?


The Azure SQL Database emulator is a local containerized database for development and testing. The emulator
is a combination of a container image that provides a high-fidelity emulator for Azure SQL Database with a
Visual Studio Code extension. This combination enables developers to pull the Azure SQL Database emulator
from the Microsoft Container Registry and run it on their own workstation to enable faster local and offline
development workflows.
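
As a rough sketch of that pull-and-run workflow from a PowerShell session, the commands below use the docker CLI.
The image path, environment variables, and port mapping are assumptions based on the Azure SQL Edge base image
that the emulator derives from; check the emulator documentation for the exact image name and settings.

# Placeholder image path; substitute the emulator image published in the Microsoft Container Registry
$image = "mcr.microsoft.com/azure-sql-edge:latest"

# Pull the image and start a local instance listening on port 1433
docker pull $image
docker run -d --name sqldb-emulator -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<strong-password>" -p 1433:1433 $image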
This Azure SQL Database emulator image can also be used as part of local or hosted CI/CD pipelines to
provide support for unit and integration testing without needing to reach the public cloud service every time.
Within Visual Studio Code, developers can list, start, and stop existing instances of the Azure SQL Database
emulator using the Docker extension, configure details like local ports or persistent volumes, and manage all
other aspects of the emulator.

This local development experience is supported on Windows, macOS and Linux, and is available on x64 and
ARM64-based hardware platforms.
Once validation and testing have succeeded, developers can directly deploy their SQL Database Projects from
within Visual Studio Code to a database in Azure SQL Database and leverage additional capabilities like
Serverless.

Limitations
The current implementation of the Azure SQL Database emulator is derived from an Azure SQL Edge base
image, because it offers cross-hardware-platform compatibility and a smaller image size. This means that, compared to
the Azure SQL Database public service, some specific features may not be available. For example, the Azure SQL
Database emulator does not support all features that are supported across multiple Azure SQL Database service
tiers. Limitations include:
Spatial data types
Memory-optimized tables in in-memory OLTP
HierarchyID data type
Full-text search
Azure Active Directory Integration
While lack of compatibility with some of these features can be impactful, the emulator is still a great tool for
local development and testing and supports most of the Azure SQL Database programmability surface.
In future releases, we plan to increase feature parity and provide higher fidelity with the Azure SQL Database
public service.
Refer to the Azure SQL Edge documentation for more specific details.

Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Set up a local development environment for Azure SQL Database
Quickstart: Create a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Publish a Database Project for Azure SQL Database to the local emulator
Plan for Intel SGX enclaves and attestation in Azure
SQL Database

APPLIES TO: Azure SQL Database


Always Encrypted with secure enclaves in Azure SQL Database uses Intel Software Guard Extensions (Intel SGX)
enclaves and requires Microsoft Azure Attestation.

Plan for Intel SGX in Azure SQL Database


Intel SGX is a hardware-based trusted execution environment technology. Intel SGX is available for databases
that use the vCore model and DC-series hardware. Therefore, to ensure you can use Always Encrypted with
secure enclaves in your database, you need to either select the DC-series hardware when you create the
database, or you can update your existing database to use the DC-series hardware.

NOTE
Intel SGX is not available in hardware other than DC-series. For example, Intel SGX is not available for Gen5 hardware, and
it is not available for databases using the DTU model.

IMPORTANT
Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure
you understand its performance limitations. For details, see DC-series.

Plan for attestation in Azure SQL Database


Microsoft Azure Attestation is a solution for attesting Trusted Execution Environments (TEEs), including Intel SGX
enclaves in Azure SQL databases using DC-series hardware.
To use Azure Attestation for attesting Intel SGX enclaves in Azure SQL Database, you need to create an
attestation provider and configure it with the Microsoft-provided attestation policy. See Configure attestation for
Always Encrypted using Azure Attestation

Roles and responsibilities when configuring SGX enclaves and attestation


Configuring your environment to support Intel SGX enclaves and attestation for Always Encrypted in Azure SQL
Database involves setting up components of different types: Microsoft Azure Attestation, Azure SQL Database,
and applications that trigger enclave attestation. Configuring components of each type is performed by users
assuming one of the below distinct roles:
Attestation administrator - creates an attestation provider in Microsoft Azure Attestation, authors the
attestation policy, grants Azure SQL logical server access to the attestation provider, and shares the
attestation URL that points to the policy to application administrators.
Azure SQL Database administrator - enables SGX enclaves in databases by selecting the DC-series hardware,
and provides the attestation administrator with the identity of the Azure SQL logical server that needs to
access the attestation provider.
Application administrator - configures applications with the attestation URL obtained from the attestation
administrator.
In production environments (handling real sensitive data), it is important your organization adheres to role
separation when configuring attestation, where each distinct role is assumed by different people. In particular, if
the goal of deploying Always Encrypted in your organization is to reduce the attack surface area by ensuring
Azure SQL Database administrators cannot access sensitive data, Azure SQL Database administrators should not
control attestation policies.

Next steps
Enable Intel SGX for your Azure SQL database

See also
Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
Enable Intel SGX for Always Encrypted for your
Azure SQL Database

APPLIES TO: Azure SQL Database


Always Encrypted with secure enclaves in Azure SQL Database uses Intel Software Guard Extensions (Intel SGX)
enclaves. For Intel SGX to be available, the database must use the vCore model and DC-series hardware.
Configuring the DC-series hardware to enable Intel SGX enclaves is the responsibility of the Azure SQL Database
administrator. See Roles and responsibilities when configuring SGX enclaves and attestation.

NOTE
Intel SGX is not available in hardware configurations other than DC-series. For example, Intel SGX is not available for Gen5
hardware, and it is not available for databases using the DTU model.

IMPORTANT
Before you configure the DC-series hardware for your database, check the regional availability of DC-series and make sure
you understand its performance limitations. For more information, see DC-series.

For detailed instructions on how to configure a new or existing database to use a specific hardware
configuration, see Hardware configuration.
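
As an illustrative sketch (not the official procedure), the Az.Sql module can move an existing database to
DC-series hardware; the resource names below are placeholders, and passing DC as the compute generation for a
General Purpose vCore database is an assumption to verify against the hardware configuration article.

# Move an existing database to 2 vCores on DC-series hardware (placeholder names; verify the ComputeGeneration value)
Set-AzSqlDatabase -ResourceGroupName "my-resource-group" -ServerName "my-server" -DatabaseName "my-database" -Edition "GeneralPurpose" -ComputeGeneration "DC" -VCore 2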

Next steps
Configure Azure Attestation for your Azure SQL database server

See also
Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
Configure attestation for Always Encrypted using
Azure Attestation

APPLIES TO: Azure SQL Database


Microsoft Azure Attestation is a solution for attesting Trusted Execution Environments (TEEs), including Intel
Software Guard Extensions (Intel SGX) enclaves.
To use Azure Attestation for attesting Intel SGX enclaves used for Always Encrypted with secure enclaves in
Azure SQL Database, you need to:
1. Create an attestation provider and configure it with the recommended attestation policy.
2. Determine the attestation URL and share it with application administrators.

NOTE
Configuring attestation is the responsibility of the attestation administrator. See Roles and responsibilities when
configuring SGX enclaves and attestation.

Create and configure an attestation provider


An attestation provider is a resource in Azure Attestation that evaluates attestation requests against attestation
policies and issues attestation tokens.
Attestation policies are specified using the claim rule grammar.

IMPORTANT
An attestation provider gets created with the default policy for Intel SGX enclaves, which does not validate the code
running inside the enclave. Microsoft strongly advises you set the below recommended policy, and not use the default
policy, for Always Encrypted with secure enclaves.

Microsoft recommends the following policy for attesting Intel SGX enclaves used for Always Encrypted in Azure
SQL Database:

version= 1.0;
authorizationrules
{
[ type=="x-ms-sgx-is-debuggable", value==false ]
&& [ type=="x-ms-sgx-product-id", value==4639 ]
&& [ type=="x-ms-sgx-svn", value>= 0 ]
&& [ type=="x-ms-sgx-mrsigner",
value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
=> permit();
};

The above policy verifies:


The enclave inside Azure SQL Database doesn't support debugging.
Enclaves can be loaded with debugging disabled or enabled. Debugging support is designed to allow
developers to troubleshoot the code running in an enclave. In a production system, debugging could
enable an administrator to examine the content of the enclave, which would reduce the level of
protection the enclave provides. The recommended policy disables debugging to ensure that if a
malicious admin tries to turn on debugging support by taking over the enclave machine, attestation
will fail.

The product ID of the enclave matches the product ID assigned to Always Encrypted with secure enclaves.

Each enclave has a unique product ID that differentiates the enclave from other enclaves. The product
ID assigned to the Always Encrypted enclave is 4639.

The security version number (SVN) of the library is greater than 0.

The SVN allows Microsoft to respond to potential security bugs identified in the enclave code. In case
a security issue is discovered and fixed, Microsoft will deploy a new version of the enclave with a new
(incremented) SVN. The above recommended policy will be updated to reflect the new SVN. By
updating your policy to match the recommended policy you can ensure that if a malicious
administrator tries to load an older and insecure enclave, attestation will fail.

The library in the enclave has been signed using the Microsoft signing key (the value of the x-ms-sgx-
mrsigner claim is the hash of the signing key).

One of the main goals of attestation is to convince clients that the binary running in the enclave is the
binary that is supposed to run. Attestation policies provide two mechanisms for this purpose. One is
the mrenclave claim which is the hash of the binary that is supposed to run in an enclave. The
problem with the mrenclave is that the binary hash changes even with trivial changes to the code,
which makes it hard to rev the code running in the enclave. Hence, we recommend the use of the
mrsigner , which is a hash of a key that is used to sign the enclave binary. When Microsoft revs the
enclave, the mrsigner stays the same as long as the signing key does not change. In this way, it
becomes feasible to deploy updated binaries without breaking customers' applications.

IMPORTANT
Microsoft may need to rotate the key used to sign the Always Encrypted enclave binary, which is expected to be a rare
event. Before a new version of the enclave binary, signed with a new key, is deployed to Azure SQL Database, this article
will be updated to provide a new recommended attestation policy and instructions on how you should update the policy
in your attestation providers to ensure your applications continue to work uninterrupted.

For instructions on how to create an attestation provider and configure it with an attestation policy, see:
Quickstart: Set up Azure Attestation with Azure portal

IMPORTANT
When you configure your attestation policy with Azure portal, set Attestation Type to SGX-IntelSDK .

Quickstart: Set up Azure Attestation with Azure PowerShell


IMPORTANT
When you configure your attestation policy with Azure PowerShell, set the Tee parameter to SgxEnclave .
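
As a hedged sketch of that PowerShell step, the recommended policy text could be applied with Set-AzAttestationPolicy
from the Az.Attestation module; the provider name, resource group, and policy file path are placeholders, and the
exact parameter set (in particular how the policy text is passed) can vary between module versions, so verify it
against the cmdlet reference.

# Read the recommended policy text from a local file (placeholder path) and apply it to the attestation provider
$policy = Get-Content -Path ".\AlwaysEncryptedSgxPolicy.txt" -Raw
Set-AzAttestationPolicy -Name "MyAttestationProvider" -ResourceGroupName "my-resource-group" -Tee SgxEnclave -Policy $policy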

Quickstart: Set up Azure Attestation with Azure CLI

IMPORTANT
When you configure your attestation policy with Azure CLI, set the attestation-type parameter to
SGX-IntelSDK .

Determine the attestation URL for your attestation policy


After you've configured an attestation policy, you need to share the attestation URL with administrators of
applications that use Always Encrypted with secure enclaves in Azure SQL Database. The attestation URL is the
Attest URI of the attestation provider containing the attestation policy, which looks like this:
https://MyAttestationProvider.wus.attest.azure.net .

Use Azure portal to determine the attestation URL


In the Overview pane for your attestation provider, copy the value of the Attest URI property to clipboard.
Use PowerShell to determine the attestation URL
Use the Get-AzAttestation cmdlet to retrieve the attestation provider properties, including AttestURI.

Get-AzAttestation -Name $attestationProviderName -ResourceGroupName $attestationResourceGroupName

For more information, see Create and manage an attestation provider.

Next steps
Manage keys for Always Encrypted with secure enclaves

See also
Tutorial: Getting started with Always Encrypted with secure enclaves in Azure SQL Database
Auditing for Azure SQL Database and Azure
Synapse Analytics

APPLIES TO: Azure SQL Database Azure Synapse Analytics


Auditing for Azure SQL Database and Azure Synapse Analytics tracks database events and writes them to an
audit log in your Azure storage account, Log Analytics workspace, or Event Hubs.
Auditing also:
Helps you maintain regulatory compliance, understand database activity, and gain insight into
discrepancies and anomalies that could indicate business concerns or suspected security violations.
Enables and facilitates adherence to compliance standards, although it doesn't guarantee compliance. For
more information, see the Microsoft Azure Trust Center where you can find the most current list of SQL
Database compliance certifications.

NOTE
For information on Azure SQL Managed Instance auditing, see the following article, Get started with SQL Managed
Instance auditing.

Overview
You can use SQL Database auditing to:
Retain an audit trail of selected events. You can define categories of database actions to be audited.
Report on database activity. You can use pre-configured reports and a dashboard to get started quickly with
activity and event reporting.
Analyze reports. You can find suspicious events, unusual activity, and trends.

IMPORTANT
Auditing for Azure SQL Database, Azure Synapse and Azure SQL Managed Instance is optimized for availability and
performance of the database(s) or instance(s) that are being audited. During periods of very high activity or high network
load, the auditing feature may allow transactions to proceed without recording all of the events marked for auditing.

Auditing limitations
Premium storage with BlockBlobStorage is supported.
Hierarchical namespace for all types of standard storage account and premium storage account
with BlockBlobStorage is supported.
Enabling auditing on a paused Azure Synapse is not supported. To enable auditing, resume Azure Synapse.
Auditing for Azure Synapse SQL pools supports default audit action groups only.
When you configure auditing on your logical server or database with the log destination set to a storage
account, the target storage account must allow access by using storage account keys. If the storage account is
configured to use Azure AD authentication only and is not configured for access key usage, auditing cannot be
configured.
Define server-level vs. database-level auditing policy
An auditing policy can be defined for a specific database or as a default server policy in Azure (which hosts SQL
Database or Azure Synapse):
A server policy applies to all existing and newly created databases on the server.
If server auditing is enabled, it always applies to the database. The database will be audited, regardless of
the database auditing settings.
When auditing policy is defined at the database-level to a Log Analytics workspace or an Event Hub
destination, the following operations will not keep the source database-level auditing policy:
Database copy
Point-in-time restore
Geo-replication (Secondary database will not have database-level auditing)
Enabling auditing on the database, in addition to enabling it on the server, does not override or change
any of the settings of the server auditing. Both audits will exist side by side. In other words, the database
is audited twice in parallel; once by the server policy and once by the database policy.

NOTE
You should avoid enabling both server auditing and database blob auditing together, unless:
You want to use a different storage account, retention period or Log Analytics Workspace for a specific
database.
You want to audit event types or categories for a specific database that differ from the rest of the databases on
the server. For example, you might have table inserts that need to be audited only for a specific database.
Otherwise, we recommend that you enable only server-level auditing and leave the database-level auditing
disabled for all databases.

Remarks
Audit logs are written to Append Blobs in an Azure Blob storage on your Azure subscription
Audit logs are in .xel format and can be opened by using SQL Server Management Studio (SSMS).
To configure an immutable log store for the server or database-level audit events, follow the instructions
provided by Azure Storage. Make sure you have selected Allow additional appends when you configure
the immutable blob storage.
You can write audit logs to an Azure Storage account behind a VNet or firewall. For specific instructions, see
Write audit to a storage account behind VNet and firewall.
For details about the log format, hierarchy of the storage folder and naming conventions, see the Blob Audit
Log Format Reference.
Auditing on Read-Only Replicas is automatically enabled. For further details about the hierarchy of the
storage folders, naming conventions, and log format, see the SQL Database Audit Log Format.
When using Azure AD Authentication, failed logins records will not appear in the SQL audit log. To view failed
login audit records, you need to visit the Azure Active Directory portal, which logs details of these events.
Logins are routed by the gateway to the specific instance where the database is located. With Azure AD
logins, the credentials are verified before attempting to use that user to log in to the requested database. In
the case of failure, the requested database is never accessed, so no auditing occurs. With SQL logins, the
credentials are verified on the requested database, so in this case they can be audited. Successful logins, which
obviously reach the database, are audited in both cases.
After you've configured your auditing settings, you can turn on the new threat detection feature and
configure emails to receive security alerts. When you use threat detection, you receive proactive alerts on
anomalous database activities that can indicate potential security threats. For more information, see Getting
started with threat detection.
After a database with auditing enabled is copied to another Azure SQL logical server, you may receive an
email notifying you that the audit failed. This is a known issue and auditing should work as expected on the
newly copied database.

Set up auditing for your server


The default auditing policy includes all actions and the following set of action groups, which will audit all the
queries and stored procedures executed against the database, as well as successful and failed logins:
BATCH_COMPLETED_GROUP
SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP
FAILED_DATABASE_AUTHENTICATION_GROUP
You can configure auditing for different types of actions and action groups using PowerShell, as described in the
Manage SQL Database auditing using Azure PowerShell section.
Azure SQL Database and Azure Synapse Audit stores 4000 characters of data for character fields in an audit
record. When the statement or the data_sensitivity_information values returned from an auditable action
contain more than 4000 characters, any data beyond the first 4000 characters will be truncated and not
audited . The following section describes the configuration of auditing using the Azure portal.

NOTE
Enabling auditing on a paused dedicated SQL pool is not possible. To enable auditing, un-pause the dedicated SQL
pool. Learn more about dedicated SQL pool.
When auditing is configured to a Log Analytics workspace or to an Event Hub destination via the Azure portal or
PowerShell cmdlet, a Diagnostic Setting is created with "SQLSecurityAuditEvents" category enabled.

1. Go to the Azure portal.


2. Navigate to Auditing under the Security heading in your SQL database or SQL server pane.
3. If you prefer to set up a server auditing policy, you can select the View server settings link on the
database auditing page. You can then view or modify the server auditing settings. Server auditing policies
apply to all existing and newly created databases on this server.
4. If you prefer to enable auditing on the database level, switch Auditing to ON. If server auditing is
enabled, the database-configured audit will exist side-by-side with the server audit.
5. You have multiple options for configuring where audit logs will be written. You can write logs to an Azure
storage account, to a Log Analytics workspace for consumption by Azure Monitor logs, or to event hub
for consumption using event hub. You can configure any combination of these options, and audit logs will
be written to each.

Auditing of Microsoft Support operations


Auditing of Microsoft Support operations for Azure SQL Server allows you to audit Microsoft support
engineers' operations when they need to access your server during a support request. The use of this capability,
along with your auditing, enables more transparency into your workforce and allows for anomaly detection,
trend visualization, and data loss prevention.
To enable auditing of Microsoft Support operations, navigate to Auditing under the Security heading in your
Azure SQL server pane, and switch Enable Auditing of Microsoft support operations to ON.
To review the audit logs of Microsoft Support operations in your Log Analytics workspace, use the following
query:

AzureDiagnostics
| where Category == "DevOpsOperationsAudit"

You have the option of choosing a different storage destination for this auditing log, or of using the same
auditing configuration as your server.
Audit to storage destination
To configure writing audit logs to a storage account, select Storage when you get to the Auditing section.
Select the Azure storage account where logs will be saved, and then select the retention period by opening
Advanced properties. Then click Save. Logs older than the retention period are deleted.

NOTE
If you are deploying from the Azure portal, be sure that the storage account is in the same region as your database and
server. If you are deploying through other methods, the storage account can be in any region.

The default value for retention period is 0 (unlimited retention). You can change this value by moving the
Retention (Days) slider in Advanced properties when configuring the storage account for auditing.
If you change retention period from 0 (unlimited retention) to any other value, please note that
retention will only apply to logs written after retention value was changed (logs written during the
period when retention was set to unlimited are preserved, even after retention is enabled).
Audit to Log Analytics destination
To configure writing audit logs to a Log Analytics workspace, select Log Analytics and open Log Analytics
details . Select the Log Analytics workspace where logs will be written and then click OK . If you have not created
a Log Analytics workspace, see Create a Log Analytics workspace in the Azure portal.
For more details about Azure Monitor Log Analytics workspace, see Designing your Azure Monitor Logs
deployment
Audit to Event Hub destination
To configure writing audit logs to an event hub, select Event Hub . Select the event hub where logs will be
written and then click Save . Be sure that the event hub is in the same region as your database and server.
Analyze audit logs and reports
If you chose to write audit logs to Log Analytics:
Use the Azure portal. Open the relevant database. At the top of the database's Auditing page, select
View audit logs .

Then, you have two ways to view the logs:


Clicking on Log Analytics at the top of the Audit records page will open the Logs view in Log Analytics
workspace, where you can customize the time range and the search query.
Clicking View dashboard at the top of the Audit records page will open a dashboard displaying audit
logs info, where you can drill down into Security Insights, Access to Sensitive Data and more. This
dashboard is designed to help you gain security insights for your data. You can also customize the time
range and search query.

Alternatively, you can also access the audit logs from Log Analytics blade. Open your Log Analytics
workspace and under General section, click Logs . You can start with a simple query, such as: search
"SQLSecurityAuditEvents" to view the audit logs. From here, you can also use Azure Monitor logs to run
advanced searches on your audit log data. Azure Monitor logs gives you real-time operational insights
using integrated search and custom dashboards to readily analyze millions of records across all your
workloads and servers. For additional useful information about Azure Monitor logs search language and
commands, see Azure Monitor logs search reference.
If you chose to write audit logs to Event Hub:
To consume audit logs data from Event Hub, you will need to set up a stream to consume events and write
them to a target. For more information, see Azure Event Hubs Documentation.
Audit logs in Event Hub are captured in the body of Apache Avro events and stored using JSON formatting
with UTF-8 encoding. To read the audit logs, you can use Avro Tools or similar tools that process this format.
If you chose to write audit logs to an Azure storage account, there are several methods you can use to view the
logs:
Audit logs are aggregated in the account you chose during setup. You can explore audit logs by using a
tool such as Azure Storage Explorer. In Azure storage, auditing logs are saved as a collection of blob files
within a container named sqldbauditlogs . For further details about the hierarchy of the storage folders,
naming conventions, and log format, see the SQL Database Audit Log Format.
Use the Azure portal. Open the relevant database. At the top of the database's Auditing page, click View
audit logs .

Audit records opens, from which you'll be able to view the logs.
You can view specific dates by clicking Filter at the top of the Audit records page.
You can switch between audit records that were created by the server audit policy and the
database audit policy by toggling Audit Source .

Use the system function sys.fn_get_audit_file (T-SQL) to return the audit log data in tabular format. For
more information on using this function, see sys.fn_get_audit_file.
Use Merge Audit Files in SQL Server Management Studio (starting with SSMS 17):
1. From the SSMS menu, select File > Open > Merge Audit Files .
2. The Add Audit Files dialog box opens. Select one of the Add options to choose whether to
merge audit files from a local disk or import them from Azure Storage. You are required to provide
your Azure Storage details and account key.
3. After all files to merge have been added, click OK to complete the merge operation.
4. The merged file opens in SSMS, where you can view and analyze it, as well as export it to an XEL or
CSV file, or to a table.
Use Power BI. You can view and analyze audit log data in Power BI. For more information and to access a
downloadable template, see Analyze audit log data in Power BI.
Download log files from your Azure Storage blob container via the portal or by using a tool such as Azure
Storage Explorer (a PowerShell sketch for downloading the log files appears after this list).
After you have downloaded a log file locally, double-click the file to open, view, and analyze the logs in
SSMS.
You can also download multiple files simultaneously via Azure Storage Explorer. To do so, right-click a
specific subfolder and select Save as to save in a local folder.
Additional methods:
After downloading several files or a subfolder that contains log files, you can merge them locally as
described in the SSMS Merge Audit Files instructions described previously.
View blob auditing logs programmatically: Query Extended Events Files by using PowerShell.
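
The following is a minimal sketch of downloading the audit files with the Az.Storage module as an alternative to
Azure Storage Explorer; the storage account name, key, and local folder are placeholders, and the sqldbauditlogs
container name comes from the naming conventions described for blob auditing.

# Build a storage context for the audit storage account (placeholder name and key)
$context = New-AzStorageContext -StorageAccountName "myauditstorage" -StorageAccountKey "<storage-account-key>"

# List the .xel audit files and download them to a local folder
Get-AzStorageBlob -Container "sqldbauditlogs" -Context $context |
    ForEach-Object { Get-AzStorageBlobContent -Blob $_.Name -Container "sqldbauditlogs" -Destination "C:\AuditLogs\" -Context $context }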

Production practices
Auditing geo -replicated databases
With geo-replicated databases, when you enable auditing on the primary database the secondary database will
have an identical auditing policy. It is also possible to set up auditing on the secondary database by enabling
auditing on the secondary server, independently from the primary database.
Server-level (recommended): Turn on auditing on both the primary server as well as the secondary
server - the primary and secondary databases will each be audited independently based on their respective
server-level policy.
Database-level: Database-level auditing for secondary databases can only be configured from Primary
database auditing settings.
Auditing must be enabled on the primary database itself, not the server.
After auditing is enabled on the primary database, it will also become enabled on the secondary
database.
IMPORTANT
With database-level auditing, the storage settings for the secondary database will be identical to those of
the primary database, causing cross-regional traffic. We recommend that you enable only server-level
auditing, and leave the database-level auditing disabled for all databases.

Storage key regeneration


In production, you are likely to refresh your storage keys periodically. When writing audit logs to Azure storage,
you need to resave your auditing policy when refreshing your keys. The process is as follows:
1. Open Advanced properties under Storage. In the Storage Access Key box, select Secondary. Then
click Save at the top of the auditing configuration page.

2. Go to the storage configuration page and regenerate the primary access key.
3. Go back to the auditing configuration page, switch the storage access key from secondary to primary,
and then click OK . Then click Save at the top of the auditing configuration page.
4. Go back to the storage configuration page and regenerate the secondary access key (in preparation for
the next key's refresh cycle).

Manage Azure SQL Database auditing


Using Azure PowerShell
PowerShell cmdlets (including WHERE clause support for additional filtering):
Create or Update Database Auditing Policy (Set-AzSqlDatabaseAudit)
Create or Update Server Auditing Policy (Set-AzSqlServerAudit)
Get Database Auditing Policy (Get-AzSqlDatabaseAudit)
Get Server Auditing Policy (Get-AzSqlServerAudit)
Remove Database Auditing Policy (Remove-AzSqlDatabaseAudit)
Remove Server Auditing Policy (Remove-AzSqlServerAudit)
For a script example, see Configure auditing and threat detection using PowerShell.
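
For example, a minimal sketch of enabling server-level auditing with these cmdlets; the server, resource group,
storage account, and workspace resource IDs are placeholders, and the 90-day retention is only for illustration.

# Enable server auditing to a storage account with a 90-day retention period (placeholder resource IDs)
Set-AzSqlServerAudit -ResourceGroupName "my-resource-group" -ServerName "my-server" -BlobStorageTargetState Enabled -StorageAccountResourceId "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Storage/storageAccounts/myauditstorage" -RetentionInDays 90

# Optionally also send audit events to a Log Analytics workspace
Set-AzSqlServerAudit -ResourceGroupName "my-resource-group" -ServerName "my-server" -LogAnalyticsTargetState Enabled -WorkspaceResourceId "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/myworkspace"

# Review the effective server auditing policy
Get-AzSqlServerAudit -ResourceGroupName "my-resource-group" -ServerName "my-server"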
Using REST API
REST API :
Create or Update Database Auditing Policy
Create or Update Server Auditing Policy
Get Database Auditing Policy
Get Server Auditing Policy
Extended policy with WHERE clause support for additional filtering:
Create or Update Database Extended Auditing Policy
Create or Update Server Extended Auditing Policy
Get Database Extended Auditing Policy
Get Server Extended Auditing Policy
Using Azure CLI
Manage a server's auditing policy
Manage a database's auditing policy
Using Azure Resource Manager templates
You can manage Azure SQL Database auditing using Azure Resource Manager templates, as shown in these
examples:
Deploy an Azure SQL Database with Auditing enabled to write audit logs to Azure Blob storage account
Deploy an Azure SQL Database with Auditing enabled to write audit logs to Log Analytics
Deploy an Azure SQL Database with Auditing enabled to write audit logs to Event Hubs

NOTE
The linked samples are on an external public repository and are provided 'as is', without warranty, and are not supported
under any Microsoft support program/service.

See also
Data Exposed episode What's New in Azure SQL Auditing on Channel 9.
Auditing for SQL Managed Instance
Auditing for SQL Server
SQL Database audit log format

APPLIES TO: Azure SQL Database Azure SQL Managed Instance Azure Synapse Analytics
Azure SQL Database auditing tracks database events and writes them to an audit log in your Azure storage
account, or sends them to Event Hub or Log Analytics for downstream processing and analysis.

Naming conventions
Blob audit
Audit logs stored in Azure Blob storage are stored in a container named sqldbauditlogs in the Azure storage
account. The directory hierarchy within the container is of the form
<ServerName>/<DatabaseName>/<AuditName>/<Date>/ . The Blob file name format is
<CreationTime>_<FileNumberInSession>.xel , where CreationTime is in UTC hh_mm_ss_ms format, and
FileNumberInSession is a running index in case session logs spans across multiple Blob files.

For example, for database Database1 on Server1 the following is a possible valid path:
Server1/Database1/SqlDbAuditing_ServerAudit_NoRetention/2019-02-03/12_23_30_794_0.xel

Audit logs for read-only replicas are stored in the same container. The directory hierarchy within the container is of
the form <ServerName>/<DatabaseName>/<AuditName>/<Date>/RO/ . The Blob file name shares the same format.
Event Hub
Audit events are written to the namespace and event hub that was defined during auditing configuration, and
are captured in the body of Apache Avro events and stored using JSON formatting with UTF-8 encoding. To read
the audit logs, you can use Avro Tools or similar tools that process this format.
Log Analytics
Audit events are written to Log Analytics workspace defined during auditing configuration, to the
AzureDiagnostics table with the category SQLSecurityAuditEvents . For additional useful information about Log
Analytics search language and commands, see Log Analytics search reference.

Audit log fields


Each field is listed as: name in Blob audit logs | name in Event Hubs/Log Analytics | Blob type | Event Hubs/Log Analytics type, followed by its description.

action_id | action_id_s | varchar(4) | string
    ID of the action
action_name | action_name_s | N/A | string
    Name of the action
additional_information | additional_information_s | nvarchar(4000) | string
    Any additional information about the event, stored as XML
affected_rows | affected_rows_d | bigint | int
    Number of rows affected by the query
application_name | application_name_s | nvarchar(128) | string
    Name of client application
audit_schema_version | audit_schema_version_d | int | int
    Always 1
class_type | class_type_s | varchar(2) | string
    Type of auditable entity that the audit occurs on
class_type_desc | class_type_description_s | N/A | string
    Description of auditable entity that the audit occurs on
client_ip | client_ip_s | nvarchar(128) | string
    Source IP of the client application
connection_id | N/A | GUID | N/A
    ID of the connection in the server
data_sensitivity_information | data_sensitivity_information_s | nvarchar(4000) | string
    Information types and sensitivity labels returned by the audited query, based on the classified columns in the database. Learn more about Azure SQL Database data discovery and classification
database_name | database_name_s | sysname | string
    The database context in which the action occurred
database_principal_id | database_principal_id_d | int | int
    ID of the database user context that the action is performed in
database_principal_name | database_principal_name_s | sysname | string
    Name of the database user context in which the action is performed
duration_milliseconds | duration_milliseconds_d | bigint | int
    Query execution duration in milliseconds
event_time | event_time_t | datetime2 | datetime
    Date and time when the auditable action is fired
host_name | N/A | string | N/A
    Client host name
is_column_permission | is_column_permission_s | bit | string
    Flag indicating if this is a column-level permission. 1 = true, 0 = false
N/A | is_server_level_audit_s | N/A | string
    Flag indicating if this audit is at the server level
object_id | object_id_d | int | int
    The ID of the entity on which the audit occurred. This includes server objects, databases, database objects, and schema objects. 0 if the entity is the server itself or if the audit is not performed at an object level
object_name | object_name_s | sysname | string
    The name of the entity on which the audit occurred. This includes server objects, databases, database objects, and schema objects. 0 if the entity is the server itself or if the audit is not performed at an object level
permission_bitmask | permission_bitmask_s | varbinary(16) | string
    When applicable, shows the permissions that were granted, denied, or revoked
response_rows | response_rows_d | bigint | int
    Number of rows returned in the result set
schema_name | schema_name_s | sysname | string
    The schema context in which the action occurred. NULL for audits occurring outside a schema
N/A | securable_class_type_s | N/A | string
    Securable object that maps to the class_type being audited
sequence_group_id | sequence_group_id_g | varbinary | GUID
    Unique identifier
sequence_number | sequence_number_d | int | int
    Tracks the sequence of records within a single audit record that was too large to fit in the write buffer for audits
server_instance_name | server_instance_name_s | sysname | string
    Name of the server instance where the audit occurred
server_principal_id | server_principal_id_d | int | int
    ID of the login context in which the action is performed
server_principal_name | server_principal_name_s | sysname | string
    Current login
server_principal_sid | server_principal_sid_s | varbinary | string
    Current login SID
session_id | session_id_d | smallint | int
    ID of the session on which the event occurred
session_server_principal_name | session_server_principal_name_s | sysname | string
    Server principal for session
statement | statement_s | nvarchar(4000) | string
    T-SQL statement that was executed (if any)
succeeded | succeeded_s | bit | string
    Indicates whether the action that triggered the event succeeded. For events other than login and batch, this only reports whether the permission check succeeded or failed, not the operation. 1 = success, 0 = fail
target_database_principal_id | target_database_principal_id_d | int | int
    The database principal the GRANT/DENY/REVOKE operation is performed on. 0 if not applicable
target_database_principal_name | target_database_principal_name_s | string | string
    Target user of action. NULL if not applicable
target_server_principal_id | target_server_principal_id_d | int | int
    Server principal that the GRANT/DENY/REVOKE operation is performed on. Returns 0 if not applicable
target_server_principal_name | target_server_principal_name_s | sysname | string
    Target login of action. NULL if not applicable
target_server_principal_sid | target_server_principal_sid_s | varbinary | string
    SID of target login. NULL if not applicable
transaction_id | transaction_id_d | bigint | int
    SQL Server only (starting with 2016) - 0 for Azure SQL Database
user_defined_event_id | user_defined_event_id_d | smallint | int
    User defined event ID passed as an argument to sp_audit_write. NULL for system events (default) and non-zero for user-defined event. For more information, see sp_audit_write (Transact-SQL)
user_defined_information | user_defined_information_s | nvarchar(4000) | string
    User defined information passed as an argument to sp_audit_write. NULL for system events (default) and non-zero for user-defined event. For more information, see sp_audit_write (Transact-SQL)
Next steps
Learn more about Azure SQL Database auditing.
DNS alias for Azure SQL Database

APPLIES TO: Azure SQL Database Azure Synapse Analytics


Azure SQL Database has a Domain Name System (DNS) server. PowerShell and REST APIs accept calls to create
and manage DNS aliases for your logical SQL server name.
A DNS alias can be used in place of the server name. Client programs can use the alias in their connection
strings. The DNS alias provides a translation layer that can redirect your client programs to different servers.
This layer spares you the difficulties of having to find and edit all the clients and their connection strings.

NOTE
In Azure Synapse Analytics, the Azure SQL logical server DNS alias is only supported for dedicated SQL Pool (formerly
DW). For dedicated SQL pools in Azure Synapse workspaces, the DNS alias is not currently supported.

Common uses for a DNS alias include the following cases:


Create an easy-to-remember name for a server.
During initial development, your alias can refer to a test server. When the application goes live, you can
modify the alias to refer to the production server. The transition from test to production does not require any
modification to the configurations of the clients that connect to the server.
Suppose the only database in your application is moved to another server. You can modify the alias without
having to modify the configurations of several clients.
During a regional outage, you can use geo-restore to recover your database on a different server and in a
different region. You can modify your existing alias to point to the new server so that the existing client
application can reconnect to it.

Domain Name System (DNS) of the Internet


The Internet relies on the DNS. The DNS translates your friendly names into the name of your server.

Scenarios with one DNS alias


Suppose you need to switch your system to a new server. In the past you needed to find and update every
connection string in every client program. But now, if the connection strings use a DNS alias, only an alias
property must be updated.
The DNS alias feature of Azure SQL Database can help in the following scenarios:
Test to production
When you start developing the client programs, have them use a DNS alias in their connection strings. You
make the properties of the alias point to a test version of your server.
Later when the new system goes live in production, you can update the properties of the alias to point to the
production server. No change to the client programs is necessary.
Cross-region support
A disaster recovery might shift your server to a different geographic region. For a system that was using a DNS
alias, the need to find and update all the connection strings for all clients can be avoided. Instead, you can
update an alias to refer to the new server that now hosts your Azure SQL Database.

Properties of a DNS alias


The following properties apply to each DNS alias for your server:
Unique name: Each alias name you create is unique across all servers, just as server names are.
Server is required: A DNS alias cannot be created unless it references exactly one server, and the server must
already exist. An updated alias must always reference exactly one existing server.
When you drop a server, the Azure system also drops all DNS aliases that refer to the server.
Not bound to any region: DNS aliases are not bound to a region. Any DNS aliases can be updated to refer to
a server that resides in any geographic region.
However, when updating an alias to refer to another server, both servers must exist in the same Azure
subscription.
Permissions: To manage a DNS alias, the user must have Server Contributor permissions, or higher. For more
information, see Get started with Azure role-based access control in the Azure portal.

Manage your DNS aliases


Both REST APIs and PowerShell cmdlets are available to enable you to programmatically manage your DNS
aliases.
REST APIs for managing your DNS aliases
The documentation for the REST APIs is available near the following web location:
Azure SQL Database REST API
Also, the REST APIs can be seen in GitHub at:
Azure SQL Database DNS alias REST APIs

PowerShell for managing your DNS aliases

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.

PowerShell cmdlets are available that call the REST APIs.


A code example of PowerShell cmdlets being used to manage DNS aliases is documented at:
PowerShell for DNS Alias to Azure SQL Database
The cmdlets used in the code example are the following:
New-AzSqlServerDnsAlias: Creates a new DNS alias in the Azure SQL Database service. The alias
refers to server 1.
Get-AzSqlServerDnsAlias: Gets and lists all the DNS aliases that are assigned to server 1.
Set-AzSqlServerDnsAlias: Modifies the server name that the alias is configured to refer to, from server 1 to
server 2.
Remove-AzSqlServerDnsAlias: Removes the DNS alias from server 2, by using the name of the alias.
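As an illustration, here is a minimal sketch of that sequence. The resource group, server, and alias names are placeholders, and the Set-AzSqlServerDnsAlias parameter set shown here should be verified against the Az.Sql reference for your module version.

# Create an alias that points to server-1.
New-AzSqlServerDnsAlias -ResourceGroupName "rg-sql" -ServerName "server-1" -Name "my-sql-alias"

# List the aliases assigned to server-1.
Get-AzSqlServerDnsAlias -ResourceGroupName "rg-sql" -ServerName "server-1"

# Repoint the alias from server-1 to server-2 (both servers are in the same subscription).
Set-AzSqlServerDnsAlias -ResourceGroupName "rg-sql" -TargetServerName "server-2" -Name "my-sql-alias" `
    -SourceServerResourceGroupName "rg-sql" -SourceServerName "server-1" `
    -SourceServerSubscriptionId (Get-AzContext).Subscription.Id

# Remove the alias from server-2, by alias name.
Remove-AzSqlServerDnsAlias -ResourceGroupName "rg-sql" -ServerName "server-2" -Name "my-sql-alias"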

Limitations
Presently, a DNS alias has the following limitations:
Delay of up to 2 minutes: It takes up to 2 minutes for a DNS alias to be updated or removed.
Regardless of any brief delay, the alias immediately stops referring client connections to the legacy
server.
DNS lookup: For now, the only authoritative way to check what server a given DNS alias refers to is by
performing a DNS lookup (see the sketch after this list).
Table auditing is not supported: You cannot use a DNS alias on a server that has table auditing enabled on a
database.
Table auditing is deprecated.
We recommend that you move to Blob Auditing.
DNS alias is subject to naming restrictions.
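On a Windows client, a minimal sketch of such a lookup (the alias name is a placeholder):

# Resolve the alias FQDN to see which server it currently points to.
Resolve-DnsName -Name "my-sql-alias.database.windows.net" -Type CNAME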

Related resources
Overview of business continuity with Azure SQL Database, including disaster recovery.
Azure REST API reference
Server Dns Aliases API

Next steps
PowerShell for DNS Alias to Azure SQL Database
Azure SQL Database and Azure Synapse Analytics
network access controls
7/12/2022 • 6 minutes to read

When you create a logical SQL server from the Azure portal for Azure SQL Database and Azure Synapse
Analytics, the result is a public endpoint in the format, yourservername.database.windows.net.
You can use the following network access controls to selectively allow access to a database via the public
endpoint:
Allow Azure Services: When set to ON, other resources within the Azure boundary, for example an Azure
Virtual Machine, can access SQL Database
IP firewall rules: Use this feature to explicitly allow connections from a specific IP address, for example from
on-premises machines
You can also allow private access to the database from virtual networks via:
Virtual network firewall rules: Use this feature to allow traffic from a specific virtual network within the Azure
boundary
Private Link: Use this feature to create a private endpoint for logical SQL server within a specific virtual
network

IMPORTANT
This article does not apply to SQL Managed Instance. For more information about the networking configuration, see
connecting to Azure SQL Managed Instance.

See the below video for a high-level explanation of these access controls and what they do:

Allow Azure services


By default, during creation of a new logical SQL server from the Azure portal, this setting is set to OFF. This
setting appears when connectivity is allowed through the public service endpoint.
You can also change this setting via the firewall pane after the logical SQL server is created as follows.

When set to ON, your server allows communications from all resources inside the Azure boundary, whether or not
they are part of your subscription.
In many cases, the ON setting is more permissive than what most customers want. You may want to set this
setting to OFF and replace it with more restrictive IP firewall rules or virtual network firewall rules.
However, doing so affects the following features that run on virtual machines in Azure that aren't part of your
virtual network and hence connect to the database via an Azure IP address:
Import Export Service
The Import Export Service doesn't work when Allow access to Azure services is set to OFF. However, you can
work around the problem by manually running sqlpackage.exe from an Azure VM or performing the export
directly in your code by using the DACFx API.
Data Sync
To use the Data Sync feature with Allow access to Azure services set to OFF, you need to create individual
firewall rule entries to add IP addresses from the Sql service tag for the region hosting the Hub database. Add
these server-level firewall rules to the servers hosting both the Hub and Member databases (which may be in
different regions).
Use the following PowerShell script to generate the IP addresses corresponding to the SQL service tag for the
West US region:

PS C:\> $serviceTags = Get-AzNetworkServiceTag -Location eastus2


PS C:\> $sql = $serviceTags.Values | Where-Object { $_.Name -eq "Sql.WestUS" }
PS C:\> $sql.Properties.AddressPrefixes.Count
70
PS C:\> $sql.Properties.AddressPrefixes
13.86.216.0/25
13.86.216.128/26
13.86.216.192/27
13.86.217.0/25
13.86.217.128/26
13.86.217.192/27

TIP
Get-AzNetworkServiceTag returns the global range for the SQL service tag despite specifying the Location parameter. Be sure
to filter it to the region that hosts the Hub database used by your sync group.

The output of the PowerShell script is in Classless Inter-Domain Routing (CIDR) notation. This needs to
be converted to a start and end IP address format by using Get-IPrangeStartEnd.ps1, like this:

PS C:\> Get-IPrangeStartEnd -ip 52.229.17.93 -cidr 26


start end
----- ---
52.229.17.64 52.229.17.127

You can use this additional PowerShell script to convert all the IP addresses from CIDR to Start and End IP
address format.
PS C:\> foreach( $i in $sql.Properties.AddressPrefixes) { $ip,$cidr = $i.split('/'); Get-IPrangeStartEnd -ip $ip -cidr $cidr; }

start         end
-----         ---
13.86.216.0   13.86.216.127
13.86.216.128 13.86.216.191
13.86.216.192 13.86.216.223

You can now add these ranges as distinct server-level firewall rules, and then set Allow Azure services to access server to OFF.
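A minimal sketch of that last step, assuming the $sql variable and the Get-IPrangeStartEnd helper from the scripts above, and placeholder resource group and server names:

# Add one server-level firewall rule per converted start/end range on the server hosting the Hub database.
$i = 0
foreach ($prefix in $sql.Properties.AddressPrefixes) {
    $ip, $cidr = $prefix.split('/')
    $range = Get-IPrangeStartEnd -ip $ip -cidr $cidr
    New-AzSqlServerFirewallRule -ResourceGroupName "rg-sql" -ServerName "hub-server" `
        -FirewallRuleName ("SqlServiceTag-{0:d3}" -f $i) `
        -StartIpAddress $range.start -EndIpAddress $range.end
    $i++
}

Repeat the same loop for the servers that host the Member databases.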

IP firewall rules
The IP-based firewall is a feature of the logical SQL server in Azure that blocks all access to your server until you
explicitly add the IP addresses of the client machines.
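For example, a minimal sketch with placeholder names that allows a single on-premises client address and then lists the rules in effect:

# Allow one client IP address (start and end are the same for a single address).
New-AzSqlServerFirewallRule -ResourceGroupName "rg-sql" -ServerName "my-server" `
    -FirewallRuleName "OnPremClient01" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"

# List all server-level IP firewall rules.
Get-AzSqlServerFirewallRule -ResourceGroupName "rg-sql" -ServerName "my-server"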

Virtual network firewall rules


In addition to IP rules, the server firewall allows you to define virtual network rules.
To learn more, see Virtual network service endpoints and rules for Azure SQL Database or watch this video:

Azure Networking terminology


Be aware of the following Azure networking terms as you explore virtual network firewall rules:
Virtual network: You can have virtual networks associated with your Azure subscription.
Subnet: A virtual network contains subnets. Any Azure virtual machines (VMs) that you have are assigned to
subnets. One subnet can contain multiple VMs or other compute nodes. Compute nodes that are outside of your
virtual network can't access your virtual network unless you configure your security to allow access.
Virtual network service endpoint: A virtual network service endpoint is a subnet whose property values
include one or more formal Azure service type names. In this article we're interested in the type name of
Microsoft.Sql, which refers to the Azure service named SQL Database.
Virtual network rule: A virtual network rule for your server is a subnet that is listed in the access control list
(ACL) of your server. To be in the ACL for your database in SQL Database, the subnet must contain the
Microsoft.Sql type name. A virtual network rule tells your server to accept communications from every node
that is on the subnet.

IP vs. Virtual network firewall rules


The Azure SQL Database firewall allows you to specify IP address ranges from which communications are
accepted into SQL Database. This approach is fine for stable IP addresses that are outside the Azure private
network. However, virtual machines (VMs) within the Azure private network are configured with dynamic IP
addresses. Dynamic IP addresses can change when your VM is restarted, which in turn invalidates the IP-based
firewall rule. In a production environment, it would be unwise to specify a dynamic IP address in a firewall rule.
You can work around this limitation by obtaining a static IP address for your VM. For details, see Create a virtual
machine with a static public IP address using the Azure portal. However, the static IP approach can become
difficult to manage, and it's costly when done at scale.
Virtual network rules are an easier alternative for establishing and managing access from a specific subnet that
contains your VMs.
NOTE
You cannot yet have SQL Database on a subnet. If your server was a node on a subnet in your virtual network, all nodes
within the virtual network could communicate with your SQL Database. In this case, your VMs could communicate with
SQL Database without needing any virtual network rules or IP rules.

Private Link
Private Link allows you to connect to a server via a private endpoint. A private endpoint is a private IP address
within a specific virtual network and subnet.

Next steps
For a quickstart on creating a server-level IP firewall rule, see Create a database in SQL Database.
For a quickstart on creating a server-level virtual network firewall rule, see Virtual Network service
endpoints and rules for Azure SQL Database.
For help with connecting to a database in SQL Database from open source or third-party applications, see
Client quickstart code samples to SQL Database.
For information on additional ports that you may need to open, see the SQL Database: Outside vs
inside section of Ports beyond 1433 for ADO.NET 4.5 and SQL Database
For an overview of Azure SQL Database Connectivity, see Azure SQL Connectivity Architecture
For an overview of Azure SQL Database security, see Securing your database
Outbound firewall rules for Azure SQL Database
and Azure Synapse Analytics
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW)
only)
Outbound firewall rules limit network traffic from the Azure SQL logical server to a customer defined list of
Azure Storage accounts and Azure SQL logical servers. Any attempt to access storage accounts or databases not
in this list is denied. The following Azure SQL Database features support outbound firewall rules:
Auditing
Vulnerability assessment
Import/Export service
OPENROWSET
Bulk Insert
Elastic query

IMPORTANT
This article applies to both Azure SQL Database and dedicated SQL pool (formerly SQL DW) in Azure Synapse
Analytics. These settings apply to all SQL Database and dedicated SQL pool (formerly SQL DW) databases associated
with the server. For simplicity, the term 'database' refers to both databases in Azure SQL Database and Azure Synapse
Analytics. Likewise, any reference to 'server' refers to the logical SQL server that hosts Azure SQL Database and
dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. This article does not apply to Azure SQL Managed
Instance or dedicated SQL pools in Azure Synapse Analytics workspaces.
Outbound firewall rules are defined at the logical server. Geo-replication and Auto-failover groups require the same
set of rules to be defined on the primary and all secondaries.

Set outbound firewall rules in the Azure portal


1. Browse to the Outbound networking section in the Firewalls and virtual networks blade for your
Azure SQL Database and select Configure outbound networking restrictions.

This will open up the following blade on the right-hand side:


2. Select the check box titled Restrict outbound networking and then add the FQDN for the Storage
accounts (or SQL Databases) using the Add domain button.

3. After you're done, you should see a screen similar to the one below. Select OK to apply these settings.

Set outbound firewall rules using PowerShell


IMPORTANT
Azure SQL Database still supports the PowerShell Azure Resource Manager module, but all future development is for the
Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical. The following script requires the Azure PowerShell module.

The following PowerShell script shows how to change the outbound networking setting (using the
RestrictOutboundNetworkAccess property):

# Get current settings for Outbound Networking
(Get-AzSqlServer -ServerName <SqlServerName> -ResourceGroupName <ResourceGroupName>).RestrictOutboundNetworkAccess

# Update setting for Outbound Networking
$SecureString = ConvertTo-SecureString "<ServerAdminPassword>" -AsPlainText -Force
Set-AzSqlServer -ServerName <SqlServerName> -ResourceGroupName <ResourceGroupName> -SqlAdministratorPassword $SecureString -RestrictOutboundNetworkAccess "Enabled"

Use these PowerShell cmdlets to configure outbound firewall rules

# List all Outbound Firewall Rules
Get-AzSqlServerOutboundFirewallRule -ServerName <SqlServerName> -ResourceGroupName <ResourceGroupName>

# Add an Outbound Firewall Rule
New-AzSqlServerOutboundFirewallRule -ServerName <SqlServerName> -ResourceGroupName <ResourceGroupName> -AllowedFQDN testOBFR1

# List a specific Outbound Firewall Rule
Get-AzSqlServerOutboundFirewallRule -ServerName <SqlServerName> -ResourceGroupName <ResourceGroupName> -AllowedFQDN <StorageAccountFQDN>

# Delete an Outbound Firewall Rule
Remove-AzSqlServerOutboundFirewallRule -ServerName <SqlServerName> -ResourceGroupName <ResourceGroupName> -AllowedFQDN <StorageAccountFQDN>

Set outbound firewall rules using the Azure CLI


IMPORTANT
All scripts in this section require the Azure CLI.

Azure CLI in a bash shell


The following CLI script shows how to change the outbound networking setting (using the
RestrictOutboundNetworkAccess property) in a bash shell:

# Get current setting for Outbound Networking


az sql server show -n sql-server-name -g sql-server-group --query "RestrictOutboundNetworkAccess"

# Update setting for Outbound Networking


az sql server update -n sql-server-name -g sql-server-group --set RestrictOutboundNetworkAccess="Enabled"

Use these CLI commands to configure outbound firewall rules

# List a server's outbound firewall rules
az sql server outbound-firewall-rule list -g sql-server-group -s sql-server-name

# Create a new outbound firewall rule
az sql server outbound-firewall-rule create -g sql-server-group -s sql-server-name --outbound-rule-fqdn allowedFQDN

# Show the details for an outbound firewall rule
az sql server outbound-firewall-rule show -g sql-server-group -s sql-server-name --outbound-rule-fqdn allowedFQDN

# Delete the outbound firewall rule
az sql server outbound-firewall-rule delete -g sql-server-group -s sql-server-name --outbound-rule-fqdn allowedFQDN

Next steps
For an overview of Azure SQL Database security, see Securing your database.
For an overview of Azure SQL Database connectivity, see Azure SQL Connectivity Architecture.
Learn more about Azure SQL Database and Azure Synapse Analytics network access controls.
Learn about Azure Private Link for Azure SQL Database and Azure Synapse Analytics.
Azure Private Link for Azure SQL Database and
Azure Synapse Analytics
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW)
only)
Private Link allows you to connect to various PaaS services in Azure via a private endpoint. For a list of PaaS
services that support Private Link functionality, go to the Private Link Documentation page. A private endpoint is
a private IP address within a specific VNet and subnet.

IMPORTANT
This article applies to both Azure SQL Database and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics.
These settings apply to all SQL Database and dedicated SQL pool (formerly SQL DW) databases associated with the
server. For simplicity, the term 'database' refers to both databases in Azure SQL Database and Azure Synapse Analytics.
Likewise, any reference to 'server' refers to the logical server that hosts Azure SQL Database and dedicated SQL
pool (formerly SQL DW) in Azure Synapse Analytics. This article does not apply to Azure SQL Managed Instance or
dedicated SQL pools in Azure Synapse Analytics workspaces.

How to set up Private Link


Creation process
Private endpoints can be created by using the Azure portal, PowerShell, or the Azure CLI. A minimal PowerShell sketch follows.
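The sketch below uses placeholder resource names, assumes that the virtual network and subnet already exist, and omits the private DNS zone configuration (privatelink.database.windows.net) that usually accompanies a private endpoint.

# Look up the target server and the subnet that will host the private endpoint.
$server = Get-AzSqlServer -ResourceGroupName "rg-sql" -ServerName "my-server"
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "rg-sql" -Name "vnet-app"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-pe"

# Define the connection to the SQL server group ID and create the private endpoint.
$connection = New-AzPrivateLinkServiceConnection -Name "sql-plsc" `
    -PrivateLinkServiceId $server.ResourceId -GroupId "sqlServer"
New-AzPrivateEndpoint -ResourceGroupName "rg-sql" -Name "pe-sql" -Location "westus" `
    -Subnet $subnet -PrivateLinkServiceConnection $connection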
Approval process
Once the network admin creates the Private Endpoint (PE), the SQL admin can manage the Private Endpoint
Connection (PEC) to SQL Database.
1. Navigate to the server resource in the Azure portal and select Private endpoint connections in the left pane.
The pane lists all Private Endpoint Connections (PECs) and the corresponding private endpoints (PEs) that were created.
2. Select an individual PEC from the list.

3. The SQL admin can choose to approve or reject a PEC and optionally add a short text response.

4. After approval or rejection, the list will reflect the appropriate state along with the response text.

5. Finally, select the private endpoint name to open the network interface details, which show the private IP address of the private endpoint.

IMPORTANT
When you add a private endpoint connection, public routing to your Azure SQL logical server isn't blocked by default. In
the Firewall and virtual networks pane, the setting Deny public network access is not selected by default. To
disable public network access, ensure that you select Deny public network access .

Disable public access to your Azure SQL logical server


For this scenario, assume you want to disable all public access to your Azure SQL logical server and allow
connections only from your virtual network.
First, ensure that your private endpoint connections are enabled and configured. Then, to disable public access
to your logical server (a PowerShell alternative is sketched after these steps):
1. Go to the Firewalls and virtual networks pane of your Azure SQL logical server.
2. Select the Deny public network access checkbox.
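If you script this step, a minimal sketch with placeholder names (assuming the -PublicNetworkAccess parameter of the Az.Sql module) is:

# Deny all public network access to the logical server, then verify the setting.
Set-AzSqlServer -ResourceGroupName "rg-sql" -ServerName "my-server" -PublicNetworkAccess "Disabled"
(Get-AzSqlServer -ResourceGroupName "rg-sql" -ServerName "my-server").PublicNetworkAccess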

Test connectivity to SQL Database from an Azure VM in the same virtual network


For this scenario, assume you've created an Azure Virtual Machine (VM) running a recent version of Windows in
the same virtual network as the private endpoint.
1. Start a Remote Desktop (RDP) session and connect to the virtual machine.
2. You can then do some basic connectivity checks to ensure that the VM is connecting to SQL Database via
the private endpoint using the following tools:
a. Telnet
b. Psping
c. Nmap
d. SQL Server Management Studio (SSMS)
Check Connectivity using Telnet
Telnet Client is a Windows feature that can be used to test connectivity. Depending on the version of the
Windows OS, you may need to enable this feature explicitly.
Open a Command Prompt window after you have installed Telnet. Run the Telnet command and specify the IP
address of the private endpoint and port 1433 for the database in SQL Database.

>telnet 10.9.0.4 1433

When Telnet connects successfully, you'll see a blank screen in the Command Prompt window.

Use the PowerShell Test-NetConnection cmdlet to check connectivity:

Test-NetConnection -computer myserver.database.windows.net -port 1433

Check Connectivity using Psping


Psping can be used as follows to check that the private endpoint is listening for connections on port 1433.
Run psping as follows by providing the FQDN for the logical SQL server and port 1433:

>psping.exe mysqldbsrvr.database.windows.net:1433
...
TCP connect to 10.9.0.4:1433:
5 iterations (warmup 1) ping test:
Connecting to 10.9.0.4:1433 (warmup): from 10.6.0.4:49953: 2.83ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49954: 1.26ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49955: 1.98ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49956: 1.43ms
Connecting to 10.9.0.4:1433: from 10.6.0.4:49958: 2.28ms

The output shows that Psping could reach the private IP address associated with the private endpoint.
Check connectivity using Nmap
Nmap (Network Mapper) is a free and open-source tool used for network discovery and security auditing. For
more information and the download link, visit https://nmap.org. You can use this tool to ensure that the private
endpoint is listening for connections on port 1433.
Run Nmap as follows by providing the address range of the subnet that hosts the private endpoint.
>nmap -n -sP 10.9.0.0/24
...
Nmap scan report for 10.9.0.4
Host is up (0.00s latency).
Nmap done: 256 IP addresses (1 host up) scanned in 207.00 seconds

The result shows that one IP address is up, which corresponds to the IP address of the private endpoint.
Check connectivity using SQL Server Management Studio (SSMS)

NOTE
Use the Fully Qualified Domain Name (FQDN) of the server in connection strings for your clients
(<server>.database.windows.net). Any login attempt made directly to the IP address or using the private link FQDN
(<server>.privatelink.database.windows.net) will fail. This behavior is by design, because the private endpoint routes traffic
to the SQL gateway in the region and the correct FQDN must be specified for logins to succeed.

Follow the steps here to use SSMS to connect to the SQL Database. After you connect to the SQL Database using
SSMS, the following query returns a client_net_address that matches the private IP address of the Azure VM
you're connecting from:

select client_net_address from sys.dm_exec_connections where session_id=@@SPID

Limitations
Connections to a private endpoint support only Proxy as the connection policy.

On-premises connectivity over private peering


When customers connect to the public endpoint from on-premises machines, their IP address needs to be added
to the IP-based firewall using a Server-level firewall rule. While this model works well for allowing access to
individual machines for dev or test workloads, it's difficult to manage in a production environment.
With Private Link, customers can enable cross-premises access to the private endpoint using ExpressRoute,
private peering, or VPN tunneling. Customers can then disable all access via the public endpoint and not use the
IP-based firewall to allow any IP addresses.

Use cases of Private Link for Azure SQL Database


Clients can connect to the Private endpoint from the same virtual network, peered virtual network in same
region, or via virtual network to virtual network connection across regions. Additionally, clients can connect
from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing
the common use cases.
In addition, services that are not running directly in the virtual network but are integrated with it (for example,
App Service web apps or Functions) can also achieve private connectivity to the database. For more information
on this specific use case, see the Web app with private connectivity to Azure SQL database architecture scenario.

Connecting from an Azure VM in Peered Virtual Network


Configure virtual network peering to establish connectivity to the SQL Database from an Azure VM in a peered
virtual network.

Connecting from an Azure VM in a virtual network to virtual network environment


Configure virtual network to virtual network VPN gateway connection to establish connectivity to a database in
SQL Database from an Azure VM in a different region or subscription.

Connecting from an on-premises environment over VPN


To establish connectivity from an on-premises environment to the database in SQL Database, choose and
implement one of the options:
Point-to-Site connection
Site-to-Site VPN connection
ExpressRoute circuit
Consider DNS configuration scenarios as well, as the FQDN of the service can resolve to the public IP address.

Connecting from Azure Synapse Analytics to Azure Storage using PolyBase and the COPY statement


PolyBase and the COPY statement are commonly used to load data into Azure Synapse Analytics from Azure
Storage accounts. If the Azure Storage account that you're loading data from limits access only to a set of virtual
network subnets via Private Endpoints, Service Endpoints, or IP-based firewalls, the connectivity from PolyBase
and the COPY statement to the account will break. For enabling both import and export scenarios with Azure
Synapse Analytics connecting to Azure Storage that's secured to a virtual network, follow the steps provided
here.

Data exfiltration prevention


Data exfiltration in Azure SQL Database occurs when a user, such as a database admin, is able to extract data from one
system and move it to another location or system outside the organization. For example, the user moves the data
to a storage account owned by a third party.
Consider a scenario with a user running SQL Server Management Studio (SSMS) inside an Azure virtual
machine connecting to a database in SQL Database. This database is in the West US data center. The example
below shows how to limit access with public endpoints on SQL Database using network access controls.
1. Disable all Azure service traffic to SQL Database via the public endpoint by setting Allow Azure Services to
OFF. Ensure no IP addresses are allowed in the server-level and database-level firewall rules. For more
information, see Azure SQL Database and Azure Synapse Analytics network access controls.
2. Only allow traffic to the database in SQL Database using the Private IP address of the VM. For more
information, see the articles on Service Endpoint and virtual network firewall rules.
3. On the Azure VM, narrow down the scope of outgoing connections by using network security groups (NSGs)
and service tags as follows (a minimal PowerShell sketch follows this list):
Specify an NSG rule to allow traffic for Service Tag = Sql.WestUS, allowing connections only to SQL
Database in West US.
Specify an NSG rule (with a higher priority number) to deny traffic for Service Tag = Sql, denying
connections to SQL Database in all regions.
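A minimal sketch of those two rules with placeholder names follows; the deny rule is given a larger priority number so that the allow rule to Sql.WestUS is evaluated first.

# Outbound rule that allows SQL traffic only to the West US region.
$allowSqlWestUS = New-AzNetworkSecurityRuleConfig -Name "Allow-Sql-WestUS" -Access Allow `
    -Direction Outbound -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "Sql.WestUS" -DestinationPortRange 1433

# Outbound rule that denies SQL traffic to all regions (evaluated after the allow rule).
$denySqlAll = New-AzNetworkSecurityRuleConfig -Name "Deny-Sql-All" -Access Deny `
    -Direction Outbound -Priority 110 -Protocol Tcp `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "Sql" -DestinationPortRange 1433

# Create the NSG with both rules; associate it with the VM's subnet or network interface afterwards.
New-AzNetworkSecurityGroup -ResourceGroupName "rg-app" -Location "westus" -Name "nsg-vm" `
    -SecurityRules $allowSqlWestUS, $denySqlAll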
At the end of this setup, the Azure VM can connect only to a database in SQL Database in the West US region.
However, the connectivity isn't restricted to a single database in SQL Database. The VM can still connect to any
database in the West US region, including the databases that aren't part of the subscription. While we've
reduced the scope of data exfiltration in the above scenario to a specific region, we haven't eliminated it
altogether.
With Private Link, customers can now set up network access controls like NSGs to restrict access to the private
endpoint. Individual Azure PaaS resources are then mapped to specific private endpoints. A malicious insider can
only access the mapped PaaS resource (for example a database in SQL Database) and no other resource.

Next steps
For an overview of Azure SQL Database security, see Securing your database
For an overview of Azure SQL Database connectivity, see Azure SQL Connectivity Architecture
You may also be interested in the Web app with private connectivity to Azure SQL database architecture
scenario, which connects a web application outside of the virtual network to the private endpoint of a
database.
Use virtual network service endpoints and rules for
servers in Azure SQL Database
7/12/2022 • 13 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics


Virtual network rules are a firewall security feature that controls whether the server for your databases and
elastic pools in Azure SQL Database or for your dedicated SQL pool (formerly SQL DW) databases in Azure
Synapse Analytics accepts communications that are sent from particular subnets in virtual networks. This article
explains why virtual network rules are sometimes your best option for securely allowing communication to your
database in SQL Database and Azure Synapse Analytics.

NOTE
This article applies to both SQL Database and Azure Synapse Analytics. For simplicity, the term database refers to both
databases in SQL Database and Azure Synapse Analytics. Likewise, any references to server refer to the logical SQL server
that hosts SQL Database and Azure Synapse Analytics.

To create a virtual network rule, there must first be a virtual network service endpoint for the rule to reference.

Create a virtual network rule


If you want to only create a virtual network rule, you can skip ahead to the steps and explanation later in this
article.

Details about virtual network rules


This section describes several details about virtual network rules.
Only one geographic region
Each virtual network service endpoint applies to only one Azure region. The endpoint doesn't enable other
regions to accept communication from the subnet.
Any virtual network rule is limited to the region that its underlying endpoint applies to.
Server level, not database level
Each virtual network rule applies to your whole server, not just to one particular database on the server. In other
words, virtual network rules apply at the server level, not at the database level.
In contrast, IP rules can apply at either level.
Security administration roles
There's a separation of security roles in the administration of virtual network service endpoints. Action is
required from each of the following roles:
Network Admin (Network Contributor role): Turn on the endpoint.
Database Admin (SQL Server Contributor role): Update the access control list (ACL) to add the given
subnet to the server.
Azure RBAC alternative
The roles of Network Admin and Database Admin have more capabilities than are needed to manage virtual
network rules. Only a subset of their capabilities is needed.
You have the option of using role-based access control (RBAC) in Azure to create a single custom role that has
only the necessary subset of capabilities. The custom role could be used instead of involving either the Network
Admin or the Database Admin. The surface area of your security exposure is lower if you add a user to a custom
role versus adding the user to the other two major administrator roles.

NOTE
In some cases, the database in SQL Database and the virtual network subnet are in different subscriptions. In these cases,
you must ensure the following configurations:
Both subscriptions must be in the same Azure Active Directory (Azure AD) tenant.
The user has the required permissions to initiate operations, such as enabling service endpoints and adding a virtual
network subnet to the given server.
Both subscriptions must have the Microsoft.Sql provider registered.

Limitations
For SQL Database, the virtual network rules feature has the following limitations:
In the firewall for your database in SQL Database, each virtual network rule references a subnet. All these
referenced subnets must be hosted in the same geographic region that hosts the database.
Each server can have up to 128 ACL entries for any virtual network.
Virtual network rules apply only to Azure Resource Manager virtual networks and not to classic deployment
model networks.
Turning on virtual network service endpoints to SQL Database also enables the endpoints for Azure Database
for MySQL and Azure Database for PostgreSQL. With endpoints set to ON, attempts to connect from the
endpoints to your Azure Database for MySQL or Azure Database for PostgreSQL instances might fail.
The underlying reason is that Azure Database for MySQL and Azure Database for PostgreSQL likely
don't have a virtual network rule configured. You must configure a virtual network rule for Azure
Database for MySQL and Azure Database for PostgreSQL.
To define virtual network firewall rules on a SQL logical server that's already configured with private
endpoints, set Deny public network access to No.
On the firewall, IP address ranges do apply to the following networking items, but virtual network rules don't:
Site-to-site (S2S) virtual private network (VPN)
On-premises via Azure ExpressRoute
Considerations when you use service endpoints
When you use service endpoints for SQL Database, review the following considerations:
Outbound to Azure SQL Database public IPs is required. Network security groups (NSGs) must be
opened to SQL Database IPs to allow connectivity. You can do this by using NSG service tags for SQL
Database.
ExpressRoute
If you use ExpressRoute from your premises, for public peering or Microsoft peering, you'll need to identify the
NAT IP addresses that are used. For public peering, each ExpressRoute circuit by default uses two NAT IP
addresses applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For
Microsoft peering, the NAT IP addresses that are used are provided by either the customer or the service
provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP
firewall setting. To find your public peering ExpressRoute circuit IP addresses, open a support ticket with
ExpressRoute via the Azure portal. To learn more about NAT for ExpressRoute public and Microsoft peering, see
NAT requirements for Azure public peering.
To allow communication from your circuit to SQL Database, you must create IP network rules for the public IP
addresses of your NAT.

Impact of using virtual network service endpoints with Azure Storage


Azure Storage has implemented the same feature that allows you to limit connectivity to your Azure Storage
account. If you choose to use this feature with an Azure Storage account that SQL Database is using, you can run
into issues. Next is a list and discussion of SQL Database and Azure Synapse Analytics features that are affected
by this.
Azure Synapse Analytics PolyBase and COPY statement
PolyBase and the COPY statement are commonly used to load data into Azure Synapse Analytics from Azure
Storage accounts for high throughput data ingestion. If the Azure Storage account that you're loading data from
limits accesses only to a set of virtual network subnets, connectivity when you use PolyBase and the COPY
statement to the storage account will break. For enabling import and export scenarios by using COPY and
PolyBase with Azure Synapse Analytics connecting to Azure Storage that's secured to a virtual network, follow
the steps in this section.
Prerequisites
Install Azure PowerShell by using this guide.
If you have a general-purpose v1 or Azure Blob Storage account, you must first upgrade to general-purpose
v2 by following the steps in Upgrade to a general-purpose v2 storage account.
You must have Allow trusted Microsoft services to access this storage account turned on under the
Azure Storage account Firewalls and Virtual networks settings menu. Enabling this configuration will
allow PolyBase and the COPY statement to connect to the storage account by using strong authentication
where network traffic remains on the Azure backbone. For more information, see this guide.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by SQL Database, but all future development is for the
Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for
the commands in the Az module and in the AzureRm modules are substantially identical. For more about their
compatibility, see Introducing the new Azure PowerShell Az module.

Steps
1. If you have a standalone dedicated SQL pool, register your SQL server with Azure AD by using
PowerShell:

Connect-AzAccount
Select-AzSubscription -SubscriptionId <subscriptionId>
Set-AzSqlServer -ResourceGroupName your-database-server-resourceGroup -ServerName your-SQL-servername -AssignIdentity

This step isn't required for the dedicated SQL pools within an Azure Synapse Analytics workspace. The
system assigned managed identity (SA-MI) of the workspace is a member of the Synapse Administrator
role and thus has elevated privileges on the dedicated SQL pools of the workspace.
2. Create a general-purpose v2 Storage Account by following the steps in Create a storage account.
If you have a general-purpose v1 or Blob Storage account, you must first upgrade to v2 by following
the steps in Upgrade to a general-purpose v2 storage account.
For known issues with Azure Data Lake Storage Gen2, see Known issues with Azure Data Lake Storage
Gen2.
3. On your storage account page, select Access control (IAM) .
4. Select Add > Add role assignment to open the Add role assignment page.
5. Assign the following role. For detailed steps, see Assign Azure roles using the Azure portal.

SETTING             VALUE

Role                Storage Blob Data Contributor

Assign access to    User, group, or service principal

Members             Server or workspace hosting your dedicated SQL pool that you've registered with Azure AD

NOTE
Only members with Owner privilege on the storage account can perform this step. For various Azure built-in roles,
see Azure built-in roles.

6. To enable PolyBase connectivity to the Azure Storage account:


a. Create a database master key if you haven't created one earlier.

CREATE MASTER KEY [ENCRYPTION BY PASSWORD = 'somepassword'];

b. Create a database-scoped credential with IDENTITY = 'Managed Service Identity'.

CREATE DATABASE SCOPED CREDENTIAL msi_cred WITH IDENTITY = 'Managed Service Identity';

There's no need to specify SECRET with an Azure Storage access key because this
mechanism uses Managed Identity under the covers. This step isn't required for the
dedicated SQL pools within an Azure Synapse Analytics workspace. The system assigned
managed identity (SA-MI) of the workspace is a member of the Synapse Administrator role
and thus has elevated privileges on the dedicated SQL pools of the workspace.
The IDENTITY name must be 'Managed Service Identity' for PolyBase connectivity to
work with an Azure Storage account secured to a virtual network.
c. Create an external data source with the abfss:// scheme for connecting to your general-purpose
v2 storage account using PolyBase.

CREATE EXTERNAL DATA SOURCE ext_datasource_with_abfss WITH (TYPE = hadoop, LOCATION = 'abfss://myfile@mystorageaccount.dfs.core.windows.net', CREDENTIAL = msi_cred);

If you already have external tables associated with a general-purpose v1 or Blob Storage
account, you should first drop those external tables. Then drop the corresponding external data
source. Next, create an external data source with the abfss:// scheme that connects to a
general-purpose v2 storage account, as previously shown. Then re-create all the external tables
by using this new external data source. You could use the Generate and Publish Scripts Wizard
to generate create-scripts for all the external tables for ease.
For more information on the abfss:// scheme, see Use the Azure Data Lake Storage Gen2 URI.
For more information on the T-SQL commands, see CREATE EXTERNAL DATA SOURCE.
d. Query as normal by using external tables.
SQL Database blob auditing
Azure SQL auditing can write SQL audit logs to your own storage account. If this storage account uses the
virtual network service endpoints feature, see how to write audit to a storage account behind VNet and firewall.

Add a virtual network firewall rule to your Azure SQL server


Long ago, before this feature was enhanced, you were required to turn on virtual network service endpoints
before you could implement a live virtual network rule in the firewall. The endpoints related a given virtual
network subnet to a database in SQL Database. As of January 2018, you can circumvent this requirement by
setting the IgnoreMissingVNetServiceEndpoint flag. Now, you can add a virtual network firewall rule to
your server without turning on virtual network service endpoints.
Merely setting a firewall rule doesn't help secure the server. You must also turn on virtual network service
endpoints for the security to take effect. When you turn on service endpoints, your virtual network subnet
experiences downtime until it completes the transition from turned off to on. This period of downtime is
especially true in the context of large virtual networks. You can use the IgnoreMissingVNetServiceEndpoint
flag to reduce or eliminate the downtime during transition.
You can set the IgnoreMissingVNetServiceEndpoint flag by using PowerShell. For more information, see
PowerShell to create a virtual network service endpoint and rule for SQL Database.
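As a minimal sketch with placeholder names, the following adds a virtual network rule even though the subnet's Microsoft.Sql service endpoint hasn't been turned on yet:

# Look up the subnet, then add the rule with the IgnoreMissingVnetServiceEndpoint flag.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "rg-net" -Name "vnet-app"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "subnet-app"

New-AzSqlServerVirtualNetworkRule -ResourceGroupName "rg-sql" -ServerName "my-server" `
    -VirtualNetworkRuleName "vnet-rule-app" -VirtualNetworkSubnetId $subnet.Id `
    -IgnoreMissingVnetServiceEndpoint

Remember that the rule takes full effect only after the Microsoft.Sql service endpoint is turned on for the subnet.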

NOTE
For similar instructions in Azure Synapse Analytics, see Azure Synapse Analytics IP firewall rules

Use Azure portal to create a virtual network rule


This section illustrates how you can use the Azure portal to create a virtual network rule in your database in SQL
Database. The rule tells your database to accept communication from a particular subnet that's been tagged as
being a virtual network service endpoint.
NOTE
If you intend to add a service endpoint to the virtual network firewall rules of your server, first ensure that service
endpoints are turned on for the subnet.
If service endpoints aren't turned on for the subnet, the portal asks you to enable them. Select the Enable button on the
same pane on which you add the rule.

Prerequisites
You must already have a subnet that's tagged with the particular virtual network service endpoint type name
relevant to SQL Database.
The relevant endpoint type name is Microsoft.Sql.
If your subnet might not be tagged with the type name, see Verify your subnet is an endpoint.

Azure portal steps


1. Sign in to the Azure portal.
2. Search for and select SQL servers, and then select your server. Under Security, select Firewalls and
virtual networks.

3. Set Allow Azure services and resources to access this server to No.

IMPORTANT
If you leave the control set to ON, your server accepts communication from any subnet inside the Azure
boundary. That is, communication that originates from one of the IP addresses recognized as being within the
ranges defined for Azure datacenters. Leaving the control set to ON might be excessive access from a security
point of view. The Microsoft Azure Virtual Network service endpoint feature in coordination with the virtual
network rules feature of SQL Database together can reduce your security surface area.

4. Select + Add existing virtual network in the Virtual networks section.
5. In the new Create/Update pane, fill in the boxes with the names of your Azure resources.

TIP
You must include the correct address prefix for your subnet. You can find the Address prefix value in the portal.
Go to All resources > All types > Virtual networks. The filter displays your virtual networks. Select your
virtual network, and then select Subnets. The ADDRESS RANGE column has the address prefix you need.
6. Select the OK button near the bottom of the pane.
7. See the resulting virtual network rule on the Firewall pane.

NOTE
The following statuses or states apply to the rules:
Ready : Indicates that the operation you initiated has succeeded.
Failed : Indicates that the operation you initiated has failed.
Deleted : Only applies to the Delete operation and indicates that the rule has been deleted and no longer applies.
InProgress : Indicates that the operation is in progress. The old rule applies while the operation is in this state.

Use PowerShell to create a virtual network rule


A script can also create virtual network rules by using the PowerShell cmdlet New-AzSqlServerVirtualNetworkRule
or the Azure CLI command az sql server vnet-rule create. For more information, see PowerShell to create a virtual network service endpoint
and rule for SQL Database.

Use REST API to create a virtual network rule


Internally, the PowerShell cmdlets for SQL virtual network actions call REST APIs. You can call the REST APIs
directly. For more information, see Virtual network rules: Operations.

Troubleshoot errors 40914 and 40615


Connection error 40914 relates to virtual network rules, as specified on the Firewall pane in the Azure portal.
Error 40615 is similar, except it relates to IP address rules on the firewall.
Error 40914
Message text: "Cannot open server '[server-name]' requested by the login. Client is not allowed to access the
server."
Error description: The client is in a subnet that has virtual network service endpoints. But the server has no
virtual network rule that grants the subnet the right to communicate with the database.
Error resolution: On the Firewall pane of the Azure portal, use the virtual network rules control to add a
virtual network rule for the subnet.
Error 40615
Message text: "Cannot open server '{0}' requested by the login. Client with IP address '{1}' is not allowed to
access the server."
Error description: The client is trying to connect from an IP address that isn't authorized to connect to the
server. The server firewall has no IP address rule that allows a client to communicate from the given IP address
to the database.
Error resolution: Enter the client's IP address as an IP rule. Use the Firewall pane in the Azure portal to do this
step.

Related articles
Azure virtual network service endpoints
Server-level and database-level firewall rules

Next steps
Use PowerShell to create a virtual network service endpoint and then a virtual network rule for SQL
Database
Virtual network rules: Operations with REST APIs
Azure SQL Database server roles for permission
management
7/12/2022 • 8 minutes to read

NOTE
The fixed server-level roles in this article are in public preview for Azure SQL Database. These server-level roles are also
part of the release for SQL Server 2022.

APPLIES TO: Azure SQL Database


In Azure SQL Database, the server is a logical concept and permissions can't be granted on a server level. To
simplify permission management, Azure SQL Database provides a set of fixed server-level roles to help you
manage the permissions on a logical server. Roles are security principals that group logins.

NOTE
The concept of roles in this article is similar to groups in the Windows operating system.

These special fixed server-level roles use the prefix ##MS_ and the suffix ## to distinguish from other regular
user-created principals.
Like SQL Server on-premises, server permissions are organized hierarchically. The permissions that are held by
these server-level roles can propagate to database permissions. For the permissions to be effectively useful at
the database level, a login needs to either be a member of the server-level role ##MS_DatabaseConnector##,
which grants CONNECT to all databases, or have a user account in individual databases. This also applies to the
virtual master database.
For example, the server-level role ##MS_ServerStateReader## holds the permission VIEW SERVER STATE.
If a login who is a member of this role has a user account in the databases master and WideWorldImporters, this
user will have the permission VIEW DATABASE STATE in those two databases.

NOTE
Any permission can be denied within user databases, in effect, overriding the server-wide grant via role membership.
However, in the system database master, permissions cannot be granted or denied.

Azure SQL Database currently provides 7 fixed server roles. The permissions that are granted to the fixed server
roles can't be changed and these roles can't have other fixed roles as members. You can add server-level logins
as members to server-level roles.

IMPORTANT
Each member of a fixed server role can add other logins to that same role.

For more information on Azure SQL Database logins and users, see Authorize database access to SQL Database,
SQL Managed Instance, and Azure Synapse Analytics.
Fixed server-level roles
The following list describes the fixed server-level roles and their capabilities.

##MS_DatabaseConnector##: Members of the ##MS_DatabaseConnector## fixed server role can connect to any database without requiring a user account in the database to connect to. To deny the CONNECT permission to a specific database, users can create a matching user account for this login in the database and then DENY the CONNECT permission to the database user. This DENY permission will overrule the GRANT CONNECT permission coming from this role.

##MS_DatabaseManager##: Members of the ##MS_DatabaseManager## fixed server role can create and delete databases. A member of the ##MS_DatabaseManager## role that creates a database becomes the owner of that database, which allows that user to connect to that database as the dbo user. The dbo user has all database permissions in the database. Members of the ##MS_DatabaseManager## role don't necessarily have permission to access databases that they don't own. It's recommended to use this server role over the dbmanager database-level role that exists in master.

##MS_DefinitionReader##: Members of the ##MS_DefinitionReader## fixed server role can read all catalog views that are covered by VIEW ANY DEFINITION, respectively VIEW DEFINITION on any database on which the member of this role has a user account.

##MS_LoginManager##: Members of the ##MS_LoginManager## fixed server role can create and delete logins. It's recommended to use this server role over the loginmanager database-level role that exists in master.

##MS_SecurityDefinitionReader##: Members of the ##MS_SecurityDefinitionReader## fixed server role can read all catalog views that are covered by VIEW ANY SECURITY DEFINITION, and respectively have VIEW SECURITY DEFINITION permission on any database on which the member of this role has a user account. This is a small subset of what the ##MS_DefinitionReader## server role has access to.

##MS_ServerStateReader##: Members of the ##MS_ServerStateReader## fixed server role can read all dynamic management views (DMVs) and functions that are covered by VIEW SERVER STATE, respectively VIEW DATABASE STATE on any database on which the member of this role has a user account.

##MS_ServerStateManager##: Members of the ##MS_ServerStateManager## fixed server role have the same permissions as the ##MS_ServerStateReader## role. In addition, it holds the ALTER SERVER STATE permission, which allows access to several management operations, such as DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE ('ALL'), and DBCC SQLPERF().
Permissions of fixed server roles
Each fixed server-level role has certain permissions assigned to it. The following table shows the permissions
assigned to the server-level roles. It also shows the inherited database-level permissions as long as the user can
connect to individual databases.

FIXED SERVER-LEVEL ROLE            SERVER-LEVEL PERMISSIONS                                     DATABASE-LEVEL PERMISSIONS (IF A MATCHING DATABASE USER EXISTS)

##MS_DatabaseConnector##           CONNECT ANY DATABASE                                         CONNECT

##MS_DatabaseManager##             CREATE ANY DATABASE, ALTER ANY DATABASE                      ALTER

##MS_DefinitionReader##            VIEW ANY DATABASE, VIEW ANY DEFINITION,                      VIEW DEFINITION, VIEW SECURITY DEFINITION
                                   VIEW ANY SECURITY DEFINITION

##MS_LoginManager##                CREATE LOGIN, ALTER ANY LOGIN                                N/A

##MS_SecurityDefinitionReader##    VIEW ANY SECURITY DEFINITION                                 VIEW SECURITY DEFINITION

##MS_ServerStateReader##           VIEW SERVER STATE, VIEW SERVER PERFORMANCE STATE,            VIEW DATABASE STATE, VIEW DATABASE PERFORMANCE STATE,
                                   VIEW SERVER SECURITY STATE                                   VIEW DATABASE SECURITY STATE

##MS_ServerStateManager##          ALTER SERVER STATE, VIEW SERVER STATE,                       VIEW DATABASE STATE, VIEW DATABASE PERFORMANCE STATE,
                                   VIEW SERVER PERFORMANCE STATE, VIEW SERVER SECURITY STATE    VIEW DATABASE SECURITY STATE

Working with server-level roles


The following table explains the system views and functions that you can use to work with server-level roles in
Azure SQL Database.

FEATURE                                    TYPE       DESCRIPTION

IS_SRVROLEMEMBER (Transact-SQL)            Metadata   Indicates whether a SQL login is a member of the specified server-level role.

sys.server_role_members (Transact-SQL)     Metadata   Returns one row for each member of each server-level role.

sys.sql_logins (Transact-SQL)              Metadata   Returns one row for each SQL login.

ALTER SERVER ROLE (Transact-SQL)           Command    Changes the membership of a server role.

Examples
The examples in this section show how to work with server-level roles in Azure SQL Database.
A. Adding a SQL login to a server-level role
The following example adds the SQL login 'Jiao' to the server-level role ##MS_ServerStateReader##. This
statement has to be run in the virtual master database.

ALTER SERVER ROLE ##MS_ServerStateReader##
    ADD MEMBER Jiao;
GO

B. Listing all principals (SQL authentication) which are members of a server-level role
The following statement returns all members of any fixed server-level role using the sys.server_role_members
and sys.sql_logins catalog views. This statement has to be run in the virtual master database.

SELECT
sql_logins.principal_id AS MemberPrincipalID
, sql_logins.name AS MemberPrincipalName
, roles.principal_id AS RolePrincipalID
, roles.name AS RolePrincipalName
FROM sys.server_role_members AS server_role_members
INNER JOIN sys.server_principals AS roles
ON server_role_members.role_principal_id = roles.principal_id
INNER JOIN sys.sql_logins AS sql_logins
ON server_role_members.member_principal_id = sql_logins.principal_id
;
GO

C. Complete example: Adding a login to a server-level role, retrieving metadata for role membership and
permissions, and running a test query
Part 1: Preparing role membership and user account
Run this command from the virtual master database.

ALTER SERVER ROLE ##MS_ServerStateReader##
    ADD MEMBER Jiao

-- check membership in metadata:
select IS_SRVROLEMEMBER('##MS_ServerStateReader##', 'Jiao')
--> 1 = Yes

SELECT
sql_logins.principal_id AS MemberPrincipalID
, sql_logins.name AS MemberPrincipalName
, roles.principal_id AS RolePrincipalID
, roles.name AS RolePrincipalName
FROM sys.server_role_members AS server_role_members
INNER JOIN sys.server_principals AS roles
ON server_role_members.role_principal_id = roles.principal_id
INNER JOIN sys.sql_logins AS sql_logins
ON server_role_members.member_principal_id = sql_logins.principal_id
;
GO

Here's the result set.

MemberPrincipalID MemberPrincipalName RolePrincipalID RolePrincipalName


------------- ------------- ------------------ -----------
6 Jiao 11 ##MS_ServerStateReader##

Run this command from a user database.


-- Creating a database-User for 'Jiao'
CREATE USER Jiao
FROM LOGIN Jiao
;
GO

Part 2: Testing role membership


Log in as login Jiao and connect to the user database used in the example.

-- retrieve server-level permissions of currently logged on User


SELECT * FROM sys.fn_my_permissions(NULL, 'Server')
;

-- check server-role membership for `##MS_ServerStateReader##` of currently logged on User


SELECT USER_NAME(), IS_SRVROLEMEMBER('##MS_ServerStateReader##')
--> 1 = Yes

-- Does the currently logged in User have the `VIEW DATABASE STATE`-permission?
SELECT HAS_PERMS_BY_NAME(NULL, 'DATABASE', 'VIEW DATABASE STATE');
--> 1 = Yes

-- retrieve database-level permissions of currently logged on User


SELECT * FROM sys.fn_my_permissions(NULL, 'DATABASE')
GO

-- example query:
SELECT * FROM sys.dm_exec_query_stats
--> will return data since this user has the necessary permission

D. Check server-level roles for Azure AD logins


Run this command in the virtual master database to see all Azure AD logins that are part of server-level roles in
SQL Database. For more information on Azure AD server logins, see Azure Active Directory server principals.

SELECT roles.principal_id AS RolePID, roles.name AS RolePName,
    server_role_members.member_principal_id AS MemberPID, members.name AS MemberPName
FROM sys.server_role_members AS server_role_members
INNER JOIN sys.server_principals AS roles
    ON server_role_members.role_principal_id = roles.principal_id
INNER JOIN sys.server_principals AS members
    ON server_role_members.member_principal_id = members.principal_id;

E. Check the virtual master database roles for specific logins


Run this command in the virtual master database to check which roles bob has, or change the value to match
your principal.

SELECT DR1.name AS DbRoleName, isnull(DR2.name, 'No members') AS DbUserName
FROM sys.database_role_members AS DbRMem
RIGHT OUTER JOIN sys.database_principals AS DR1
    ON DbRMem.role_principal_id = DR1.principal_id
LEFT OUTER JOIN sys.database_principals AS DR2
    ON DbRMem.member_principal_id = DR2.principal_id
WHERE DR1.type = 'R' and DR2.name like 'bob%'

Limitations of server-level roles


Role assignments may take up to 5 minutes to become effective. Also for existing sessions, changes to
server role assignments don't take effect until the connection is closed and reopened. This is due to the
distributed architecture between the master database and other databases on the same logical server.
Partial workaround: to reduce the waiting period and ensure that server role assignments are current
in a database, a server administrator, or an Azure AD administrator can run DBCC FLUSHAUTHCACHE in
the user database(s) on which the login has access. Current logged on users still have to reconnect
after running DBCC FLUSHAUTHCACHE for the membership changes to take effect on them.
IS_SRVROLEMEMBER() isn't supported in the master database.

See also
Database-Level Roles
Security Catalog Views (Transact-SQL)
Security Functions (Transact-SQL)
Permissions (Database Engine)
DBCC FLUSHAUTHCACHE (Transact-SQL)
Active geo-replication
7/12/2022 • 19 minutes to read

APPLIES TO: Azure SQL Database


Active geo-replication is a feature that lets you create a continuously synchronized readable secondary
database for a primary database. The readable secondary database may be in the same Azure region as the
primary, or, more commonly, in a different region. These readable secondary databases are also known as
geo-secondaries, or geo-replicas.
Active geo-replication is designed as a business continuity solution that lets you perform quick disaster recovery
of individual databases in case of a regional disaster or a large scale outage. Once geo-replication is set up, you
can initiate a geo-failover to a geo-secondary in a different Azure region. The geo-failover is initiated
programmatically by the application or manually by the user.

NOTE
Active geo-replication is not supported by Azure SQL Managed Instance. For geographic failover of instances of SQL
Managed Instance, use Auto-failover groups.

NOTE
To migrate SQL databases from Azure Germany using active geo-replication, see Migrate SQL Database using active geo-
replication.

If your application requires a stable connection endpoint and automatic geo-failover support in addition to geo-
replication, use Auto-failover groups.
The following diagram illustrates a typical configuration of a geo-redundant cloud application using Active geo-
replication.
If for any reason your primary database fails, you can initiate a geo-failover to any of your secondary databases.
When a secondary is promoted to the primary role, all other secondaries are automatically linked to the new
primary.
You can manage geo-replication and initiate a geo-failover using the following:
The Azure portal
PowerShell: Single database
PowerShell: Elastic pool
Transact-SQL: Single database or elastic pool
REST API: Single database
Active geo-replication leverages the Always On availability group technology to asynchronously replicate the
transaction log generated on the primary replica to all geo-replicas. While a secondary database might be
slightly behind the primary database at any given point in time, the data on a secondary is guaranteed to be
transactionally consistent. In other words, changes made by uncommitted transactions are not visible.

NOTE
Active geo-replication replicates changes by streaming database transaction log from the primary replica to secondary
replicas. It is unrelated to transactional replication, which replicates changes by executing DML (INSERT, UPDATE, DELETE)
commands on subscribers.

Regional redundancy provided by geo-replication enables applications to quickly recover from a permanent loss
of an entire Azure region, or parts of a region, caused by natural disasters, catastrophic human errors, or
malicious acts. Geo-replication RPO can be found in Overview of Business Continuity.
The following figure shows an example of active geo-replication configured with a primary in the North Central
US region and a geo-secondary in the South Central US region.

In addition to disaster recovery, active geo-replication can be used in the following scenarios:
Database migration: You can use active geo-replication to migrate a database from one server to another
with minimum downtime.
Application upgrades: You can create an extra secondary as a failback copy during application upgrades.
To achieve full business continuity, adding database regional redundancy is only a part of the solution.
Recovering an application (service) end-to-end after a catastrophic failure requires recovery of all components
that constitute the service and any dependent services. Examples of these components include the client
software (for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that
all components are resilient to the same failures and become available within the recovery time objective (RTO)
of your application. Therefore, you need to identify all dependent services and understand the guarantees and
capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the
failover of the services on which it depends. For more information about designing solutions for disaster
recovery, see Designing Cloud Solutions for Disaster Recovery Using active geo-replication.

Active geo-replication terminology and capabilities


Automatic asynchronous replication
You can only create a geo-secondary for an existing database. The geo-secondary can be created on any
logical server, other than the server with the primary database. Once created, the geo-secondary replica
is populated with the data of the primary database. This process is known as seeding. After a geo-
secondary has been created and seeded, updates to the primary database are automatically and
asynchronously replicated to the geo-secondary replica. Asynchronous replication means that
transactions are committed on the primary database before they are replicated.
Readable geo-secondary replicas
An application can access a geo-secondary replica to execute read-only queries using the same or
different security principals used for accessing the primary database. For more information, see Use
read-only replicas to offload read-only query workloads.

IMPORTANT
You can use geo-replication to create secondary replicas in the same region as the primary. You can use these
secondaries to satisfy read scale-out scenarios in the same region. However, a secondary replica in the same
region does not provide additional resilience to catastrophic failures or large scale outages, and therefore is not a
suitable failover target for disaster recovery purposes. It also does not guarantee availability zone isolation. Use
the zone-redundant configuration of the Business Critical or Premium service tiers, or the zone-redundant
configuration of the General Purpose service tier, to achieve availability zone isolation.

Planned geo-failover
Planned geo-failover switches the roles of primary and geo-secondary databases after completing full
data synchronization. A planned failover does not result in data loss. The duration of planned geo-failover
depends on the size of transaction log on the primary that needs to be synchronized to the geo-
secondary. Planned geo-failover is designed for the following scenarios:
Perform DR drills in production when the data loss is not acceptable;
Relocate the database to a different region;
Return the database to the primary region after the outage has been mitigated (known as failback).
Unplanned geo-failover
Unplanned, or forced, geo-failover immediately switches the geo-secondary to the primary role without
any synchronization with the primary. Any transactions committed on the primary but not yet replicated
to the secondary are lost. This operation is designed as a recovery method during outages when the
primary is not accessible, but database availability must be quickly restored. When the original primary is
back online, it will be automatically re-connected, reseeded using the current primary data, and become a
new geo-secondary.

IMPORTANT
After either planned or unplanned geo-failover, the connection endpoint for the new primary changes because the
new primary is now located on a different logical server.

Multiple readable geo-secondaries


Up to four geo-secondaries can be created for a primary. If there is only one secondary, and it fails, the
application is exposed to higher risk until a new secondary is created. If multiple secondaries exist, the
application remains protected even if one of the secondaries fails. Additional secondaries can also be
used to scale out read-only workloads.
TIP
If you are using active geo-replication to build a globally distributed application and need to provide read-only
access to data in more than four regions, you can create a secondary of a secondary (a process known as
chaining) to create additional geo-replicas. Replication lag on chained geo-replicas may be higher than on geo-
replicas connected directly to the primary. Setting up chained geo-replication topologies is only supported
programmatically, and not from Azure portal.

Geo-replication of databases in an elastic pool


Each geo-secondary can be a single database or a database in an elastic pool. The elastic pool choice for
each geo-secondary database is separate and does not depend on the configuration of any other replica
in the topology (either primary or secondary). Each elastic pool is contained within a single logical server.
Because database names on a logical server must be unique, multiple geo-secondaries of the same
primary can never share an elastic pool.
User-controlled geo-failover and failback
A geo-secondary that has finished initial seeding can be explicitly switched to the primary role (failed
over) at any time by the application or the user. During an outage where the primary is inaccessible, only
an unplanned geo-failover can be used. That immediately promotes a geo-secondary to be the new
primary. When the outage is mitigated, the system automatically makes the recovered primary a geo-
secondary, and brings it up-to-date with the new primary. Due to the asynchronous nature of geo-
replication, recent transactions may be lost during unplanned geo-failovers if the primary fails before
these transactions are replicated to a geo-secondary. When a primary with multiple geo-secondaries fails
over, the system automatically reconfigures replication relationships and links the remaining geo-
secondaries to the newly promoted primary, without requiring any user intervention. After the outage
that caused the geo-failover is mitigated, it may be desirable to return the primary to its original region.
To do that, invoke a planned geo-failover.

Prepare for geo-failover


To ensure that your application can immediately access the new primary after geo-failover, validate that
authentication and network access for your secondary server are properly configured. For details, see SQL
Database security after disaster recovery. Also validate that backup retention policy on the secondary database
matches that of the primary. This setting is not a part of the database and is not replicated from the primary. By
default, the geo-secondary is configured with a default PITR retention period of seven days. For details, see SQL
Database automated backups.

IMPORTANT
If your database is a member of a failover group, you cannot initiate its failover using the geo-replication failover
command. Use the failover command for the group. If you need to fail over an individual database, you must remove it
from the failover group first. See Auto-failover groups for details.

Configure geo-secondary
Both primary and geo-secondary are required to have the same service tier. It is also strongly recommended
that the geo-secondary is configured with the same backup storage redundancy and compute size (DTUs or
vCores) as the primary. If the primary is experiencing a heavy write workload, a geo-secondary with a lower
compute size may not be able to keep up. That will cause replication lag on the geo-secondary, and may
eventually cause unavailability of the geo-secondary. To mitigate these risks, active geo-replication will reduce
(throttle) the primary's transaction log rate if necessary to allow its secondaries to catch up.
Another consequence of an imbalanced geo-secondary configuration is that after failover, application
performance may suffer due to insufficient compute capacity of the new primary. In that case, it will be
necessary to scale up the database to have sufficient resources, which may take significant time, and will require
a high availability failover at the end of the scale up process, which may interrupt application workloads.
If you decide to create the geo-secondary with a lower compute size, you should monitor log IO rate on the
primary over time. This lets you estimate the minimal compute size of the geo-secondary required to sustain
the replication load. For example, if your primary database is P6 (1000 DTU) and its log IO is sustained at 50%,
the geo-secondary needs to be at least P4 (500 DTU). To retrieve historical log IO data, use the sys.resource_stats
view. To retrieve recent log IO data with higher granularity that better reflects short-term spikes, use the
sys.dm_db_resource_stats view.
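
For example, the following query, run on the primary database, is a minimal sketch that summarizes recent log
write utilization from sys.dm_db_resource_stats (which keeps roughly one hour of 15-second samples) to help
estimate the compute size the geo-secondary needs.

-- Run on the primary database: recent log write utilization (about one hour of 15-second samples)
SELECT MAX(avg_log_write_percent) AS peak_log_write_percent,
       AVG(avg_log_write_percent) AS average_log_write_percent
FROM sys.dm_db_resource_stats;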

TIP
Transaction log IO throttling on the primary due to lower compute size on a geo-secondary is reported using the
HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO wait type, visible in the sys.dm_exec_requests and sys.dm_os_wait_stats
dynamic management views.
Transaction log IO on the primary may be throttled for reasons unrelated to lower compute size on a geo-secondary. This
kind of throttling may occur even if the geo-secondary has the same or higher compute size than the primary. For details,
including wait types for different kinds of log IO throttling, see Transaction log rate governance.

By default, backup storage redundancy of the geo-secondary is same as for the primary database. You can
choose to configure a geo-secondary with a different backup storage redundancy. Backups are always taken on
the primary database. If the secondary is configured with a different backup storage redundancy, then after a
geo-failover, when the geo-secondary is promoted to the primary, new backups will be stored and billed
according to the type of storage (RA-GRS, ZRS, LRS) selected on the new primary (previous secondary).

Cross-subscription geo-replication
To create a geo-secondary in a subscription different from the subscription of the primary (whether under the
same Azure Active Directory tenant or not), follow the steps in this section.
1. Add the IP address of the client machine executing the T-SQL commands below to the server firewalls of
both the primary and secondary servers. You can confirm that IP address by executing the following
query while connected to the primary server from the same client machine.

select client_net_address from sys.dm_exec_connections where session_id = @@SPID;

For more information, see Configure firewall.


2. In the master database on the primary server, create a SQL authentication login dedicated to active geo-
replication setup. Adjust login name and password as needed.

create login geodrsetup with password = 'ComplexPassword01';

3. In the same database, create a user for the login, and add it to the dbmanager role:

create user geodrsetup for login geodrsetup;
alter role dbmanager add member geodrsetup;

4. Take note of the SID value of the new login. Obtain the SID value using the following query.
select sid from sys.sql_logins where name = 'geodrsetup';

5. Connect to the primary database (not the master database), and create a user for the same login.

create user geodrsetup for login geodrsetup;

6. In the same database, add the user to the db_owner role.

alter role db_owner add member geodrsetup;

7. In the master database on the secondary server, create the same login as on the primary server, using
the same name, password, and SID. Replace the hexadecimal SID value in the sample command below
with the one obtained in Step 4.

create login geodrsetup with password = 'ComplexPassword01',
sid = 0x010600000000006400000000000000001C98F52B95D9C84BBBA8578FACE37C3E;

8. In the same database, create a user for the login, and add it to the dbmanager role.

create user geodrsetup for login geodrsetup;
alter role dbmanager add member geodrsetup;

9. Connect to the master database on the primary server using the new geodrsetup login, and initiate geo-
secondary creation on the secondary server. Adjust database name and secondary server name as
needed. Once the command is executed, you can monitor geo-secondary creation by querying the
sys.dm_geo_replication_link_status view in the primary database, and the sys.dm_operation_status view
in the master database on the primary server. The time needed to create a geo-secondary depends on
the primary database size.

alter database [dbrep] add secondary on server [servername];

10. After the geo-secondary is successfully created, the users, logins, and firewall rules created by this
procedure can be removed.

NOTE
Cross-subscription geo-replication operations including setup and geo-failover are only supported using REST API & T-
SQL commands.
Adding a geo-secondary using T-SQL is not supported when connecting to the primary server over a private endpoint. If
a private endpoint is configured but public network access is allowed, adding a geo-secondary is supported when
connected to the primary server from a public IP address. Once a geo-secondary is added, public access can be denied.
Creating a geo-secondary on a logical server in a different Azure tenant is not supported when Azure Active Directory
only authentication for Azure SQL is active (enabled) on either primary or secondary logical server.

Keep credentials and firewall rules in sync


When using public network access for connecting to the database, we recommend using database-level IP
firewall rules for geo-replicated databases. These rules are replicated with the database, which ensures that all
geo-secondaries have the same IP firewall rules as the primary. This approach eliminates the need for customers
to manually configure and maintain firewall rules on servers hosting the primary and secondary databases.
Similarly, using contained database users for data access ensures both primary and secondary databases always
have the same authentication credentials. This way, after a geo-failover, there are no disruptions due to
authentication credential mismatches. If you are using logins and users (rather than contained users), you must
take extra steps to ensure that the same logins exist for your secondary database. For configuration details see
How to configure logins and users.

Scale primary database


You can scale up or scale down the primary database to a different compute size (within the same service tier)
without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-
secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary
first, and then scale down the secondary.

NOTE
If you created a geo-secondary as part of failover group configuration, it is not recommended to scale it down. This is to
ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.

IMPORTANT
The primary database in a failover group can't scale to a higher service tier (edition) unless the secondary database is first
scaled to the higher tier. For example, if you want to scale up the primary from General Purpose to Business Critical, you
have to first scale the geo-secondary to Business Critical. If you try to scale the primary or geo-secondary in a way that
violates this rule, you will receive the following error:
The source database 'Primaryserver.DBName' cannot have higher edition than the target database
'Secondaryserver.DBName'. Upgrade the edition on the target before upgrading the source.

Prevent loss of critical data


Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism.
Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical
transactions from data loss, an application developer can call the sp_wait_for_database_copy_sync stored
procedure immediately after committing the transaction. Calling sp_wait_for_database_copy_sync blocks the
calling thread until the last committed transaction has been transmitted and hardened in the transaction log of
the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on
the secondary. sp_wait_for_database_copy_sync is scoped to a specific geo-replication link. Any user with the
connection rights to the primary database can call this procedure.
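
As a minimal sketch, the pattern looks like the following. The table name is a placeholder, and the parameter
names used to identify the geo-replication link by its target server and database are shown as commonly
documented; verify them against the sp_wait_for_database_copy_sync reference before relying on them.

-- Commit a critical transaction, then block until it is hardened on the geo-secondary.
-- dbo.Orders is a placeholder table; @target_server/@target_database identify the geo-replication link
-- (verify the parameter names against the sp_wait_for_database_copy_sync reference).
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Confirmed' WHERE OrderId = 12345;
COMMIT TRANSACTION;

EXEC sys.sp_wait_for_database_copy_sync
    @target_server = N'secondaryserver',
    @target_database = N'mydb';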

NOTE
sp_wait_for_database_copy_sync prevents data loss after geo-failover for specific transactions, but does not guarantee
full synchronization for read access. The delay caused by a sp_wait_for_database_copy_sync procedure call can be
significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.

Monitor geo-replication lag


To monitor lag with respect to RPO, use replication_lag_sec column of sys.dm_geo_replication_link_status on the
primary database. It shows lag in seconds between the transactions committed on the primary, and hardened to
the transaction log on the secondary. For example, if the lag is one second, it means that if the primary is
impacted by an outage at this moment and a geo-failover is initiated, transactions committed in the last second
will be lost.
To measure lag with respect to changes on the primary database that have been hardened on the geo-
secondary, compare last_commit time on the geo-secondary with the same value on the primary.
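
For example, the following query, run on the primary database, is a simple way to check the current lag for each
geo-replication link.

-- Run on the primary database: lag in seconds for each geo-replication link
SELECT partner_server,
       partner_database,
       replication_state_desc,
       replication_lag_sec,
       last_replication
FROM sys.dm_geo_replication_link_status;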

TIP
If replication_lag_sec on the primary is NULL, it means that the primary does not currently know how far behind a geo-
secondary is. This typically happens after process restarts and should be a transient condition. Consider sending an alert if
replication_lag_sec returns NULL for an extended period of time. It may indicate that the geo-secondary cannot
communicate with the primary due to a connectivity failure.
There are also conditions that could cause the difference between last_commit time on the geo-secondary and on the
primary to become large. For example, if a commit is made on the primary after a long period of no changes, the
difference will jump up to a large value before quickly returning to zero. Consider sending an alert if the difference
between these two values remains large for a long time.

Programmatically manage active geo-replication


As discussed previously, active geo-replication can also be managed programmatically using T-SQL, Azure
PowerShell, and REST API. The following tables describe the set of commands available. Active geo-replication
includes a set of Azure Resource Manager APIs for management, including the Azure SQL Database REST API
and Azure PowerShell cmdlets. These APIs support Azure role-based access control (Azure RBAC). For more
information on how to implement access roles, see Azure role-based access control (Azure RBAC).
T-SQL: Manage geo-failover of single and pooled databases

IMPORTANT
These T-SQL commands only apply to active geo-replication and do not apply to failover groups. As such, they also do
not apply to SQL Managed Instance, which only supports failover groups.

COMMAND | DESCRIPTION

ALTER DATABASE ... ADD SECONDARY ON SERVER | Creates a secondary database for an existing database and starts data replication.

ALTER DATABASE ... FAILOVER / FORCE_FAILOVER_ALLOW_DATA_LOSS | Switches a secondary database to the primary role to initiate failover.

ALTER DATABASE ... REMOVE SECONDARY ON SERVER | Terminates data replication between a SQL Database and the specified secondary database.

sys.geo_replication_links | Returns information about all existing replication links for each database on a server.

sys.dm_geo_replication_link_status | Gets the last replication time, last replication lag, and other information about the replication link for a given database.

sys.dm_operation_status | Shows the status for all database operations, including changes to replication links.

sys.sp_wait_for_database_copy_sync | Causes the application to wait until all committed transactions are hardened to the transaction log of a geo-secondary.
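
For illustration, the following sketch shows the typical ALTER DATABASE sequence, assuming a database named
mydb and a secondary logical server named secondaryserver (both placeholders).

-- Run in the master database of the primary server: create a geo-secondary and start replication
ALTER DATABASE [mydb] ADD SECONDARY ON SERVER [secondaryserver];

-- Run in the master database of the secondary server: planned failover (no data loss)
ALTER DATABASE [mydb] FAILOVER;

-- Run in the master database of the secondary server: forced failover (possible data loss)
ALTER DATABASE [mydb] FORCE_FAILOVER_ALLOW_DATA_LOSS;

-- Run in the master database of the primary server: remove the geo-replication link
ALTER DATABASE [mydb] REMOVE SECONDARY ON SERVER [secondaryserver];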

PowerShell: Manage geo-failover of single and pooled databases

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.

CMDLET | DESCRIPTION

Get-AzSqlDatabase | Gets one or more databases.

New-AzSqlDatabaseSecondary | Creates a secondary database for an existing database and starts data replication.

Set-AzSqlDatabaseSecondary | Switches a secondary database to be primary to initiate failover.

Remove-AzSqlDatabaseSecondary | Terminates data replication between a SQL Database and the specified secondary database.

Get-AzSqlDatabaseReplicationLink | Gets the geo-replication links for a database.
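
As a minimal sketch (resource group, server, and database names are placeholders), creating a geo-secondary
and later failing over to it might look like this.

# Create a readable geo-secondary on another server and start data replication
New-AzSqlDatabaseSecondary -ResourceGroupName "PrimaryRG" -ServerName "primaryserver" -DatabaseName "mydb" `
    -PartnerResourceGroupName "SecondaryRG" -PartnerServerName "secondaryserver" -AllowConnections "All"

# Run against the geo-secondary to initiate a planned failover (switch roles)
Set-AzSqlDatabaseSecondary -ResourceGroupName "SecondaryRG" -ServerName "secondaryserver" -DatabaseName "mydb" `
    -PartnerResourceGroupName "PrimaryRG" -Failover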

TIP
For sample scripts, see Configure and failover a single database using active geo-replication and Configure and failover a
pooled database using active geo-replication.

REST API: Manage geo-failover of single and pooled databases


API | DESCRIPTION

Create or Update Database (createMode=Restore) | Creates, updates, or restores a primary or a secondary database.

Get Create or Update Database Status | Returns the status during a create operation.

Set Secondary Database as Primary (Planned Failover) | Sets which secondary database is primary by failing over from the current primary database. This option is not supported for SQL Managed Instance.

Set Secondary Database as Primary (Unplanned Failover) | Sets which secondary database is primary by failing over from the current primary database. This operation might result in data loss. This option is not supported for SQL Managed Instance.

Get Replication Link | Gets a specific replication link for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view. This option is not supported for SQL Managed Instance.

Replication Links - List By Database | Gets all replication links for a given database in a geo-replication partnership. It retrieves the information visible in the sys.geo_replication_links catalog view.

Delete Replication Link | Deletes a database replication link. Cannot be done during failover.

Next steps
For sample scripts, see:
Configure and failover a single database using active geo-replication.
Configure and failover a pooled database using active geo-replication.
SQL Database also supports auto-failover groups. For more information, see using auto-failover groups.
For a business continuity overview and scenarios, see Business continuity overview.
To learn about Azure SQL Database automated backups, see SQL Database automated backups.
To learn about using automated backups for recovery, see Restore a database from the service-initiated
backups.
To learn about authentication requirements for a new primary server and database, see SQL Database
security after disaster recovery.
Auto-failover groups overview & best practices
(Azure SQL Database)
7/12/2022 • 20 minutes to read

APPLIES TO: Azure SQL Database


The auto-failover groups feature allows you to manage the replication and failover of some or all databases on a
logical server to another region. This article focuses on using the Auto-failover group feature with Azure SQL
Database and some best practices.
To get started, review Configure auto-failover group. For an end-to-end experience, see the Auto-failover group
tutorial.

NOTE
This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see Auto-failover
groups in Azure SQL Managed Instance.
Auto-failover groups support geo-replication of all databases in the group to only one secondary server in a different
region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for
the same primary replica, use active geo-replication.

Overview
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a
server or all user databases in a managed instance to another Azure region. It is a declarative abstraction on top
of the active geo-replication feature, designed to simplify deployment and management of geo-replicated
databases at scale.
Automatic failover
You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined
policy. The latter option allows you to automatically recover multiple related databases in a secondary region
after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or
SQL Managed Instance availability in the primary region. Typically, these are outages that cannot be
automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include
natural disasters, or incidents caused by a tenant or control ring being down due to an OS kernel memory leak
on compute nodes. For more information, see Azure SQL high availability.
Offload read-only workloads
To reduce traffic to your primary databases, you can also use the secondary databases in a failover group to
offload read-only workloads. Use the read-only listener to direct read-only traffic to a readable secondary
database.
Endpoint redirection
Auto-failover groups provide read-write and read-only listener endpoints that remain unchanged during geo-
failovers. This means you do not have to change the connection string for your application after a geo-failover,
because connections are automatically routed to the current primary. Whether you use manual or automatic
failover activation, a geo-failover switches all secondary databases in the group to the primary role. After the
geo-failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region.
For geo-failover RPO and RTO, see Overview of Business Continuity.
Recovering an application
To achieve full business continuity, adding regional database redundancy is only part of the solution. Recovering
an application (service) end-to-end after a catastrophic failure requires recovery of all components that
constitute the service and any dependent services. Examples of these components include the client software
(for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that all
components are resilient to the same failures and become available within the recovery time objective (RTO) of
your application. Therefore, you need to identify all dependent services and understand the guarantees and
capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the
failover of the services on which it depends.

Terminology and capabilities


Failover group (FOG)
A failover group is a named group of databases managed by a single server that can fail over as a unit to
another Azure region in case all or some primary databases become unavailable due to an outage in the
primary region.

IMPORTANT
The name of the failover group must be globally unique within the .database.windows.net domain.

Servers
Some or all of the user databases on a logical server can be placed in a failover group. Also, a single
server can support multiple failover groups.
Primary
The server that hosts the primary databases in the failover group.
Secondary
The server that hosts the secondary databases in the failover group. The secondary cannot be in the
same Azure region as the primary.
Adding single databases to failover group
You can put several single databases on the same server into the same failover group. If you add a single
database to the failover group, it automatically creates a secondary database with the same edition and
compute size on the secondary server, which you specified when the failover group was created. If you
add a database that already has a secondary database in the secondary server, that geo-replication link is
inherited by the group. When you add a database that already has a secondary database in a server that
is not part of the failover group, a new secondary is created in the secondary server.

IMPORTANT
Make sure that the secondary server doesn't have a database with the same name unless it is an existing
secondary database.

Adding databases in elastic pool to failover group


You can put all or several databases within an elastic pool into the same failover group. If the primary
database is in an elastic pool, the secondary is automatically created in the elastic pool with the same
name (secondary pool). You must ensure that the secondary server contains an elastic pool with the
same exact name and enough free capacity to host the secondary databases that will be created by the
failover group. If you add a database in the pool that already has a secondary database in the secondary
pool, that geo-replication link is inherited by the group. When you add a database that already has a
secondary database in a server that is not part of the failover group, a new secondary is created in the
secondary pool.
Failover group read-write listener
A DNS CNAME record that points to the current primary. It is created automatically when the failover
group is created and allows the read-write workload to transparently reconnect to the primary when the
primary changes after failover. When the failover group is created on a server, the DNS CNAME record
for the listener URL is formed as <fog-name>.database.windows.net .
Failover group read-only listener
A DNS CNAME record that points to the current secondary. It is created automatically when the failover
group is created and allows the read-only SQL workload to transparently connect to the secondary when
the secondary changes after failover. When the failover group is created on a server, the DNS CNAME
record for the listener URL is formed as <fog-name>.secondary.database.windows.net .
Multiple failover groups
You can configure multiple failover groups for the same pair of servers to control the scope of geo-
failovers. Each group fails over independently. If your tenant-per-database application is deployed in
multiple regions and uses elastic pools, you can use this capability to mix primary and secondary
databases in each pool. This way you may be able to reduce the impact of an outage to only some tenant
databases.
Automatic failover policy
By default, a failover group is configured with an automatic failover policy. The system triggers a geo-
failover after the failure is detected and the grace period has expired. The system must verify that the
outage cannot be mitigated by the built-in high availability infrastructure, for example due to the scale of
the impact. If you want to control the geo-failover workflow from the application or manually, you can
turn off automatic failover policy.

NOTE
Because verification of the scale of the outage and how quickly it can be mitigated involves human actions, the
grace period cannot be set below one hour. This limitation applies to all databases in the failover group regardless
of their data synchronization state.

Read-only failover policy


By default, the failover of the read-only listener is disabled. It ensures that the performance of the
primary is not impacted when the secondary is offline. However, it also means the read-only sessions will
not be able to connect until the secondary is recovered. If you cannot tolerate downtime for the read-only
sessions and can use the primary for both read-only and read-write traffic at the expense of the potential
performance degradation of the primary, you can enable failover for the read-only listener by configuring
the AllowReadOnlyFailoverToPrimary property. In that case, the read-only traffic will be automatically
redirected to the primary if the secondary is not available.
NOTE
The AllowReadOnlyFailoverToPrimary property only has effect if automatic failover policy is enabled and an
automatic geo-failover has been triggered. In that case, if the property is set to True, the new primary will serve
both read-write and read-only sessions.
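
If the Az.Sql module is used, enabling this behavior on an existing group might look like the following sketch.
Names are placeholders, and the -AllowReadOnlyFailoverToPrimary parameter name is assumed from the
property name; verify it against the Set-AzSqlDatabaseFailoverGroup reference.

# Allow the read-only listener to fail over to the primary when the geo-secondary is unavailable
Set-AzSqlDatabaseFailoverGroup -ResourceGroupName "PrimaryRG" -ServerName "primaryserver" `
    -FailoverGroupName "myfog" -AllowReadOnlyFailoverToPrimary Enabled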

Planned failover
Planned failover performs full data synchronization between primary and secondary databases before
the secondary switches to the primary role. This guarantees no data loss. Planned failover is used in the
following scenarios:
Perform disaster recovery (DR) drills in production when data loss is not acceptable
Relocate the databases to a different region
Return the databases to the primary region after the outage has been mitigated (failback)

NOTE
During planned failovers or disaster recovery drills, the primary databases and the target secondary geo-replica
databases should have matching service tiers. If a secondary database has lower memory than the primary
database, you may encounter out-of-memory issues, preventing full recovery after failover. If this happens, the
affected geo-secondary database may be put into a limited read-only mode called checkpoint-only mode . To
avoid this, upgrade the service tier of the secondary database to match the primary database during the planned
failover, or drill. Service tier upgrades can be size-of-data operations, and take a while to finish.

Unplanned failover
Unplanned or forced failover immediately switches the secondary to the primary role without waiting for
recent changes to propagate from the primary. This operation may result in data loss. Unplanned failover
is used as a recovery method during outages when the primary is not accessible. When the outage is
mitigated, the old primary will automatically reconnect and become a new secondary. A planned failover
may be executed to fail back, returning the replicas to their original primary and secondary roles.
Manual failover
You can initiate a geo-failover manually at any time regardless of the automatic failover configuration.
During an outage that impacts the primary, if automatic failover policy is not configured, a manual
failover is required to promote the secondary to the primary role. You can initiate a forced (unplanned) or
friendly (planned) failover. A friendly failover is only possible when the old primary is accessible, and can
be used to relocate the primary to the secondary region without data loss. When a failover is completed,
the DNS records are automatically updated to ensure connectivity to the new primary.
Grace period with data loss
Because the data is replicated to the secondary database using asynchronous replication, an automatic
geo-failover may result in data loss. You can customize the automatic failover policy to reflect your
application’s tolerance to data loss. By configuring GracePeriodWithDataLossHours , you can control how
long the system waits before initiating a forced failover, which may result in data loss.
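
For example, with the Az.Sql module, a sketch of configuring an automatic failover policy with a longer grace
period (names are placeholders) could look like this.

# Automatic failover policy with a 24-hour grace period: favors lower data-loss risk over faster automatic recovery
Set-AzSqlDatabaseFailoverGroup -ResourceGroupName "PrimaryRG" -ServerName "primaryserver" `
    -FailoverGroupName "myfog" -FailoverPolicy Automatic -GracePeriodWithDataLossHours 24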

Failover group architecture


A failover group in Azure SQL Database can include one or multiple databases, typically used by the same
application. When you are using auto-failover groups with automatic failover policy, an outage that impacts one
or several of the databases in the group will result in an automatic geo-failover.
The auto-failover group must be configured on the primary server and will connect it to the secondary server in
a different Azure region. The groups can include all or some databases in these servers. The following diagram
illustrates a typical configuration of a geo-redundant cloud application using multiple databases and auto-
failover group.

When designing a service with business continuity in mind, follow the general guidelines and best practices
outlined in this article. When configuring a failover group, ensure that authentication and network access on the
secondary is set up to function correctly after geo-failover, when the geo-secondary becomes the new primary.
For details, see SQL Database security after disaster recovery. For more information about designing solutions
for disaster recovery, see Designing Cloud Solutions for Disaster Recovery Using active geo-replication.
For information about using point-in-time restore with failover groups, see Point in Time Recovery (PITR).

Initial seeding
When adding databases or elastic pools to a failover group, there is an initial seeding phase before data
replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding
completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the
initial seeding to complete depends on the size of your data, number of replicated databases, the load on
primary databases, and the speed of the link between the primary and secondary. Under normal circumstances,
possible seeding speed is up to 500 GB an hour for SQL Database. Seeding is performed for all databases in
parallel.

Use multiple failover groups to failover multiple databases


One or many failover groups can be created between two servers in different regions (primary and secondary
servers). Each group can include one or several databases that are recovered as a unit in case all or some
primary databases become unavailable due to an outage in the primary region. Creating a failover group
creates geo-secondary databases with the same service objective as the primary. If you add an existing geo-
replication relationship to a failover group, make sure the geo-secondary is configured with the same service
tier and compute size as the primary.

Use the read-write listener (primary)


For read-write workloads, use <fog-name>.database.windows.net as the server name in the connection string.
Connections will be automatically directed to the primary. This name does not change after failover. Note the
failover involves updating the DNS record so the client connections are redirected to the new primary only after
the client DNS cache is refreshed. The time to live (TTL) of the primary and secondary listener DNS record is 30
seconds.
Use the read-only listener (secondary)
If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-
secondary. For read-only sessions, use <fog-name>.secondary.database.windows.net as the server name in the
connection string. Connections will be automatically directed to the geo-secondary. It is also recommended that
you indicate read intent in the connection string by using ApplicationIntent=ReadOnly .
In Premium, Business Critical, and Hyperscale service tiers, SQL Database supports the use of read-only replicas
to offload read-only query workloads, using the ApplicationIntent=ReadOnly parameter in the connection string.
When you have configured a geo-secondary, you can use this capability to connect to either a read-only replica
in the primary location or in the geo-replicated location:
To connect to a read-only replica in the secondary location, use ApplicationIntent=ReadOnly and
<fog-name>.secondary.database.windows.net .
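
For example, assuming a failover group named myfog and a database named mydb (both placeholders),
ADO.NET-style connection strings for the two listeners might look like this.

Read-write listener (connections are routed to the current primary):
Server=tcp:myfog.database.windows.net,1433;Database=mydb;User ID=<user>;Password=<password>;Encrypt=True;

Read-only listener (connections are routed to the current geo-secondary, with read intent declared):
Server=tcp:myfog.secondary.database.windows.net,1433;Database=mydb;User ID=<user>;Password=<password>;ApplicationIntent=ReadOnly;Encrypt=True;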

Potential performance degradation after failover


A typical Azure application uses multiple Azure services and consists of multiple components. The automatic
geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other Azure
services in the primary region may not be affected by the outage and their components may still be available in
that region. Once the primary databases switch to the secondary (DR) region, the latency between the
dependent components may increase. To avoid the impact of higher latency on the application's performance,
ensure the redundancy of all the application's components in the DR region, follow these network security
guidelines, and orchestrate the geo-failover of relevant application components together with the database.

Potential data loss after failover


If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary.
If the automatic failover policy is configured, the system waits for the period you specified by
GracePeriodWithDataLossHours before initiating an automatic geo-failover. The default value is 1 hour. This favors
database availability over no data loss. Setting GracePeriodWithDataLossHours to a larger number, such as 24
hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database
availability.

IMPORTANT
Elastic pools with 800 or fewer DTUs or 8 or fewer vCores, and more than 250 databases may encounter issues including
longer planned geo-failovers and degraded performance. These issues are more likely to occur for write intensive
workloads, when geo-replicas are widely separated by geography, or when multiple secondary geo-replicas are used for
each database. A symptom of these issues is an increase in geo-replication lag over time, potentially leading to a more
extensive data loss in an outage. This lag can be monitored using sys.dm_geo_replication_link_status. If these issues occur,
then mitigation includes scaling up the pool to have more DTUs or vCores, or reducing the number of geo-replicated
databases in the pool.

Failover groups and network security


For some applications the security rules require that the network access to the data tier is restricted to a specific
component or components such as a VM, web service, etc. This requirement presents some challenges for
business continuity design and the use of failover groups. Consider the following options when implementing
such restricted access.
Use failover groups and virtual network service endpoints
If you are using Virtual Network service endpoints and rules to restrict access to your database in SQL Database,
be aware that each virtual network service endpoint applies to only one Azure region. The endpoint does not
enable other regions to accept communication from the subnet. Therefore, only the client applications deployed
in the same region can connect to the primary database. Since a geo-failover results in the SQL Database client
sessions being rerouted to a server in a different (secondary) region, these sessions will fail if originated from a
client outside of that region. For that reason, the automatic failover policy cannot be enabled if the participating
servers or instances are included in the Virtual Network rules. To support manual failover, follow these steps:
1. Provision the redundant copies of the front-end components of your application (web service, virtual
machines etc.) in the secondary region.
2. Configure the virtual network rules individually for primary and secondary server.
3. Enable front-end failover using a Traffic Manager configuration.
4. Initiate manual geo-failover when the outage is detected.
This option is optimized for applications that require consistent latency between the front end and the data
tier, and supports recovery when either the front end, the data tier, or both are impacted by the outage.

NOTE
If you are using the read-only listener to load-balance a read-only workload, make sure that this workload is executed
in a VM or other resource in the secondary region so it can connect to the secondary database.

Use failover groups and firewall rules


If your business continuity plan requires failover using groups with automatic failover, you can restrict access to
your database in SQL Database by using public IP firewall rules. To support automatic failover, follow these
steps:
1. Create a public IP.
2. Create a public load balancer and assign the public IP to it.
3. Create a virtual network and the virtual machines for your front-end components.
4. Create network security group and configure inbound connections.
5. Ensure that the outbound connections are open to Azure SQL Database in a region by using an Sql.<Region>
service tag.
6. Create a SQL Database firewall rule to allow inbound traffic from the public IP address you create in step 1 (a sample rule is sketched below).
For more information on how to configure outbound access and what IP to use in the firewall rules, see Load
balancer outbound connections.
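
For step 6, a sketch using the Az.Sql module might look like the following, assuming placeholder names and
using 203.0.113.10 as the load balancer's outbound public IP; create the same rule on both the primary and
secondary servers so connections keep working after a geo-failover.

# Allow inbound traffic from the front end's outbound public IP on the primary server
New-AzSqlServerFirewallRule -ResourceGroupName "PrimaryRG" -ServerName "primaryserver" `
    -FirewallRuleName "AllowFrontEndOutboundIP" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"

# Repeat on the secondary server so the rule is already in place before an automatic geo-failover
New-AzSqlServerFirewallRule -ResourceGroupName "SecondaryRG" -ServerName "secondaryserver" `
    -FirewallRuleName "AllowFrontEndOutboundIP" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"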
The above configuration will ensure that an automatic geo-failover will not block connections from the front-
end components and assumes that the application can tolerate the longer latency between the front end and the
data tier.

IMPORTANT
To guarantee business continuity during regional outages you must ensure geographic redundancy for both front-end
components and databases.

Scale primary database


You can scale up or scale down the primary database to a different compute size (within the same service tier)
without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-
secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary
first, and then scale down the secondary. When you scale a database to a different service tier, this
recommendation is enforced.
This sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets
overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem
by making the primary read-only, at the expense of impacting all read-write workloads against the primary.

NOTE
If you created a geo-secondary as part of the failover group configuration it is not recommended to scale down the geo-
secondary. This is to ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.

Prevent loss of critical data


Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism.
Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical
transactions from data loss, an application developer can call the sp_wait_for_database_copy_sync stored
procedure immediately after committing the transaction. Calling sp_wait_for_database_copy_sync blocks the
calling thread until the last committed transaction has been transmitted and hardened in the transaction log of
the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on
the secondary. sp_wait_for_database_copy_sync is scoped to a specific geo-replication link. Any user with the
connection rights to the primary database can call this procedure.

NOTE
sp_wait_for_database_copy_sync prevents data loss after geo-failover for specific transactions, but does not guarantee
full synchronization for read access. The delay caused by a sp_wait_for_database_copy_sync procedure call can be
significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.

Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Server Contributor role
has all the necessary permissions to manage failover groups.
For specific permission scopes, review how to configure auto-failover groups in Azure SQL Database.

Limitations
Be aware of the following limitations:
Failover groups cannot be created between two servers in the same Azure region.
Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name.
Database rename is not supported for databases in a failover group. You will need to temporarily delete the
failover group to be able to rename a database, or remove the database from the failover group.

Programmatically manage failover groups


As discussed previously, auto-failover groups can also be managed programmatically using Azure PowerShell,
Azure CLI, and REST API. The following tables describe the set of commands available. Active geo-replication
includes a set of Azure Resource Manager APIs for management, including the Azure SQL Database REST API
and Azure PowerShell cmdlets. These APIs require the use of resource groups and support Azure role-based
access control (Azure RBAC). For more information on how to implement access roles, see Azure role-based
access control (Azure RBAC).
PowerShell

CMDLET | DESCRIPTION

New-AzSqlDatabaseFailoverGroup | Creates a failover group and registers it on both primary and secondary servers.

Remove-AzSqlDatabaseFailoverGroup | Removes a failover group from the server.

Get-AzSqlDatabaseFailoverGroup | Retrieves a failover group's configuration.

Set-AzSqlDatabaseFailoverGroup | Modifies the configuration of a failover group.

Switch-AzSqlDatabaseFailoverGroup | Triggers failover of a failover group to the secondary server.

Add-AzSqlDatabaseToFailoverGroup | Adds one or more databases to a failover group.
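
As a minimal sketch (all names are placeholders), creating a failover group, adding a database, and later failing
over might look like this.

# Create the failover group between the primary and secondary servers
New-AzSqlDatabaseFailoverGroup -ResourceGroupName "PrimaryRG" -ServerName "primaryserver" `
    -PartnerResourceGroupName "SecondaryRG" -PartnerServerName "secondaryserver" `
    -FailoverGroupName "myfog" -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1

# Add an existing database on the primary server to the group (a geo-secondary is created automatically)
$db = Get-AzSqlDatabase -ResourceGroupName "PrimaryRG" -ServerName "primaryserver" -DatabaseName "mydb"
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "PrimaryRG" -ServerName "primaryserver" `
    -FailoverGroupName "myfog" -Database $db

# Run against the secondary server to make it the new primary (planned failover)
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName "SecondaryRG" -ServerName "secondaryserver" `
    -FailoverGroupName "myfog"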

Next steps
For detailed tutorials, see
Add SQL Database to a failover group
Add an elastic pool to a failover group
For sample scripts, see:
Use PowerShell to configure active geo-replication for Azure SQL Database
Use PowerShell to configure active geo-replication for a pooled database in Azure SQL Database
Use PowerShell to add an Azure SQL Database to a failover group
For a business continuity overview and scenarios, see Business continuity overview
To learn about Azure SQL Database automated backups, see SQL Database automated backups.
To learn about using automated backups for recovery, see Restore a database from the service-initiated
backups.
To learn about authentication requirements for a new primary server and database, see SQL Database
security after disaster recovery.
Restore your Azure SQL Database or failover to a
secondary
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


Azure SQL Database offers the following capabilities for recovering from an outage:
Active geo-replication
Auto-failover groups
Geo-restore
Zone-redundant databases
To learn about business continuity scenarios and the features supporting these scenarios, see Business
continuity.

NOTE
If you are using zone-redundant Premium or Business Critical databases or pools, the recovery process is automated and
the rest of this material does not apply.
Both primary and secondary databases are required to have the same service tier. It is also strongly recommended that
the secondary database is created with the same compute size (DTUs or vCores) as the primary. For more information,
see Upgrading or downgrading as primary database.
Use one or several failover groups to manage failover of multiple databases. If you add an existing geo-replication
relationship to the failover group, make sure the geo-secondary is configured with the same service tier and compute size
as the primary. For more information, see Use auto-failover groups to enable transparent and coordinated failover of
multiple databases.

Prepare for the event of an outage


For success with recovery to another data region using either failover groups or geo-redundant backups, you
need to prepare a server in another region to become the new primary server should the need arise, as well as
have well-defined steps documented and tested to ensure a smooth recovery. These preparation steps
include:
Identify the server in another region to become the new primary server. For geo-restore, this is generally a
server in the paired region for the region in which your database is located. This eliminates the additional
traffic cost during the geo-restoring operations.
Identify, and optionally define, the server-level IP firewall rules needed for users to access the new
primary database.
Determine how you are going to redirect users to the new primary server, such as by changing connection
strings or by changing DNS entries.
Identify, and optionally create, the logins that must be present in the master database on the new primary
server, and ensure these logins have appropriate permissions in the master database, if any. For more
information, see SQL Database security after disaster recovery
Identify alert rules that need to be updated to map to the new primary database.
Document the auditing configuration on the current primary database
Perform a disaster recovery drill. To simulate an outage for geo-restore, you can delete or rename the source
database to cause application connectivity failure. To simulate an outage using failover groups, you can
disable the web application or virtual machine connected to the database or failover the database to cause
application connectivity failures.

When to initiate recovery


The recovery operation impacts the application. It requires changing the SQL connection string or redirection
using DNS and could result in permanent data loss. Therefore, it should be done only when the outage is likely
to last longer than your application's recovery time objective. When the application is deployed to production
you should perform regular monitoring of the application health and use the following data points to assert that
the recovery is warranted:
1. Permanent connectivity failure from the application tier to the database.
2. The Azure portal shows an alert about an incident in the region with broad impact.

NOTE
If you are using failover groups and chose automatic failover, the recovery process is automated and transparent to the
application.

Depending on your application tolerance to downtime and possible business liability you can consider the
following recovery options.
Use the Get Recoverable Database (LastAvailableBackupDate) to get the latest Geo-replicated restore point.

Wait for service recovery


The Azure teams work diligently to restore service availability as quickly as possible, but depending on the root
cause, it can take hours or days. If your application can tolerate significant downtime, you can simply wait for the
recovery to complete. In this case, no action on your part is required. You can see the current service status on
our Azure Service Health Dashboard. After the recovery of the region, your application's availability is restored.

Fail over to geo-replicated secondary server in the failover group


If your application's downtime can result in business liability, you should be using failover groups. They enable
the application to quickly restore availability in a different region in case of an outage. For a tutorial, see
Implement a geo-distributed database.
To restore availability of the database(s), you need to initiate a failover to the secondary server by using one of
the supported methods.
Use one of the following guides to fail over to a geo-replicated secondary database:
Fail over to a geo-replicated secondary server using the Azure portal
Fail over to the secondary server using PowerShell
Fail over to a secondary server using Transact-SQL (T-SQL)
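As a minimal PowerShell sketch of the failover (Az.Sql module; resource names are hypothetical), run the switch against the current secondary server, which then becomes the new primary:

# Sketch only: fail over a failover group to the secondary server (hypothetical names).
# Omit -AllowDataLoss for a friendly (planned) failover; during a real outage a forced
# failover with -AllowDataLoss may be required and can lose data.
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName "myResourceGroup" -ServerName "mysecondary-server" -FailoverGroupName "myfailovergroup"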

Recover using geo-restore


If your application's downtime does not result in business liability, you can use geo-restore as a method to
recover your application database(s). It creates a copy of the database from its latest geo-redundant backup.
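A minimal PowerShell sketch of a geo-restore (Az.Sql module; resource names, edition, and service objective are hypothetical placeholders) looks like the following:

# Sketch only: geo-restore a database to another server from its latest geo-redundant backup (hypothetical names).
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mydb"
Restore-AzSqlDatabase -FromGeoBackup -ResourceGroupName "myDrResourceGroup" -ServerName "mydr-server" -TargetDatabaseName "mydb" -ResourceId $geoBackup.ResourceId -Edition "Standard" -ServiceObjectiveName "S2"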

Configure your database after recovery


If you are using geo-restore to recover from an outage, you must make sure that the connectivity to the new
databases is properly configured so that the normal application function can be resumed. This is a checklist of
tasks to get your recovered database production ready.
Update connection strings
Because your recovered database resides in a different server, you need to update your application's connection
string to point to that server.
For more information about changing connection strings, see the appropriate development language for your
connection library.
Configure firewall rules
You need to make sure that the firewall rules configured on the server and on the database match those that were
configured on the primary server and primary database. For more information, see How to: Configure Firewall
Settings (Azure SQL Database).
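If the original server is still reachable (for example, during a disaster recovery drill), the following PowerShell sketch (Az.Sql module; resource names are hypothetical) copies its server-level firewall rules to the recovered server:

# Sketch only: copy server-level firewall rules from the original server to the recovered server (hypothetical names).
Get-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" -ServerName "myserver" |
    ForEach-Object {
        New-AzSqlServerFirewallRule -ResourceGroupName "myDrResourceGroup" -ServerName "mydr-server" `
            -FirewallRuleName $_.FirewallRuleName -StartIpAddress $_.StartIpAddress -EndIpAddress $_.EndIpAddress
    }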
Configure logins and database users
You need to make sure that all the logins used by your application exist on the server that is hosting your
recovered database. For more information, see Security Configuration for geo-replication.
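As a sketch (SqlServer PowerShell module with SQL authentication; the server, login, and user names are hypothetical), you might recreate a missing server login and the corresponding database user on the recovered server:

# Sketch only: recreate an application login and user on the recovered server (hypothetical names).
# If the user already exists in the restored database, skip CREATE USER, or recreate the login
# with its original SID as described in the linked security article.
Invoke-Sqlcmd -ServerInstance "mydr-server.database.windows.net" -Database "master" -Username "serveradmin" -Password "<admin password>" -Query "CREATE LOGIN [appLogin] WITH PASSWORD = '<strong password here>';"
Invoke-Sqlcmd -ServerInstance "mydr-server.database.windows.net" -Database "mydb" -Username "serveradmin" -Password "<admin password>" -Query "CREATE USER [appUser] FOR LOGIN [appLogin]; ALTER ROLE db_datareader ADD MEMBER [appUser];"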

NOTE
You should configure and test your server firewall rules and logins (and their permissions) during a disaster recovery drill.
These server-level objects and their configuration may not be available during the outage.

Set up telemetry alerts


You need to make sure your existing alert rule settings are updated to map to the recovered database and the
different server.
For more information about database alert rules, see Receive Alert Notifications and Track Service Health.
Enable auditing
If auditing is required to access your database, you need to enable Auditing after the database recovery. For
more information, see Database auditing.
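A minimal PowerShell sketch (Az.Sql module; resource names and the storage account ID are hypothetical placeholders) that re-enables blob-storage auditing on the recovered database might look like this:

# Sketch only: re-enable auditing on the recovered database (hypothetical names).
Set-AzSqlDatabaseAudit -ResourceGroupName "myDrResourceGroup" -ServerName "mydr-server" -DatabaseName "mydb" `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId "/subscriptions/<subscription-id>/resourceGroups/myDrResourceGroup/providers/Microsoft.Storage/storageAccounts/myauditstorage"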

Next steps
To learn about Azure SQL Database automated backups, see SQL Database automated backups
To learn about business continuity design and recovery scenarios, see Continuity scenarios
To learn about using automated backups for recovery, see restore a database from the service-initiated
backups
Performing disaster recovery drills
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


We recommend that you periodically validate your application's readiness for the recovery workflow. Verifying
application behavior and the implications of data loss and/or the disruption that failover involves is good
engineering practice. It is also required by most industry standards as part of business continuity
certification.
Performing a disaster recovery drill consists of:
Simulating a data-tier outage
Recovering
Validating application integrity post-recovery
Depending on how you designed your application for business continuity, the workflow to execute the drill can
vary. This article describes the best practices for conducting a disaster recovery drill in the context of Azure SQL
Database.

Geo-restore
To prevent potential data loss when conducting a disaster recovery drill, perform the drill in a test
environment by creating a copy of the production environment and using it to verify the application's failover
workflow.
Outage simulation
To simulate the outage, you can rename the source database. This name change causes application connectivity
failures.
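As a sketch of the rename (SqlServer PowerShell module with SQL authentication; all names are hypothetical), run the T-SQL rename against the master database of the logical server, and rename the database back after the drill:

# Sketch only: simulate an outage by renaming the source database (hypothetical names).
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "master" -Username "serveradmin" -Password "<admin password>" `
    -Query "ALTER DATABASE [mydb] MODIFY NAME = [mydb_drill];"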
Recovery
Perform the geo-restore of the database into a different server as described here.
Change the application configuration to connect to the recovered database and follow the Configure a
database after recovery guide to complete the recovery.
Validation
Complete the drill by verifying the application integrity post recovery (including connection strings, logins, basic
functionality testing, or other validations part of standard application signoffs procedures).

Failover groups
For a database that is protected using failover groups, the drill exercise involves planned failover to the
secondary server. The planned failover ensures that the primary and the secondary databases in the failover
group remain in sync when the roles are switched. Unlike the unplanned failover, this operation does not result
in data loss, so the drill can be performed in the production environment.
Outage simulation
To simulate the outage, you can disable the web application or virtual machine connected to the database. This
outage simulation results in the connectivity failures for the web clients.
Recovery
Make sure the application configuration in the DR region points to the former secondary, which becomes the
fully accessible new primary.
Initiate planned failover of the failover group from the secondary server.
Follow the Configure a database after recovery guide to complete the recovery.
Validation
Complete the drill by verifying the application integrity post recovery (including connectivity, basic functionality
testing, or other validations required for the drill signoffs).

Next steps
To learn about business continuity scenarios, see Continuity scenarios.
To learn about Azure SQL Database automated backups, see SQL Database automated backups
To learn about using automated backups for recovery, see restore a database from the service-initiated
backups.
To learn about faster recovery options, see Active geo-replication and Auto-failover groups.
What is SQL Data Sync for Azure?
7/12/2022 • 14 minutes to read • Edit Online

SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-
directionally across multiple databases, both on-premises and in the cloud.

IMPORTANT
Azure SQL Data Sync does not support Azure SQL Managed Instance at this time.

Overview
Data Sync is based around the concept of a sync group. A sync group is a group of databases that you want to
synchronize.
Data Sync uses a hub and spoke topology to synchronize data. You define one of the databases in the sync
group as the hub database. The rest of the databases are member databases. Sync occurs only between the hub
and individual members.
The Hub Database must be an Azure SQL Database.
The member databases can be either databases in Azure SQL Database or in instances of SQL Server.
The Sync Metadata Database contains the metadata and log for Data Sync. The Sync Metadata Database
has to be an Azure SQL Database located in the same region as the Hub Database. The Sync Metadata
Database is customer created and customer owned. You can only have one Sync Metadata Database per
region and subscription. The Sync Metadata Database cannot be deleted or renamed while sync groups or sync
agents exist. Microsoft recommends creating a new, empty database for use as the Sync Metadata Database.
Data Sync creates tables in this database and runs a frequent workload.

NOTE
If you're using an on-premises database as a member database, you have to install and configure a local sync agent.

A sync group has the following properties:


The Sync Schema describes which data is being synchronized.
The Sync Direction can be bi-directional or can flow in only one direction. That is, the Sync Direction can be
Hub to Member, or Member to Hub, or both.
The Sync Interval describes how often synchronization occurs.
The Conflict Resolution Policy is a group-level policy, which can be Hub wins or Member wins.

When to use
Data Sync is useful in cases where data needs to be kept updated across several databases in Azure SQL
Database or SQL Server. Here are the main use cases for Data Sync:
Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your databases
in SQL Server and Azure SQL Database to enable hybrid applications. This capability may appeal to
customers who are considering moving to the cloud and would like to put some of their application in Azure.
Distributed Applications: In many cases, it's beneficial to separate different workloads across different
databases. For example, if you have a large production database, but you also need to run a reporting or
analytics workload on this data, it's helpful to have a second database for this additional workload. This
approach minimizes the performance impact on your production workload. You can use Data Sync to keep
these two databases synchronized.
Globally Distributed Applications: Many businesses span several regions and even several
countries/regions. To minimize network latency, it's best to have your data in a region close to you. With Data
Sync, you can easily keep databases in regions around the world synchronized.
Data Sync isn't the preferred solution for the following scenarios:

SCENARIO: SOME RECOMMENDED SOLUTIONS
Disaster Recovery: Azure geo-redundant backups
Read Scale: Use read-only replicas to load balance read-only query workloads
ETL (OLTP to OLAP): Azure Data Factory or SQL Server Integration Services
Migration from SQL Server to Azure SQL Database: Azure Database Migration Service. However, SQL Data Sync can be used after the migration is completed, to ensure that the source and target are kept in sync.

How it works
Tracking data changes: Data Sync tracks changes using insert, update, and delete triggers. The changes are
recorded in a side table in the user database. Note that BULK INSERT doesn't fire triggers by default. If
FIRE_TRIGGERS isn't specified, no insert triggers execute. Add the FIRE_TRIGGERS option so that Data Sync can
track those inserts (see the sketch after this list).
Synchronizing data: Data Sync is designed in a hub and spoke model. The hub syncs with each member
individually. Changes from the hub are downloaded to the member and then changes from the member are
uploaded to the hub.
Resolving conflicts: Data Sync provides two options for conflict resolution, Hub wins or Member wins.
If you select Hub wins, the changes in the hub always overwrite changes in the member.
If you select Member wins, the changes in the member overwrite changes in the hub. If there's more
than one member, the final value depends on which member syncs first.
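For example, on a SQL Server member database, a bulk load might include the FIRE_TRIGGERS option so that the Data Sync change-tracking triggers run. The following is a sketch only, with hypothetical table and file names, executed here through the SqlServer PowerShell module:

# Sketch only: bulk load into a synced table on a SQL Server member so that Data Sync tracks the inserts (hypothetical names).
Invoke-Sqlcmd -ServerInstance "onprem-sql01" -Database "memberdb" -Query @"
BULK INSERT dbo.Orders
FROM 'C:\loads\orders.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRE_TRIGGERS);
"@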

Compare with Transactional Replication


Data Sync
Advantages: Active-active support; bi-directional between on-premises and Azure SQL Database.
Disadvantages: No transactional consistency; higher performance impact.
Transactional Replication
Advantages: Lower latency; transactional consistency; reuse existing topology after migration; Azure SQL Managed Instance support.
Disadvantages: Can't publish from Azure SQL Database; high maintenance cost.

Private link for Data Sync


NOTE
The SQL Data Sync private link is different from the Azure Private Link.

The new private link feature allows you to choose a service managed private endpoint to establish a secure
connection between the sync service and your member/hub databases during the data synchronization process.
A service managed private endpoint is a private IP address within a specific virtual network and subnet. Within
Data Sync, the service managed private endpoint is created by Microsoft and is exclusively used by the Data
Sync service for a given sync operation. Before setting up the private link, read the general requirements for the
feature.

NOTE
You must manually approve the service managed private endpoint in the Private endpoint connections page of the
Azure portal during the sync group deployment or by using PowerShell.

Get started
Set up Data Sync in the Azure portal
Set up Azure SQL Data Sync
Data Sync Agent - Data Sync Agent for Azure SQL Data Sync
Set up Data Sync with PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a databases in a SQL Server instance
Set up Data Sync with REST API
Use REST API to sync between multiple databases in Azure SQL Database
Review the best practices for Data Sync
Best practices for Azure SQL Data Sync
Did something go wrong
Troubleshoot issues with Azure SQL Data Sync

Consistency and performance


Eventual consistency
Since Data Sync is trigger-based, transactional consistency isn't guaranteed. Microsoft guarantees that all
changes are made eventually and that Data Sync doesn't cause data loss.
Performance impact
Data Sync uses insert, update, and delete triggers to track changes. It creates side tables in the user database for
change tracking. These change tracking activities have an impact on your database workload. Assess your
service tier and upgrade if needed.
Provisioning and deprovisioning during sync group creation, update, and deletion may also impact the database
performance.

Requirements and limitations


General requirements
Each table must have a primary key. Don't change the value of the primary key in any row. If you have to
change a primary key value, delete the row and recreate it with the new primary key value.

IMPORTANT
Changing the value of an existing primary key will result in the following faulty behavior:
Data between hub and member can be lost even though sync does not report any issue.
Sync can fail because the tracking table contains a row that no longer exists in the source due to the primary key change.

Snapshot isolation must be enabled for both sync members and the hub (a sketch for a SQL Server member
follows this list). For more info, see Snapshot Isolation in SQL Server.
In order to use Data Sync private link, both the member and hub databases must be hosted in Azure
(same or different regions), in the same cloud type (e.g. both in public cloud or both in government
cloud). Additionally, to use private link, the Microsoft.Network resource provider must be registered for the
subscriptions that host the hub and member servers. Lastly, you must manually approve the private link
for Data Sync during the sync configuration, within the “Private endpoint connections” section in the
Azure portal or through PowerShell. For more details on how to approve the private link, see Set up SQL
Data Sync. Once you approve the service managed private endpoint, all communication between the sync
service and the member/hub databases will happen over the private link. Existing sync groups can be
updated to have this feature enabled.
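For a SQL Server member database, a sketch of enabling snapshot isolation (hypothetical names, executed through the SqlServer PowerShell module) looks like this; in Azure SQL Database the setting is on by default for new databases:

# Sketch only: enable snapshot isolation on a SQL Server member database (hypothetical names).
Invoke-Sqlcmd -ServerInstance "onprem-sql01" -Database "master" -Query "ALTER DATABASE [memberdb] SET ALLOW_SNAPSHOT_ISOLATION ON;"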
General limitations
A table can't have an identity column that isn't the primary key.
A primary key can't have the following data types: sql_variant, binary, varbinary, image, xml.
Be cautious when you use the following data types as a primary key, because the supported precision is only
to the second: time, datetime, datetime2, datetimeoffset.
The names of objects (databases, tables, and columns) can't contain the printable characters period (.), left
square bracket ([), or right square bracket (]).
A table name can't contain printable characters: ! " # $ % ' ( ) * + - space
Azure Active Directory authentication isn't supported.
If there are tables with the same name but different schema (for example, dbo.customers and
sales.customers) only one of the tables can be added into sync.
Columns with user-defined data types aren't supported.
Moving servers between different subscriptions isn't supported.
Primary key values that differ only in case (for example, Foo and foo) aren't supported.
Truncating tables isn't supported by Data Sync (changes won't be tracked).
Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale
database can be a member database in a Data Sync topology.
Memory-optimized tables are not supported.
Unsupported data types
FileStream
SQL/CLR UDT
XMLSchemaCollection (XML supported)
Cursor, RowVersion, Timestamp, Hierarchyid
Unsupported column types
Data Sync can't sync read-only or system-generated columns. For example:
Computed columns.
System-generated columns for temporal tables.
Limitations on service and database dimensions

DIMENSION: LIMIT (WORKAROUND)
Maximum number of sync groups any database can belong to: 5
Maximum number of endpoints in a single sync group: 30
Maximum number of on-premises endpoints in a single sync group: 5 (workaround: create multiple sync groups)
Database, table, schema, and column names: 50 characters per name
Tables in a sync group: 500 (workaround: create multiple sync groups)
Columns in a table in a sync group: 1,000
Data row size on a table: 24 MB


NOTE
There may be up to 30 endpoints in a single sync group if there is only one sync group. If there is more than one sync
group, the total number of endpoints across all sync groups cannot exceed 30. If a database belongs to multiple sync
groups, it is counted as multiple endpoints, not one.

Network requirements

NOTE
If you use Sync private link, these network requirements do not apply.

When the sync group is established, the Data Sync service needs to connect to the hub database. At the time
when you establish the sync group, the Azure SQL server must have the following configuration in its
Firewalls and virtual networks settings:

Deny public network access must be set to Off.
Allow Azure services and resources to access this server must be set to Yes, or you must create IP rules for
the IP addresses used by the Data Sync service.
Once the sync group is created and provisioned, you can then disable these settings. The sync agent will connect
directly to the hub database, and you can use the server's firewall IP rules or private endpoints to allow the
agent to access the hub server.
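A PowerShell sketch of these settings (Az.Sql module; resource names are hypothetical, and the -PublicNetworkAccess and -AllowAllAzureIPs parameters assume a recent Az.Sql version) might look like this:

# Sketch only: open the hub server so the Data Sync service can provision the sync group (hypothetical names).
Set-AzSqlServer -ResourceGroupName "myResourceGroup" -ServerName "myhub-server" -PublicNetworkAccess "Enabled"
# Allow Azure services and resources to access this server (creates the 0.0.0.0 firewall rule).
New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" -ServerName "myhub-server" -AllowAllAzureIPs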

NOTE
If you change the sync group's schema settings, you will need to allow the Data Sync service to access the server again so
that the hub database can be re-provisioned.

Region data residency


If you synchronize data within the same region, SQL Data Sync doesn't store or process customer data outside the
region in which the service instance is deployed. If you synchronize data across different regions, SQL Data Sync
will replicate customer data to the paired regions.

FAQ about SQL Data Sync


How much does the SQL Data Sync service cost?
There's no charge for the SQL Data Sync service itself. However, you still incur data transfer charges for data
movement in and out of your SQL Database instance. For more information, see data transfer charges.
What regions support Data Sync
SQL Data Sync is available in all regions.
Is a SQL Database account required
Yes. You must have a SQL Database account to host the hub database.
Can I use Data Sync to sync between SQL Server databases only
Not directly. You can sync between SQL Server databases indirectly, however, by creating a Hub database in
Azure, and then adding the on-premises databases to the sync group.
Can I configure Data Sync to sync between databases in Azure SQL Database that belong to different
subscriptions
Yes. You can configure sync between databases that belong to resource groups owned by different subscriptions,
even if the subscriptions belong to different tenants.
If the subscriptions belong to the same tenant and you have permission to all subscriptions, you can
configure the sync group in the Azure portal.
Otherwise, you have to use PowerShell to add the sync members.
Can I set up Data Sync to sync between databases in SQL Database that belong to different clouds (like Azure
Public Cloud and Azure China 21Vianet)?
Yes. You can set up sync between databases that belong to different clouds. You have to use PowerShell to add
the sync members that belong to the different subscriptions.
Can I use Data Sync to seed data from my production database to an empty database, and then sync them
Yes. Create the schema manually in the new database by scripting it from the original. After you create the
schema, add the tables to a sync group to copy the data and keep it synced.
Should I use SQL Data Sync to back up and restore my databases
It isn't recommended to use SQL Data Sync to create a backup of your data. You can't back up and restore to a
specific point in time because SQL Data Sync synchronizations aren't versioned. Furthermore, SQL Data Sync
doesn't back up other SQL objects, such as stored procedures, and doesn't do the equivalent of a restore
operation quickly.
For one recommended backup technique, see Copy a database in Azure SQL Database.
Can Data Sync sync encrypted tables and columns
If a database uses Always Encrypted, you can sync only the tables and columns that are not encrypted. You
can't sync the encrypted columns, because Data Sync can't decrypt the data.
If a column uses Column-Level Encryption (CLE), you can sync the column, as long as the row size is less than
the maximum size of 24 Mb. Data Sync treats the column encrypted by key (CLE) as normal binary data. To
decrypt the data on other sync members, you need to have the same certificate.
Is collation supported in SQL Data Sync
Yes. SQL Data Sync supports collation in the following scenarios:
If the selected sync schema tables aren't already in your hub or member databases, then when you deploy
the sync group, the service automatically creates the corresponding tables and columns with the collation
settings selected in the empty destination databases.
If the tables to be synced already exist in both your hub and member databases, SQL Data Sync requires that
the primary key columns have the same collation between hub and member databases to successfully
deploy the sync group. There are no collation restrictions on columns other than the primary key columns.
Is federation supported in SQL Data Sync
Federation Root Database can be used in the SQL Data Sync Service without any limitation. You can't add the
Federated Database endpoint to the current version of SQL Data Sync.
Can I use Data Sync to sync data exported from Dynamics 365 using bring your own database (BYOD)
feature?
The Dynamics 365 bring your own database feature lets administrators export data entities from the application
into their own Microsoft Azure SQL database. Data Sync can be used to sync this data into other databases if
data is exported using incremental push (full push is not supported) and enable triggers in target
database is set to yes.
How do I create Data Sync in a failover group to support disaster recovery?
To ensure that data sync operations in the failover region are on par with the primary region, after a failover you
have to manually re-create the sync group in the failover region with the same settings as the primary region.
Next steps
Update the schema of a synced database
Do you have to update the schema of a database in a sync group? Schema changes aren't automatically
replicated. For some solutions, see the following articles:
Automate the replication of schema changes with SQL Data Sync in Azure
Use PowerShell to update the sync schema in an existing sync group
Monitor and troubleshoot
Is SQL Data Sync doing as expected? To monitor activity and troubleshoot issues, see the following articles:
Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot issues with Azure SQL Data Sync
Learn more about Azure SQL Database
For more info about Azure SQL Database, see the following articles:
SQL Database Overview
Database Lifecycle Management
Data Sync Agent for SQL Data Sync
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Sync data with SQL Server databases by installing and configuring the Data Sync Agent for SQL Data Sync in
Azure. For more info about SQL Data Sync, see Sync data across multiple cloud and on-premises databases with
SQL Data Sync.

IMPORTANT
SQL Data Sync does not support Azure SQL Managed Instance at this time.

Download and install


To download the Data Sync Agent, go to SQL Data Sync Agent. To upgrade the Data Sync Agent, install the new agent
in the same location as the old agent; it will overwrite the original one.
Install silently
To install the Data Sync Agent silently from the command prompt, enter a command similar to the following
example. Check the file name of the downloaded .msi file, and provide your own values for the TARGETDIR and
SERVICEACCOUNT arguments.
If you don't provide a value for TARGETDIR, the default value is C:\Program Files (x86)\Microsoft SQL Data Sync 2.0.

If you provide LocalSystem as the value of SERVICEACCOUNT , use SQL Server authentication when
you configure the agent to connect to SQL Server.
If you provide a domain user account or a local user account as the value of SERVICEACCOUNT , you
also have to provide the password with the SERVICEPASSWORD argument. For example,
SERVICEACCOUNT="<domain>\<user>" SERVICEPASSWORD="<password>" .

msiexec /i "SQLDataSyncAgent-2.0-x86-ENU.msi" TARGETDIR="C:\Program Files (x86)\Microsoft SQL Data Sync 2.0"


SERVICEACCOUNT="LocalSystem" /qn

Sync data with a SQL Server database


To configure the Data Sync Agent so you can sync data with one or more SQL Server databases, see Add a SQL
Server database.

Data Sync Agent FAQ


Why do I need a client agent
The SQL Data Sync service communicates with SQL Server databases via the client agent. This security feature
prevents direct communication with databases behind a firewall. When the SQL Data Sync service
communicates with the agent, it does so using encrypted connections and a unique token or agent key. The SQL
Server databases authenticate the agent using the connection string and agent key. This design provides a high
level of security for your data.
How many instances of the local agent UI can be run
Only one instance of the UI can be run.
How can I change my service account
After you install a client agent, the only way to change the service account is to uninstall it and install a new
client agent with the new service account.
How do I change my agent key
An agent key can only be used once by an agent. It cannot be reused when you remove then reinstall a new
agent, nor can it be used by multiple agents. If you need to create a new key for an existing agent, you must be
sure that the same key is recorded with the client agent and with the SQL Data Sync service.
How do I retire a client agent
To immediately invalidate or retire an agent, regenerate its key in the portal but do not submit it in the Agent UI.
Regenerating a key invalidates the previous key irrespective of whether the corresponding agent is online or offline.
How do I move a client agent to another computer
If you want to run the local agent from a different computer than it is currently on, do the following things:
1. Install the agent on the desired computer.
2. Log in to the SQL Data Sync portal and regenerate an agent key for the new agent.
3. Use the new agent's UI to submit the new agent key.
4. Wait while the client agent downloads the list of on-premises databases that were registered earlier.
5. Provide database credentials for all databases that display as unreachable. These databases must be
reachable from the new computer on which the agent is installed.
How do I delete the Sync metadata database if the Sync agent is still associated with it
In order to delete a Sync metadata database that has a Sync agent associated with it, you must first delete the
Sync agent. To delete the agent, do the following things:
1. Select the Sync database.
2. Go to the Sync to other databases page.
3. Select the Sync agent and click on Delete .

Troubleshoot Data Sync Agent issues


The client agent install, uninstall, or repair fails
The client agent doesn't work after I cancel the uninstall
My database isn't listed in the agent list
Client agent doesn't start (Error 1069)
I can't submit the agent key
The client agent can't be deleted from the portal if its associated on-premises database is unreachable
Local Sync Agent app can't connect to the local sync service
The client agent install, uninstall, or repair fails
Cause . Many scenarios might cause this failure. To determine the specific cause for this failure, look at
the logs.
Resolution . To find the specific cause of the failure, generate and look at the Windows Installer logs. You
can turn on logging at a command prompt. For example, if the downloaded installation file is
SQLDataSyncAgent-2.0-x86-ENU.msi , generate and examine log files by using the following command lines:
For installs: msiexec.exe /i SQLDataSyncAgent-2.0-x86-ENU.msi /l*v LocalAgentSetup.Log

For uninstalls: msiexec.exe /x SQLDataSyncAgent-2.0-x86-ENU.msi /l*v LocalAgentSetup.Log

You can also turn on logging for all installations that are performed by Windows Installer. The
Microsoft Knowledge Base article How to enable Windows Installer logging provides a one-click
solution to turn on logging for Windows Installer. It also provides the location of the logs.
The client agent doesn't work after I cancel the uninstall
The client agent doesn't work, even after you cancel its uninstallation.
Cause . This occurs because the SQL Data Sync client agent doesn't store credentials.
Resolution . You can try these two solutions:
Use services.msc to reenter the credentials for the client agent.
Uninstall this client agent and then install a new one. Download and install the latest client agent from
Download Center.
My database isn't listed in the agent list
When you attempt to add an existing SQL Server database to a sync group, the database doesn't appear in the
list of agents.
These scenarios might cause this issue:
Cause . The client agent and sync group are in different datacenters.
Resolution . The client agent and the sync group must be in the same datacenter. To set this up, you have
two options:
Create a new agent in the datacenter where the sync group is located. Then, register the database with
that agent.
Delete the current sync group. Then, re-create the sync group in the datacenter where the agent is
located.
Cause . The client agent's list of databases isn't current.
Resolution . Stop and then restart the client agent service.
The local agent downloads the list of associated databases only on the first submission of the agent key. It
doesn't download the list of associated databases on subsequent agent key submissions. Databases that
are registered during an agent move don't show up in the original agent instance.
Client agent doesn't start (Error 1069)
You discover that the agent isn't running on a computer that hosts SQL Server. When you attempt to manually
start the agent, you see a dialog box that displays the message, "Error 1069: The service did not start due to a
logon failure."

Cause . A likely cause of this error is that the password on the local server has changed since you created
the agent and agent password.
Resolution . Update the agent's password to your current server password:
1. Locate the SQL Data Sync client agent service.
a. Select Start.
b. In the search box, enter services.msc.
c. In the search results, select Services.
d. In the Services window, scroll to the entry for SQL Data Sync Agent.
2. Right-click SQL Data Sync Agent, and then select Stop.
3. Right-click SQL Data Sync Agent, and then select Properties.
4. On SQL Data Sync Agent Properties, select the Log On tab.
5. In the Password box, enter your password.
6. In the Confirm Password box, reenter your password.
7. Select Apply, and then select OK.
8. In the Services window, right-click the SQL Data Sync Agent service, and then click Start.
9. Close the Services window.
I can't submit the agent key
After you create or re-create a key for an agent, you try to submit the key through the SqlAzureDataSyncAgent
application. The submission fails to complete.

Prerequisites . Before you proceed, check the following prerequisites:


The SQL Data Sync Windows service is running.
The service account for SQL Data Sync Windows service has network access.
Outbound port 1433 is open in your local firewall rule.
The local IP is added to the server or database firewall rule for the sync metadata database.
Cause . The agent key uniquely identifies each local agent. The key must meet two conditions:
The client agent key on the SQL Data Sync server and the local computer must be identical.
The client agent key can be used only once.
Resolution . If your agent isn't working, it's because one or both of these conditions are not met. To get
your agent to work again:
1. Generate a new key.
2. Apply the new key to the agent.
To apply the new key to the agent:
1. In File Explorer, go to your agent installation directory. The default installation directory is C:\Program
Files (x86)\Microsoft SQL Data Sync.
2. Double-click the bin subdirectory.
3. Open the SqlAzureDataSyncAgent application.
4. Select Submit Agent Key .
5. In the space provided, paste the key from your clipboard.
6. Select OK .
7. Close the program.
The client agent can't be deleted from the portal if its associated on-premises database is unreachable
If a local endpoint (that is, a database) that is registered with a SQL Data Sync client agent becomes unreachable,
the client agent can't be deleted.
Cause . The local agent can't be deleted because the unreachable database is still registered with the
agent. When you try to delete the agent, the deletion process tries to reach the database, which fails.
Resolution . Use "force delete" to delete the unreachable database.

NOTE
If sync metadata tables remain after a "force delete", use deprovisioningutil.exe to clean them up.

Local Sync Agent app can't connect to the local sync service
Resolution . Try the following steps:
1. Exit the app.
2. Open the Component Services Panel.
a. In the search box on the taskbar, enter services.msc.
b. In the search results, double-click Services.
3. Stop the SQL Data Sync service.
4. Restart the SQL Data Sync service.
5. Reopen the app.

Run the Data Sync Agent from the command prompt


You can run the following Data Sync Agent commands from the command prompt:
Ping the service
Usage

SqlDataSyncAgentCommand.exe -action pingsyncservice

Example

SqlDataSyncAgentCommand.exe -action "pingsyncservice"

Display registered databases


Usage

SqlDataSyncAgentCommand.exe -action displayregistereddatabases

Example

SqlDataSyncAgentCommand.exe -action "displayregistereddatabases"

Submit the agent key


Usage
Usage: SqlDataSyncAgentCommand.exe -action submitagentkey -agentkey [agent key] -username [user name] -password [password]

Example

SqlDataSyncAgentCommand.exe -action submitagentkey -agentkey [agent key generated from portal, PowerShell, or API] -username [user name to sync metadata database] -password [password to sync metadata database]

Register a database
Usage

SqlDataSyncAgentCommand.exe -action registerdatabase -servername [on-premises database server name] -databasename [on-premises database name] -username [domain\\username] -password [password] -authentication [sql or windows] -encryption [true or false]

Examples

SqlDataSyncAgentCommand.exe -action "registerdatabase" -serverName localhost -databaseName testdb -


authentication sql -username <user name> -password <password> -encryption true

SqlDataSyncAgentCommand.exe -action "registerdatabase" -serverName localhost -databaseName testdb -


authentication windows -encryption true

Unregister a database
When you use this command to unregister a database, it deprovisions the database completely. If the database
participates in other sync groups, this operation breaks the other sync groups.
Usage

SqlDataSyncAgentCommand.exe -action unregisterdatabase -servername [on-premises database server name] -databasename [on-premises database name]

Example

SqlDataSyncAgentCommand.exe -action "unregisterdatabase" -serverName localhost -databaseName testdb

Update credentials
Usage

SqlDataSyncAgentCommand.exe -action updatecredential -servername [on-premises database server name] -databasename [on-premises database name] -username [domain\\username] -password [password] -authentication [sql or windows] -encryption [true or false]

Examples

SqlDataSyncAgentCommand.exe -action "updatecredential" -serverName localhost -databaseName testdb -


authentication sql -username <user name> -password <password> -encryption true

SqlDataSyncAgentCommand.exe -action "updatecredential" -serverName localhost -databaseName testdb -


authentication windows -encryption true

Next steps
For more info about SQL Data Sync, see the following articles:
Overview - Sync data across multiple cloud and on-premises databases with SQL Data Sync in Azure
Set up Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a database in a SQL
Server instance
Best practices - Best practices for Azure SQL Data Sync
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot - Troubleshoot issues with Azure SQL Data Sync
Update the sync schema
With Transact-SQL - Automate replication of schema changes with SQL Data Sync in Azure
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
Best practices for Azure SQL Data Sync
7/12/2022 • 9 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article describes best practices for Azure SQL Data Sync.
For an overview of SQL Data Sync, see Sync data across multiple cloud and on-premises databases with Azure
SQL Data Sync.

IMPORTANT
Azure SQL Data Sync does not support Azure SQL Managed Instance at this time.

Security and reliability


Client agent
Install the client agent by using the least privileged user account that has network service access.
Install the client agent on a computer that isn't the SQL Server computer.
Don't register an on-premises database with more than one agent.
Avoid this even if you are syncing different tables for different sync groups.
Registering an on-premises database with multiple client agents poses challenges when you delete
one of the sync groups.
Database accounts with least required privileges
For sync setup. Create/Alter Table; Alter Database; Create Procedure; Select/Alter Schema; Create User-Defined Type.
For ongoing sync. Select/Insert/Update/Delete on tables that are selected for syncing, and on sync metadata and tracking tables; Execute permission on stored procedures created by the service; Execute permission on user-defined table types.
For deprovisioning. Alter on tables part of sync; Select/Delete on sync metadata tables; Control on sync tracking tables, stored procedures, and user-defined types.
Azure SQL Database supports only a single set of credentials. To accomplish these tasks within this constraint,
consider the following options:
Change the credentials for different phases (for example, credentials1 for setup and credentials2 for
ongoing).
Change the permission of the credentials (that is, change the permission after sync is set up).
Auditing
It is recommended to enable auditing at the level of the databases in the sync groups. Learn how to enable
auditing on your Azure SQL database or enable auditing on your SQL Server database.

Setup
Database considerations and constraints
Database size
When you create a new database, set the maximum size so that it's always larger than the database you deploy.
If you don't set the maximum size to larger than the deployed database, sync fails. Although SQL Data Sync
doesn't offer automatic growth, you can run the ALTER DATABASE command to increase the size of the database
after it has been created. Ensure that you stay within the database size limits.
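As a sketch (hypothetical names, run through the SqlServer PowerShell module against the logical server; pick a MAXSIZE value supported by your service tier), increasing the maximum size might look like this:

# Sketch only: increase the maximum size of an existing database (hypothetical names).
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "master" -Username "serveradmin" -Password "<admin password>" `
    -Query "ALTER DATABASE [mydb] MODIFY (MAXSIZE = 250 GB);"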

IMPORTANT
SQL Data Sync stores additional metadata with each database. Ensure that you account for this metadata when you
calculate space needed. The amount of added overhead is related to the width of the tables (for example, narrow tables
require more overhead) and the amount of traffic.

Table considerations and constraints


Selecting tables
You don't have to include all the tables that are in a database in a sync group. The tables that you include in a
sync group affect efficiency and costs. Include tables, and the tables they are dependent on, in a sync group only
if business needs require it.
Primary keys
Each table in a sync group must have a primary key. SQL Data Sync can't sync a table that doesn't have a
primary key.
Before using SQL Data Sync in production, test initial and ongoing sync performance.
Empty tables provide the best performance
Empty tables provide the best performance at initialization time. If the target table is empty, Data Sync uses bulk
insert to load the data. Otherwise, Data Sync does a row-by-row comparison and insertion to check for conflicts.
If performance is not a concern, however, you can set up sync between tables that already contain data.
Provisioning destination databases
SQL Data Sync provides basic database autoprovisioning.
This section discusses the limitations of provisioning in SQL Data Sync.
Autoprovisioning limitations
SQL Data Sync has the following limitations for autoprovisioning:
Select only the columns that are created in the destination table. Any columns that aren't part of the sync
group aren't provisioned in the destination tables.
Indexes are created only for selected columns. If the source table index has columns that aren't part of the
sync group, those indexes aren't provisioned in the destination tables.
Indexes on XML type columns aren't provisioned.
CHECK constraints aren't provisioned.
Existing triggers on the source tables aren't provisioned.
Views and stored procedures aren't created on the destination database.
ON UPDATE CASCADE and ON DELETE CASCADE actions on foreign key constraints aren't recreated in the
destination tables.
If you have decimal or numeric columns with a precision greater than 28, SQL Data Sync may encounter a
conversion overflow issue during sync. We recommend that you limit the precision of decimal or numeric
columns to 28 or less.
Recommendations
Use the SQL Data Sync autoprovisioning capability only when you are trying out the service.
For production, provision the database schema.
Where to locate the hub database
Enterprise-to-cloud scenario
To minimize latency, keep the hub database close to the greatest concentration of the sync group's database
traffic.
Cloud-to-cloud scenario
When all the databases in a sync group are in one datacenter, the hub should be located in the same
datacenter. This configuration reduces latency and the cost of data transfer between datacenters.
When the databases in a sync group are in multiple datacenters, the hub should be located in the same
datacenter as the majority of the databases and database traffic.
Mixed scenarios
Apply the preceding guidelines to complex sync group configurations, such as those that are a mix of enterprise-
to-cloud and cloud-to-cloud scenarios.

Sync
Avoid slow and costly initial sync
In this section, we discuss the initial sync of a sync group. Learn how to help prevent an initial sync from taking
longer and being more costly than necessary.
How initial sync works
When you create a sync group, start with data in only one database. If you have data in multiple databases, SQL
Data Sync treats each row as a conflict that needs to be resolved. This conflict resolution causes the initial sync
to go slowly. If you have data in multiple databases, initial sync might take between several days and several
months, depending on the database size.
If the databases are in different datacenters, each row must travel between the different datacenters. This
increases the cost of an initial sync.
Recommendation
If possible, start with data in only one of the sync group's databases.
Design to avoid sync loops
A sync loop occurs when there are circular references within a sync group. In that scenario, each change in one
database is endlessly and circularly replicated through the databases in the sync group.
Ensure that you avoid sync loops, because they cause performance degradation and might significantly increase
costs.
Changes that fail to propagate
Reasons that changes fail to propagate
Changes might fail to propagate for one of the following reasons:
Schema/datatype incompatibility.
Inserting null in non-nullable columns.
Violating foreign key constraints.
What happens when changes fail to propagate?
Sync group shows that it's in a Warning state.
Details are listed in the portal UI log viewer.
If the issue is not resolved for 45 days, the database becomes out of date.

NOTE
These changes never propagate. The only way to recover in this scenario is to re-create the sync group.
Recommendation
Monitor the sync group and database health regularly through the portal and log interface.

Maintenance
Avoid out-of-date databases and sync groups
A sync group or a database in a sync group can become out of date. When a sync group's status is Out-of-date ,
it stops functioning. When a database's status is Out-of-date , data might be lost. It's best to avoid this scenario
instead of trying to recover from it.
Avoid out-of-date databases
A database's status is set to Out-of-date when it has been offline for 45 days or more. To avoid an Out-of-date
status on a database, ensure that none of the databases are offline for 45 days or more.
Avoid out-of-date sync groups
A sync group's status is set to Out-of-date when any change in the sync group fails to propagate to the rest of
the sync group for 45 days or more. To avoid an Out-of-date status on a sync group, regularly check the sync
group's history log. Ensure that all conflicts are resolved, and that changes are successfully propagated
throughout the sync group databases.
A sync group might fail to apply a change for one of these reasons:
Schema incompatibility between tables.
Data incompatibility between tables.
Inserting a row with a null value in a column that doesn't allow null values.
Updating a row with a value that violates a foreign key constraint.
To prevent out-of-date sync groups:
Update the schema to allow the values that are contained in the failed rows.
Update the foreign key values to include the values that are contained in the failed rows.
Update the data values in the failed row so they are compatible with the schema or foreign keys in the target
database.
Avoid deprovisioning issues
In some circumstances, unregistering a database with a client agent might cause sync to fail.
Scenario
1. Sync group A was created by using a SQL Database instance and a SQL Server database, which is associated
with local agent 1.
2. The same on-premises database is registered with local agent 2 (this agent is not associated with any sync
group).
3. Unregistering the on-premises database from local agent 2 removes the tracking and meta tables for sync
group A for the on-premises database.
4. Sync group A operations fail, with this error: "The current operation could not be completed because the
database is not provisioned for sync or you do not have permissions to the sync configuration tables."
Solution
To avoid this scenario, don't register a database with more than one agent.
To recover from this scenario:
1. Remove the database from each sync group that it belongs to.
2. Add the database back into each sync group that you removed it from.
3. Deploy each affected sync group (this action provisions the database).
Modifying a sync group
Don't attempt to remove a database from a sync group and then edit the sync group without first deploying one
of the changes.
Instead, first remove a database from a sync group. Then, deploy the change and wait for deprovisioning to
finish. When deprovisioning is finished, you can edit the sync group and deploy the changes.
If you attempt to remove a database and then edit a sync group without first deploying one of the changes, one
or the other operation fails. The portal interface might become inconsistent. If this happens, refresh the page to
restore the correct state.
Avoid schema refresh timeout
If you have a complex schema to sync, you may encounter an "operation timeout" during a schema refresh if the
sync metadata database has a lower SKU (example: basic).
Solution
To mitigate this issue, scale up your sync metadata database to a higher SKU, such as S3.
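A PowerShell sketch of the scale-up (Az.Sql module; resource names are hypothetical):

# Sketch only: scale the sync metadata database to a higher SKU (hypothetical names).
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "SyncMetadataDb" -RequestedServiceObjectiveName "S3"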

Next steps
For more information about SQL Data Sync, see:
Overview - Sync data across multiple cloud and on-premises databases with Azure SQL Data Sync
Set up SQL Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in SQL Database and a database in a SQL Server
instance
Data Sync Agent - Data Sync Agent for Azure SQL Data Sync
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot - Troubleshoot issues with Azure SQL Data Sync
Update the sync schema
With Transact-SQL - Automate the replication of schema changes in Azure SQL Data Sync
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
For more information about SQL Database, see:
SQL Database overview
Database lifecycle management
Troubleshoot issues with SQL Data Sync
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article describes how to troubleshoot known issues with SQL Data Sync in Azure. If there is a resolution for
an issue, it's provided here.
For an overview of SQL Data Sync, see Sync data across multiple cloud and on-premises databases with SQL
Data Sync in Azure.

IMPORTANT
SQL Data Sync does not support Azure SQL Managed Instance at this time.

Sync issues
Sync fails in the portal UI for on-premises databases that are associated with the client agent
My sync group is stuck in the processing state
I see erroneous data in my tables
I see inconsistent primary key data after a successful sync
I see a significant degradation in performance
I see this message: "Cannot insert the value NULL into the column <column>. Column does not allow
nulls." What does this mean, and how can I fix it?
How does Data Sync handle circular references? That is, when the same data is synced in multiple sync
groups, and keeps changing as a result?
Sync fails in the portal UI for on-premises databases that are associated with the client agent
Sync fails in the SQL Data Sync portal UI for on-premises databases that are associated with the client agent. On
the local computer that's running the agent, you see System.IO.IOException errors in the Event Log. The errors
say that the disk has insufficient space.
Cause . The drive has insufficient space.
Resolution . Create more space on the drive on which the %TEMP% directory is located.
My sync group is stuck in the processing state
A sync group in SQL Data Sync has been in the processing state for a long time. It doesn't respond to the stop
command, and the logs show no new entries.
Any of the following conditions might result in a sync group being stuck in the processing state:
Cause . The client agent is offline
Resolution . Be sure that the client agent is online and then try again.
Cause . The client agent is uninstalled or missing.
Resolution . If the client agent is uninstalled or otherwise missing:
1. Remove the agent XML file from the SQL Data Sync installation folder, if the file exists.
2. Install the agent on an on-premises computer (it can be the same or a different computer). Then,
submit the agent key that's generated in the portal for the agent that's showing as offline.
Cause . The SQL Data Sync service is stopped.
Resolution . Restart the SQL Data Sync service.
1. In the Start menu, search for Services.
2. In the search results, select Services.
3. Find the SQL Data Sync service.
4. If the service status is Stopped, right-click the service name, and then select Start.

NOTE
If the preceding information doesn't move your sync group out of the processing state, Microsoft Support can reset the
status of your sync group. To have your sync group status reset, in the Microsoft Q&A question page for Azure SQL
Database, create a post. In the post, include your subscription ID and the sync group ID for the group that needs to be
reset. A Microsoft Support engineer will respond to your post, and will let you know when the status has been reset.

I see erroneous data in my tables


If tables that have the same name but which are from different database schemas are included in a sync, you see
erroneous data in the tables after the sync.
Cause . The SQL Data Sync provisioning process uses the same tracking tables for tables that have the
same name but which are in different schemas. Because of this, changes from both tables are reflected in
the same tracking table. This causes erroneous data changes during sync.
Resolution . Ensure that the names of tables that are involved in a sync are different, even if the tables
belong to different schemas in a database.
I see inconsistent primary key data after a successful sync
A sync is reported as successful, and the log shows no failed or skipped rows, but you observe that primary key
data is inconsistent among the databases in the sync group.
Cause . This result is by design. Changes in any primary key column result in inconsistent data in the
rows where the primary key was changed.
Resolution . To prevent this issue, ensure that no data in a primary key column is changed. To fix this
issue after it has occurred, delete the row that has inconsistent data from all endpoints in the sync group.
Then, reinsert the row.
I see a significant degradation in performance
Your performance degrades significantly, possibly to the point where you can't even open the Data Sync UI.
Cause . The most likely cause is a sync loop. A sync loop occurs when a sync by sync group A triggers a
sync by sync group B, which then triggers a sync by sync group A. The actual situation might be more
complex, and it might involve more than two sync groups in the loop. The issue is that there is a circular
triggering of syncing that's caused by sync groups overlapping one another.
Resolution . The best fix is prevention. Ensure that you don't have circular references in your sync groups.
Any row that is synced by one sync group can't be synced by another sync group.
I see this message: "Cannot insert the value NULL into the column <column>. Column does not allow nulls."
What does this mean, and how can I fix it?
This error message indicates that one of the two following issues has occurred:
A table doesn't have a primary key. To fix this issue, add a primary key to all the tables that you're syncing.
There's a WHERE clause in your CREATE INDEX statement. Data Sync doesn't handle this condition. To fix this
issue, remove the WHERE clause or manually make the changes to all databases.
How does Data Sync handle circular references? That is, when the same data is synced in multiple sync
groups, and keeps changing as a result?
Data Sync doesn't handle circular references. Be sure to avoid them.

Client agent issues


To troubleshoot issues with the client agent, see Troubleshoot Data Sync Agent issues.

Setup and maintenance issues


I get a "disk out of space" message
I can't delete my sync group
I can't unregister a SQL Server database
I don't have sufficient privileges to start system services
A database has an "Out-of-Date" status
A sync group has an "Out-of-Date" status
A sync group can't be deleted within three minutes of uninstalling or stopping the agent
What happens when I restore a lost or corrupted database?
I get a "disk out of space" message
Cause . The "disk out of space" message might appear if leftover files need to be deleted. This might be
caused by antivirus software, or files are open when delete operations are attempted.
Resolution . Manually delete the sync files that are in the %temp% folder ( del \*sync\* /s ). Then, delete
the subdirectories in the %temp% folder.

IMPORTANT
Don't delete any files while sync is in progress.

I can't delete my sync group


Your attempt to delete a sync group fails. Any of the following scenarios might result in failure to delete a sync
group:
Cause. The client agent is offline.
Resolution. Ensure that the client agent is online and then try again.
Cause. The client agent is uninstalled or missing.
Resolution. If the client agent is uninstalled or otherwise missing:
a. Remove the agent XML file from the SQL Data Sync installation folder, if the file exists.
b. Install the agent on an on-premises computer (it can be the same or a different computer). Then, submit the agent key that's generated in the portal for the agent that's showing as offline.
Cause. A database is offline.
Resolution. Ensure that your databases are all online.
Cause. The sync group is provisioning or syncing.
Resolution. Wait until the provisioning or sync process finishes and then retry deleting the sync group.
I can't unregister a SQL Server database
Cause. Most likely, you are trying to unregister a database that has already been deleted.
Resolution. To unregister a SQL Server database, select the database and then select Force Delete.
If this operation fails to remove the database from the sync group:
1. Stop and then restart the client agent host service:
a. Select the Start menu.
b. In the search box, enter services.msc.
c. In the Programs section of the search results pane, double-click Services.
d. Right-click the SQL Data Sync service.
e. If the service is running, stop it.
f. Right-click the service, and then select Start.
g. Check whether the database is still registered. If it is no longer registered, you're done. Otherwise,
proceed with the next step.
2. Open the client agent app (SqlAzureDataSyncAgent).
3. Select Edit Credentials , and then enter the credentials for the database.
4. Proceed with unregistration.
I don't have sufficient privileges to start system services
Cause. This error occurs in two situations:
The user name and/or the password are incorrect.
The specified user account doesn't have sufficient privileges to log on as a service.
Resolution. Grant log-on-as-a-service credentials to the user account:
1. Go to Start > Control Panel > Administrative Tools > Local Security Policy > Local Policies > User Rights Assignment.
2. Select Log on as a service.
3. In the Properties dialog box, add the user account.
4. Select Apply, and then select OK.
5. Close all windows.
A database has an "Out-of-Date" status
Cause. SQL Data Sync removes databases that have been offline from the service for 45 days or more (as counted from the time the database went offline). If a database is offline for 45 days or more and then comes back online, its status is Out-of-Date.
Resolution. You can avoid an Out-of-Date status by ensuring that none of your databases go offline for 45 days or more.
If a database's status is Out-of-Date:
1. Remove the database that has an Out-of-Date status from the sync group.
2. Add the database back in to the sync group.

WARNING
You lose all changes made to this database while it was offline.

A sync group has an "Out-of-Date" status


Cause. If one or more changes fail to apply for the whole retention period of 45 days, a sync group can become outdated.
Resolution. To avoid an Out-of-Date status for a sync group, examine the results of your sync jobs in the history viewer on a regular basis. Investigate and resolve any changes that fail to apply.
If a sync group's status is Out-of-Date, delete the sync group and then re-create it.
A sync group can't be deleted within three minutes of uninstalling or stopping the agent
You can't delete a sync group within three minutes of uninstalling or stopping the associated SQL Data Sync
client agent.
Resolution.
1. Remove a sync group while the associated sync agents are online (recommended).
2. If the agent is offline but is installed, bring it online on the on-premises computer. Wait for the status
of the agent to appear as Online in the SQL Data Sync portal. Then, remove the sync group.
3. If the agent is offline because it was uninstalled:
a. Remove the agent XML file from the SQL Data Sync installation folder, if the file exists.
b. Install the agent on an on-premises computer (it can be the same or a different computer). Then,
submit the agent key that's generated in the portal for the agent that's showing as offline.
c. Try to delete the sync group.
What happens when I restore a lost or corrupted database?
If you restore a lost or corrupted database from a backup, the data in the sync groups to which the database belongs might no longer converge.

Next steps
For more information about SQL Data Sync, see:
Overview - Sync data across multiple cloud and on-premises databases with SQL Data Sync in Azure
Set up Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a database in a SQL
Server instance
Data Sync Agent - Data Sync Agent for SQL Data Sync in Azure
Best practices - Best practices for SQL Data Sync in Azure
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Update the sync schema
With Transact-SQL - Automate the replication of schema changes in SQL Data Sync in Azure
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
For more information about SQL Database, see:
SQL Database Overview
Database Lifecycle Management
Scaling out with Azure SQL Database

APPLIES TO: Azure SQL Database


You can easily scale out databases in Azure SQL Database using the Elastic Database tools. These tools and
features let you use the database resources of Azure SQL Database to create solutions for transactional
workloads, and especially Software as a Service (SaaS) applications. Elastic Database features include:
Elastic Database client library: The client library is a feature that allows you to create and maintain sharded
databases. See Get started with Elastic Database tools.
Elastic Database split-merge tool: Moves data between sharded databases. This tool is useful for moving data
from a multi-tenant database to a single-tenant database (or vice-versa). See Elastic database Split-Merge
tool tutorial.
Elastic Database jobs (preview): Use jobs to manage large numbers of databases in Azure SQL Database.
Easily perform administrative operations such as schema changes, credentials management, reference data
updates, performance data collection, or tenant (customer) telemetry collection using jobs.
Elastic Database query (preview): Enables you to run a Transact-SQL query that spans multiple databases.
This enables connection to reporting tools such as Excel, Power BI, Tableau, etc.
Elastic transactions: This feature allows you to run transactions that span several databases. Elastic database transactions are available for .NET applications using ADO.NET and integrate with the familiar programming experience using the System.Transactions classes.
The following graphic shows an architecture that includes the Elastic Database features in relation to a
collection of databases.
In this graphic, the colors of the databases represent schemas. Databases with the same color share the same schema.
1. A set of SQL databases is hosted on Azure using sharding architecture.
2. The Elastic Database client library is used to manage a shard set.
3. A subset of the databases is put into an elastic pool. (See What is a pool?).
4. An Elastic Database job runs scheduled or ad hoc T-SQL scripts against all databases.
5. The split-merge tool is used to move data from one shard to another.
6. The Elastic Database query allows you to write a query that spans all databases in the shard set.
7. Elastic transactions allow you to run transactions that span several databases.
Why use the tools?
Achieving elasticity and scale for cloud applications has been straightforward for VMs and blob storage - simply
add or subtract units, or increase power. But it has remained a challenge for stateful data processing in relational
databases. Challenges emerged in these scenarios:
Growing and shrinking capacity for the relational database part of your workload.
Managing hotspots that may arise affecting a specific subset of data - such as a busy end-customer (tenant).
Traditionally, scenarios like these have been addressed by investing in larger-scale servers to support the
application. However, this option is limited in the cloud where all processing happens on predefined commodity
hardware. Instead, distributing data and processing across many identically structured databases (a scale-out
pattern known as "sharding") provides an alternative to traditional scale-up approaches both in terms of cost
and elasticity.

Horizontal and vertical scaling


The following figure shows the horizontal and vertical dimensions of scaling, which are the basic ways the
elastic databases can be scaled.
Horizontal scaling refers to adding or removing databases in order to adjust capacity or overall performance,
also called "scaling out". Sharding, in which data is partitioned across a collection of identically structured
databases, is a common way to implement horizontal scaling.
Vertical scaling refers to increasing or decreasing the compute size of an individual database, also known as
"scaling up."
Most cloud-scale database applications use a combination of these two strategies. For example, a Software as a
Service application may use horizontal scaling to provision new end-customers and vertical scaling to allow
each end-customer's database to grow or shrink resources as needed by the workload.
Horizontal scaling is managed using the Elastic Database client library.
Vertical scaling is accomplished using Azure PowerShell cmdlets to change the service tier, or by placing
databases in an elastic pool.
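As a rough sketch, vertical scaling can also be done directly in T-SQL with ALTER DATABASE; the database name and service objective below are placeholders, so substitute values appropriate to your environment.

-- Hypothetical example: scale a single database up to the Standard S3 service objective.
ALTER DATABASE [MyShardDb01]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');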

Sharding
Sharding is a technique to distribute large amounts of identically structured data across a number of
independent databases. It is especially popular with cloud developers creating Software as a Service (SaaS)
offerings for end customers or businesses. These end customers are often referred to as "tenants". Sharding
may be required for any number of reasons:
The total amount of data is too large to fit within the constraints of an individual database
The transaction throughput of the overall workload exceeds the capabilities of an individual database
Tenants may require physical isolation from each other, so separate databases are needed for each tenant
Different sections of a database may need to reside in different geographies for compliance, performance, or
geopolitical reasons.
In other scenarios, such as ingestion of data from distributed devices, sharding can be used to fill a set of
databases that are organized temporally. For example, a separate database can be dedicated to each day or
week. In that case, the sharding key can be an integer representing the date (present in all rows of the sharded
tables) and queries retrieving information for a date range must be routed by the application to the subset of
databases covering the range in question.
Sharding works best when every transaction in an application can be restricted to a single value of a sharding
key. That ensures that all transactions are local to a specific database.
Multi-tenant and single-tenant
Some applications use the simplest approach of creating a separate database for each tenant. This approach is
the single tenant sharding pattern that provides isolation, backup/restore ability, and resource scaling at the
granularity of the tenant. With single tenant sharding, each database is associated with a specific tenant ID value
(or customer key value), but that key need not always be present in the data itself. It is the application's
responsibility to route each request to the appropriate database - and the client library can simplify this.

Other scenarios pack multiple tenants together into databases, rather than isolating them into separate
databases. This pattern is a typical multi-tenant sharding pattern - and it may be driven by the fact that an
application manages large numbers of small tenants. In multi-tenant sharding, the rows in the database tables
are all designed to carry a key identifying the tenant ID or sharding key. Again, the application tier is responsible
for routing a tenant's request to the appropriate database, and this can be supported by the elastic database
client library. In addition, row-level security can be used to filter which rows each tenant can access - for details,
see Multi-tenant applications with elastic database tools and row-level security. Redistributing data among
databases may be needed with the multi-tenant sharding pattern, and is facilitated by the elastic database split-
merge tool. To learn more about design patterns for SaaS applications using elastic pools, see Design Patterns
for Multi-tenant SaaS Applications with Azure SQL Database.
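As a minimal sketch of the row-level security approach mentioned above, the following T-SQL filters rows by a hypothetical TenantId column using a value that the application places in SESSION_CONTEXT after opening a connection; all object names are placeholders rather than part of the elastic database tools themselves.

-- Hypothetical example: restrict each tenant to its own rows in a multi-tenant shard.
CREATE SCHEMA rls;
GO

CREATE FUNCTION rls.fn_tenantAccessPredicate(@TenantId int)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_accessResult
           WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
GO

CREATE SECURITY POLICY rls.tenantAccessPolicy
    ADD FILTER PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Orders,
    ADD BLOCK PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Orders;
GO

-- The application sets the tenant for the connection before querying, for example:
-- EXEC sp_set_session_context @key = N'TenantId', @value = 42;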
Move data from multiple to single-tenancy databases
When creating a SaaS application, it is typical to offer prospective customers a trial version of the software. In
this case, it is cost-effective to use a multi-tenant database for the data. However, when a prospect becomes a
customer, a single-tenant database is preferable because it provides better performance. If the customer had created
data during the trial period, use the split-merge tool to move the data from the multi-tenant to the new single-
tenant database.

Next steps
For a sample app that demonstrates the client library, see Get started with Elastic Database tools.
To convert existing databases to use the tools, see Migrate existing databases to scale out.
To see the specifics of the elastic pool, see Price and performance considerations for an elastic pool, or create a
new pool with elastic pools.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Distributed transactions across cloud databases

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Elastic database transactions for Azure SQL Database and Azure SQL Managed Instance allow you to run
transactions that span several databases. Elastic database transactions are available for .NET applications using
ADO.NET and integrate with the familiar programming experience using the System.Transactions classes. To get the library, see .NET Framework 4.6.1 (Web Installer). Additionally, for managed instances, distributed transactions are available in Transact-SQL.
On premises, such a scenario usually requires running Microsoft Distributed Transaction Coordinator (MSDTC).
Since MSDTC isn't available for Platform-as-a-Service applications in Azure, the ability to coordinate distributed
transactions has now been directly integrated into SQL Database or SQL Managed Instance. Applications can
connect to any database to launch distributed transactions, and one of the databases or servers will
transparently coordinate the distributed transaction, as shown in the following figure.
In this document, the terms "distributed transactions" and "elastic database transactions" are considered synonyms and are used interchangeably.

Common scenarios
Elastic database transactions enable applications to make atomic changes to data stored in several different
databases. Both SQL Database and SQL Managed Instance support client-side development experiences in C#
and .NET. A server-side experience (code written in stored procedures or server-side scripts) using Transact-SQL
is available for SQL Managed Instance only.
IMPORTANT
Running elastic database transactions between Azure SQL Database and Azure SQL Managed Instance is not supported.
An elastic database transaction can only span a set of databases in SQL Database or a set of databases across managed instances.

Elastic database transactions target the following scenarios:


Multi-database applications in Azure: With this scenario, data is vertically partitioned across several
databases in SQL Database or SQL Managed Instance such that different kinds of data reside on different
databases. Some operations require changes to data, which is kept in two or more databases. The application
uses elastic database transactions to coordinate the changes across databases and ensure atomicity.
Sharded database applications in Azure: With this scenario, the data tier uses the Elastic Database client
library or self-sharding to horizontally partition the data across many databases in SQL Database or SQL
Managed Instance. One prominent use case is the need to perform atomic changes for a sharded multi-
tenant application when changes span tenants. Think for instance of a transfer from one tenant to another,
both residing on different databases. A second case is fine-grained sharding to accommodate capacity needs
for a large tenant, which in turn typically implies that some atomic operations need to stretch across several
databases used for the same tenant. A third case is atomic updates to reference data that are replicated
across databases. Atomic, transacted, operations along these lines can now be coordinated across several
databases. Elastic database transactions use two phase commit to ensure transaction atomicity across
databases. It's a good fit for transactions that involve fewer than 100 databases at a time within a single
transaction. These limits aren't enforced, but one should expect performance and success rates for elastic
database transactions to suffer when exceeding these limits.

Installation and migration


The capabilities for elastic database transactions are provided through updates to the .NET libraries
System.Data.dll and System.Transactions.dll. The DLLs ensure that two-phase commit is used where necessary to
ensure atomicity. To start developing applications using elastic database transactions, install .NET Framework
4.6.1 or a later version. When running on an earlier version of the .NET framework, transactions will fail to
promote to a distributed transaction and an exception will be raised.
After installation, you can use the distributed transaction APIs in System.Transactions with connections to SQL
Database and SQL Managed Instance. If you have existing MSDTC applications using these APIs, rebuild your
existing applications for .NET 4.6 after installing the 4.6.1 Framework. If your projects target .NET 4.6, they'll
automatically use the updated DLLs from the new Framework version and distributed transaction API calls in
combination with connections to SQL Database or SQL Managed Instance will now succeed.
Remember that elastic database transactions don't require installing MSDTC. Instead, elastic database
transactions are directly managed by and within the service. This significantly simplifies cloud scenarios since a
deployment of MSDTC isn't necessary to use distributed transactions with SQL Database or SQL Managed
Instance. The following sections explain in more detail how to deploy elastic database transactions and the required .NET Framework together with your cloud applications to Azure.

.NET installation for Azure Cloud Services


Azure provides several offerings to host .NET applications. A comparison of the different offerings is available in
Azure App Service, Cloud Services, and Virtual Machines comparison. If the guest OS of the offering ships with a .NET version older than 4.6.1, which elastic transactions require, you need to upgrade the guest OS to 4.6.1.
For Azure App Service, upgrades to the guest OS are currently not supported. For Azure Virtual Machines,
simply log into the VM and run the installer for the latest .NET framework. For Azure Cloud Services, you need
to include the installation of a newer .NET version into the startup tasks of your deployment. The concepts and
steps are documented in Install .NET on a Cloud Service Role.
Note that the installer for .NET 4.6.1 may require more temporary storage during the bootstrapping process on
Azure cloud services than the installer for .NET 4.6. To ensure a successful installation, you need to increase
temporary storage for your Azure cloud service in your ServiceDefinition.csdef file in the LocalResources section
and the environment settings of your startup task, as shown in the following sample:

<LocalResources>
...
<LocalStorage name="TEMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
<LocalStorage name="TMP" sizeInMB="5000" cleanOnRoleRecycle="false" />
</LocalResources>
<Startup>
<Task commandLine="install.cmd" executionContext="elevated" taskType="simple">
<Environment>
...
<Variable name="TEMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TEMP']/@path" />
</Variable>
<Variable name="TMP">
<RoleInstanceValue
xpath="/RoleEnvironment/CurrentInstance/LocalResources/LocalResource[@name='TMP']/@path" />
</Variable>
</Environment>
</Task>
</Startup>

.NET development experience


Multi-database applications
The following sample code uses the familiar programming experience with .NET System.Transactions. The
TransactionScope class establishes an ambient transaction in .NET. (An "ambient transaction" is one that lives in
the current thread.) All connections opened within the TransactionScope participate in the transaction. If
different databases participate, the transaction is automatically elevated to a distributed transaction. The
outcome of the transaction is controlled by setting the scope to complete to indicate a commit.

using (var scope = new TransactionScope())
{
using (var conn1 = new SqlConnection(connStrDb1))
{
conn1.Open();
SqlCommand cmd1 = conn1.CreateCommand();
cmd1.CommandText = string.Format("insert into T1 values(1)");
cmd1.ExecuteNonQuery();
}
using (var conn2 = new SqlConnection(connStrDb2))
{
conn2.Open();
var cmd2 = conn2.CreateCommand();
cmd2.CommandText = string.Format("insert into T2 values(2)");
cmd2.ExecuteNonQuery();
}
scope.Complete();
}

Sharded database applications


Elastic database transactions for SQL Database and SQL Managed Instance also support coordinating
distributed transactions where you use the OpenConnectionForKey method of the elastic database client library
to open connections for a scaled out data tier. Consider cases where you need to guarantee transactional
consistency for changes across several different sharding key values. Connections to the shards hosting the
different sharding key values are brokered using OpenConnectionForKey. In the general case, the connections
can be to different shards such that ensuring transactional guarantees requires a distributed transaction. The
following code sample illustrates this approach. It assumes that a variable called shardmap is used to represent
a shard map from the elastic database client library:

using (var scope = new TransactionScope())
{
using (var conn1 = shardmap.OpenConnectionForKey(tenantId1, credentialsStr))
{
SqlCommand cmd1 = conn1.CreateCommand();
cmd1.CommandText = string.Format("insert into T1 values(1)");
cmd1.ExecuteNonQuery();
}
using (var conn2 = shardmap.OpenConnectionForKey(tenantId2, credentialsStr))
{
var cmd2 = conn2.CreateCommand();
cmd2.CommandText = string.Format("insert into T1 values(2)");
cmd2.ExecuteNonQuery();
}
scope.Complete();
}

Transact-SQL development experience


Server-side distributed transactions using Transact-SQL are available only for Azure SQL Managed Instance. Distributed transactions can be executed only between managed instances that belong to the same server trust group. In this scenario, the managed instances need to use a linked server to reference each other.
The following sample Transact-SQL code uses BEGIN DISTRIBUTED TRANSACTION to start a distributed transaction.
-- Configure the Linked Server
-- Add one Azure SQL Managed Instance as Linked Server
EXEC sp_addlinkedserver
@server='RemoteServer', -- Linked server name
@srvproduct='',
@provider='sqlncli', -- SQL Server Native Client
@datasrc='managed-instance-server.46e7afd5bc81.database.windows.net' -- SQL Managed Instance endpoint

-- Add credentials and options to this Linked Server


EXEC sp_addlinkedsrvlogin
@rmtsrvname = 'RemoteServer', -- Linked server name
@useself = 'false',
@rmtuser = '<login_name>', -- login
@rmtpassword = '<secure_password>' -- password

USE AdventureWorks2012;
GO
SET XACT_ABORT ON;
GO
BEGIN DISTRIBUTED TRANSACTION;
-- Delete candidate from local instance.
DELETE AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
-- Delete candidate from remote instance.
DELETE RemoteServer.AdventureWorks2012.HumanResources.JobCandidate
WHERE JobCandidateID = 13;
COMMIT TRANSACTION;
GO

Combining .NET and Transact-SQL development experience


.NET applications that use the System.Transactions classes can combine the TransactionScope class with the Transact-SQL statement BEGIN DISTRIBUTED TRANSACTION. Within a TransactionScope, an inner transaction that executes BEGIN DISTRIBUTED TRANSACTION is explicitly promoted to a distributed transaction. Also, when a second SqlConnection is opened within the TransactionScope, it is implicitly promoted to a distributed transaction. Once the distributed transaction is started, all subsequent transaction requests, whether they come from .NET or Transact-SQL, join the parent distributed transaction. As a consequence, all nested transaction scopes initiated by the BEGIN statement end up in the same transaction, and COMMIT/ROLLBACK statements have the following effect on the overall outcome:
A COMMIT statement has no effect on a transaction scope initiated by a BEGIN statement; that is, no results are committed before the Complete() method is invoked on the TransactionScope object. If the TransactionScope object is destroyed before being completed, all changes done within the scope are rolled back.
A ROLLBACK statement causes the entire TransactionScope to roll back. Any attempt to enlist new transactions within the TransactionScope fails afterwards, as does any attempt to invoke Complete() on the TransactionScope object.
Here is an example where a transaction is explicitly promoted to a distributed transaction with Transact-SQL.
using (TransactionScope s = new TransactionScope())
{
    using (SqlConnection conn = new SqlConnection(DB0_ConnectionString))
    {
        conn.Open();

        // The transaction is promoted to a distributed transaction here by the BEGIN statement.
        Helper.ExecuteNonQueryOnOpenConnection(conn, "BEGIN DISTRIBUTED TRAN");
        // ...
    }

    using (SqlConnection conn2 = new SqlConnection(DB1_ConnectionString))
    {
        conn2.Open();
        // ...
    }

s.Complete();
}

The following example shows a transaction that is implicitly promoted to a distributed transaction once the second SqlConnection is opened within the TransactionScope.

using (TransactionScope s = new TransactionScope())
{
    using (SqlConnection conn = new SqlConnection(DB0_ConnectionString))
    {
        conn.Open();
        // ...
    }

    using (SqlConnection conn = new SqlConnection(DB1_ConnectionString))
    {
        // Because this is the second SqlConnection within the TransactionScope,
        // the transaction is implicitly promoted to a distributed transaction here.
        conn.Open();
        Helper.ExecuteNonQueryOnOpenConnection(conn, "BEGIN DISTRIBUTED TRAN");
        Helper.ExecuteNonQueryOnOpenConnection(conn, lsQuery);
        // ...
    }

s.Complete();
}

Transactions for SQL Database


Elastic database transactions are supported across different servers in Azure SQL Database. When transactions
cross server boundaries, the participating servers first need to be entered into a mutual communication
relationship. Once the communication relationship has been established, any database in any of the two servers
can participate in elastic transactions with databases from the other server. With transactions spanning more
than two servers, a communication relationship needs to be in place for any pair of servers.
Use the following PowerShell cmdlets to manage cross-server communication relationships for elastic database
transactions:
New-AzSqlServerCommunicationLink: Use this cmdlet to create a new communication relationship between two servers in Azure SQL Database. The relationship is symmetric, which means both servers can initiate transactions with the other server.
Get-AzSqlServerCommunicationLink: Use this cmdlet to retrieve existing communication relationships and their properties.
Remove-AzSqlServerCommunicationLink: Use this cmdlet to remove an existing communication relationship.

Transactions for SQL Managed Instance


Distributed transactions are supported across databases within multiple instances. When transactions cross managed instance boundaries, the participating instances need to be in a mutual security and communication relationship. This is done by creating a server trust group by using the Azure portal, Azure PowerShell, or the Azure CLI. If the instances are not on the same virtual network, you must configure virtual network peering, and the network security group inbound and outbound rules must allow ports 5024 and 11000-12000 on all participating virtual networks.

The following diagram shows a Server Trust Group with managed instances that can execute distributed
transactions with .NET or Transact-SQL:
Monitoring transaction status
Use Dynamic Management Views (DMVs) to monitor status and progress of your ongoing elastic database
transactions. All DMVs related to transactions are relevant for distributed transactions in SQL Database and SQL
Managed Instance. You can find the corresponding list of DMVs here: Transaction Related Dynamic Management
Views and Functions (Transact-SQL).
These DMVs are particularly useful:
sys.dm_tran_active_transactions : Lists currently active transactions and their status. The UOW (Unit Of
Work) column can identify the different child transactions that belong to the same distributed transaction. All
transactions within the same distributed transaction carry the same UOW value. For more information, see
the DMV documentation.
sys.dm_tran_database_transactions : Provides additional information about transactions, such as
placement of the transaction in the log. For more information, see the DMV documentation.
sys.dm_tran_locks : Provides information about the locks that are currently held by ongoing transactions.
For more information, see the DMV documentation.
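As a sketch, the following query joins two of these DMVs to list the currently active transactions together with their UOW values; the exact columns you project can vary.

-- List active transactions; child transactions of the same distributed transaction
-- share the same transaction_uow value.
SELECT
    at.transaction_id,
    at.name,
    at.transaction_begin_time,
    at.transaction_uow,
    dt.database_id,
    dt.database_transaction_log_bytes_used
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_database_transactions AS dt
    ON at.transaction_id = dt.transaction_id
ORDER BY at.transaction_begin_time DESC;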

Limitations
The following limitations currently apply to elastic database transactions in SQL Database:
Only transactions across databases in SQL Database are supported. Other X/Open XA resource providers and
databases outside of SQL Database can't participate in elastic database transactions. That means that elastic
database transactions can't stretch across on-premises SQL Server and Azure SQL Database. For distributed transactions on premises, continue to use MSDTC.
Only client-coordinated transactions from a .NET application are supported. Server-side support for T-SQL
such as BEGIN DISTRIBUTED TRANSACTION is planned, but not yet available.
Transactions across WCF services aren't supported. For example, you have a WCF service method that
executes a transaction. Enclosing the call within a transaction scope will fail as a
System.ServiceModel.ProtocolException.
The following limitations currently apply to distributed transactions in SQL Managed Instance:
Only transactions across databases in managed instances are supported. Other X/Open XA resource
providers and databases outside of Azure SQL Managed Instance can't participate in distributed transactions.
That means that distributed transactions can't stretch across on-premises SQL Server and Azure SQL
Managed Instance. For distributed transactions on premises, continue to use MSDTC.
Transactions across WCF services aren't supported. For example, you have a WCF service method that
executes a transaction. Enclosing the call within a transaction scope will fail as a
System.ServiceModel.ProtocolException.
Azure SQL Managed Instance must be part of a server trust group in order to participate in distributed transactions.
Limitations of Server trust groups affect distributed transactions.
Managed Instances that participate in distributed transactions need to have connectivity over private
endpoints (using private IP address from the virtual network where they are deployed) and need to be
mutually referenced using private FQDNs. Client applications can use distributed transactions on private
endpoints. Additionally, in cases when Transact-SQL leverages linked servers referencing private endpoints,
client applications can use distributed transactions on public endpoints as well. This limitation is illustrated in the following diagram.

Next steps
For questions, reach out to us on the Microsoft Q&A question page for SQL Database.
For feature requests, add them to the SQL Database feedback forum or SQL Managed Instance forum.
Azure SQL Database elastic query overview
(preview)

APPLIES TO: Azure SQL Database


The elastic query feature (in preview) enables you to run a Transact-SQL query that spans multiple databases in
Azure SQL Database. It allows you to perform cross-database queries to access remote tables, and to connect
Microsoft and third-party tools (Excel, Power BI, Tableau, etc.) to query across data tiers with multiple databases.
Using this feature, you can scale out queries to large data tiers and visualize the results in business intelligence
(BI) reports.

Why use elastic queries


Azure SQL Database
Query across databases in Azure SQL Database completely in T-SQL. This allows for read-only querying of
remote databases and provides an option for current SQL Server customers to migrate applications using three-
and four-part names or linked server to SQL Database.
Available on all service tiers
Elastic query is supported in all service tiers of Azure SQL Database. See the Preview limitations section below for performance limitations on lower service tiers.
Push parameters to remote databases
Elastic queries can now push SQL parameters to the remote databases for execution.
Stored procedure execution
Execute remote stored procedure calls or remote functions using sp_execute_remote.
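As a hedged sketch (the external data source name, query, and parameter are placeholders), a remote call looks roughly like this:

-- Hypothetical example: run a parameterized statement against the remote database
-- behind an external data source named MyRemoteSource.
EXEC sp_execute_remote
    @data_source_name = N'MyRemoteSource',
    @stmt = N'SELECT COUNT(*) AS order_count FROM dbo.Orders WHERE CustomerId = @customerId',
    @params = N'@customerId int',
    @customerId = 42;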
Flexibility
External tables with elastic query can refer to remote tables with a different schema or table name.

Elastic query scenarios


The goal is to facilitate querying scenarios where multiple databases contribute rows into a single overall result.
The query can either be composed by the user or application directly, or indirectly through tools that are
connected to the database. This is especially useful when creating reports, using commercial BI or data
integration tools, or any application that cannot be changed. With an elastic query, you can query across several
databases using the familiar SQL Server connectivity experience in tools such as Excel, Power BI, Tableau, or
Cognos. An elastic query allows easy access to an entire collection of databases through queries issued by SQL
Server Management Studio or Visual Studio, and facilitates cross-database querying from Entity Framework or
other ORM environments. Figure 1 shows a scenario where an existing cloud application (which uses the elastic
database client library) builds on a scaled-out data tier, and an elastic query is used for cross-database
reporting.
Figure 1 Elastic query used on scaled-out data tier
Customer scenarios for elastic query are characterized by the following topologies:
Vertical partitioning - Cross-database queries (Topology 1): The data is partitioned vertically between
a number of databases in a data tier. Typically, different sets of tables reside on different databases. That
means that the schema is different on different databases. For instance, all tables for inventory are on one
database while all accounting-related tables are on a second database. Common use cases with this topology
require one to query across or to compile reports across tables in several databases.
Horizontal partitioning - Sharding (Topology 2): Data is partitioned horizontally to distribute rows across a scaled-out data tier. With this approach, the schema is identical on all participating databases. This approach is also called "sharding". Sharding can be performed and managed using (1) the elastic database tools libraries or (2) self-sharding. An elastic query is used to query or compile reports across many shards. Shards are typically databases within an elastic pool. You can think of elastic query as an efficient way to query all databases in an elastic pool at once, as long as the databases share a common schema.

NOTE
Elastic query works best for reporting scenarios where most of the processing (filtering, aggregation) can be performed
on the external source side. It is not suitable for ETL operations where a large amount of data is being transferred from remote database(s). For heavy reporting workloads or data warehousing scenarios with more complex queries, also
consider using Azure Synapse Analytics.

Vertical partitioning - cross-database queries


To begin coding, see Getting started with cross-database query (vertical partitioning).
An elastic query can be used to make data located in a database in SQL Database available to other databases in
SQL Database. This allows queries from one database to refer to tables in any other remote database in SQL
Database. The first step is to define an external data source for each remote database. The external data source is
defined in the local database from which you want to gain access to tables located on the remote database. No
changes are necessary on the remote database. For typical vertical partitioning scenarios where different
databases have different schemas, elastic queries can be used to implement common use cases such as access
to reference data and cross-database querying.

IMPORTANT
You must possess ALTER ANY EXTERNAL DATA SOURCE permission. This permission is included with the ALTER DATABASE
permission. ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying data source.

Reference data: The topology is used for reference data management. In the figure below, two tables (T1 and T2) with reference data are kept on a dedicated database. Using an elastic query, you can now access tables T1 and T2 remotely from other databases, as shown in the figure. Use topology 1 if reference tables are small or remote queries into the reference table have selective predicates.
Figure 2 Vertical partitioning - Using elastic query to query reference data

Cross-database querying: Elastic queries enable use cases that require querying across several databases in
SQL Database. Figure 3 shows four different databases: CRM, Inventory, HR, and Products. Queries performed in
one of the databases also need access to one or all the other databases. Using an elastic query, you can
configure your database for this case by running a few simple DDL statements on each of the four databases.
After this one-time configuration, access to a remote table is as simple as referring to a local table from your T-
SQL queries or from your BI tools. This approach is recommended if the remote queries do not return large
results.
Figure 3 Vertical partitioning - Using elastic query to query across various databases
The following steps configure elastic database queries for vertical partitioning scenarios that require access to a
table located on remote databases in SQL Database with the same schema:
CREATE MASTER KEY mymasterkey
CREATE DATABASE SCOPED CREDENTIAL mycredential
CREATE/DROP EXTERNAL DATA SOURCE mydatasource of type RDBMS
CREATE/DROP EXTERNAL TABLE mytable
After running the DDL statements, you can access the remote table "mytable" as though it were a local table.
Azure SQL Database automatically opens a connection to the remote database, processes your request on the
remote database, and returns the results.
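The following T-SQL is a hedged sketch of those steps on the local database; all names, secrets, and the remote server and database are placeholders, and the external table's columns must match the schema of the remote table.

-- Run on the local database that issues the cross-database queries.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>';

CREATE DATABASE SCOPED CREDENTIAL mycredential
    WITH IDENTITY = '<remote_login>', SECRET = '<remote_password>';

CREATE EXTERNAL DATA SOURCE mydatasource
    WITH (
        TYPE = RDBMS,
        LOCATION = '<remote_server>.database.windows.net',
        DATABASE_NAME = 'ReferenceData',
        CREDENTIAL = mycredential
    );

-- Column definitions are placeholders; they must match the remote table.
CREATE EXTERNAL TABLE dbo.mytable (
    id int NOT NULL,
    name nvarchar(256) NOT NULL
)
WITH (
    DATA_SOURCE = mydatasource
);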

Horizontal partitioning - sharding


Using elastic query to perform reporting tasks over a sharded, that is, horizontally partitioned, data tier requires
an elastic database shard map to represent the databases of the data tier. Typically, only a single shard map is
used in this scenario and a dedicated database with elastic query capabilities (head node) serves as the entry
point for reporting queries. Only this dedicated database needs access to the shard map. Figure 4 illustrates this
topology and its configuration with the elastic query database and shard map. For more information about the
elastic database client library and creating shard maps, see Shard map management.
Figure 4 Horizontal partitioning - Using elastic query for reporting over sharded data tiers
NOTE
The elastic query database (head node) can be a separate database, or it can be the same database that hosts the shard map. Whatever configuration you choose, make sure that the service tier and compute size of that database are high enough to handle the expected amount of login/query requests.

The following steps configure elastic database queries for horizontal partitioning scenarios that require access
to a set of tables located on (typically) several remote databases in SQL Database:
CREATE MASTER KEY mymasterkey
CREATE DATABASE SCOPED CREDENTIAL mycredential
Create a shard map representing your data tier using the elastic database client library.
CREATE/DROP EXTERNAL DATA SOURCE mydatasource of type SHARD_MAP_MANAGER
CREATE/DROP EXTERNAL TABLE mytable
Once you have performed these steps, you can access the horizontally partitioned table "mytable" as though it
were a local table. Azure SQL Database automatically opens multiple parallel connections to the remote
databases where the tables are physically stored, processes the requests on the remote databases, and returns
the results. More information on the steps required for the horizontal partitioning scenario can be found in
elastic query for horizontal partitioning.
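As a hedged sketch (names, secrets, the shard map manager server and database, and the shard map name are placeholders), the DDL on the head node might look like the following; the shard map itself must already exist, created with the elastic database client library.

-- Run on the elastic query (head node) database.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>';

CREATE DATABASE SCOPED CREDENTIAL mycredential
    WITH IDENTITY = '<shard_login>', SECRET = '<shard_password>';

CREATE EXTERNAL DATA SOURCE mydatasource
    WITH (
        TYPE = SHARD_MAP_MANAGER,
        LOCATION = '<shard_map_manager_server>.database.windows.net',
        DATABASE_NAME = 'ShardMapManagerDb',
        CREDENTIAL = mycredential,
        SHARD_MAP_NAME = 'TenantShardMap'
    );

-- Column definitions are placeholders; they must match the sharded tables.
CREATE EXTERNAL TABLE dbo.mytable (
    TenantId int NOT NULL,
    OrderId int NOT NULL,
    Amount decimal(10, 2) NOT NULL
)
WITH (
    DATA_SOURCE = mydatasource,
    DISTRIBUTION = SHARDED (TenantId)
);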
To begin coding, see Getting started with elastic query for horizontal partitioning (sharding).

IMPORTANT
Successful execution of an elastic query over a large set of databases relies heavily on the availability of each of the databases during query execution. If one of the databases is not available, the entire query will fail. If you plan to query hundreds or thousands of databases at once, make sure your client application has retry logic embedded, or consider leveraging Elastic Database Jobs (preview) and querying smaller subsets of databases, consolidating the results of each query into a single destination.

T-SQL querying
Once you have defined your external data sources and your external tables, you can use regular SQL Server
connection strings to connect to the databases where you defined your external tables. You can then run T-SQL
statements over your external tables on that connection with the limitations outlined below. You can find more
information and examples of T-SQL queries in the documentation topics for horizontal partitioning and vertical
partitioning.
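For instance, once an external table like the hypothetical dbo.mytable from the sketches above exists, a query can join it with a local table just as if it were local; the table and column names here are placeholders.

-- Illustrative only: join an external (remote or sharded) table with a local table.
SELECT TOP (10)
    c.CustomerName,
    SUM(o.Amount) AS TotalAmount
FROM dbo.mytable AS o            -- external table
JOIN dbo.Customers AS c          -- local table
    ON c.TenantId = o.TenantId
GROUP BY c.CustomerName
ORDER BY TotalAmount DESC;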
Connectivity for tools
You can use regular SQL Server connection strings to connect your applications and BI or data integration tools
to databases that have external tables. Make sure that SQL Server is supported as a data source for your tool.
Once connected, refer to the elastic query database and the external tables in that database just like you would
do with any other SQL Server database that you connect to with your tool.

IMPORTANT
Elastic queries are only supported when connecting with SQL Server Authentication.

Cost
Elastic query is included in the cost of Azure SQL Database. Note that topologies where your remote databases are in a different data center than the elastic query endpoint are supported, but data egress from remote databases is charged at regular Azure rates.

Preview limitations
Running your first elastic query can take up to a few minutes on smaller resources in the Standard and General Purpose service tiers. This time is necessary to load the elastic query functionality; loading performance improves with higher service tiers and compute sizes.
Scripting of external data sources or external tables from SSMS or SSDT is not yet supported.
Import/Export for SQL Database does not yet support external data sources and external tables. If you need
to use Import/Export, drop these objects before exporting and then re-create them after importing.
Elastic query currently only supports read-only access to external tables. You can, however, use full Transact-SQL functionality on the database where the external table is defined. This can be useful, for example, to persist temporary results using SELECT <column_list> INTO <local_table>, or to define stored procedures on the elastic query database that refer to external tables.
Except for nvarchar(max), LOB types (including spatial types) are not supported in external table definitions.
As a workaround, you can create a view on the remote database that casts the LOB type into nvarchar(max),
define your external table over the view instead of the base table and then cast it back into the original LOB
type in your queries.
Columns of the nvarchar(max) data type in the result set disable the advanced batching techniques used in the elastic query implementation and may degrade query performance by an order of magnitude, or even two orders of magnitude in non-canonical use cases where a large amount of non-aggregated data is transferred as a result of the query.
Column statistics over external tables are currently not supported. Table statistics are supported, but need to
be created manually.
Cursors are not supported for external tables in Azure SQL Database.
Elastic query works with Azure SQL Database only. You cannot use it for querying a SQL Server instance.

Share your Feedback


Share feedback on your experience with elastic queries with us on the MSDN forums or on Stack Overflow. We are interested in all kinds of feedback about the service (defects, rough edges, feature gaps).

Next steps
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data.
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal partitioning (sharding).
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data.
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
Building scalable cloud databases

APPLIES TO: Azure SQL Database


Scaling out databases can be easily accomplished using scalable tools and features for Azure SQL Database. In
particular, you can use the Elastic Database client library to create and manage scaled-out databases. This
feature lets you easily develop sharded applications using hundreds—or even thousands—of databases in
Azure SQL Database.
To download:
The Java version of the library, see Maven Central Repository.
The .NET version of the library, see NuGet.

Documentation
1. Get started with Elastic Database tools
2. Elastic Database features
3. Shard map management
4. Migrate existing databases to scale out
5. Data dependent routing
6. Multi-shard queries
7. Adding a shard using Elastic Database tools
8. Multi-tenant applications with Elastic Database tools and row-level security
9. Upgrade client library apps
10. Elastic queries overview
11. Elastic Database tools glossary
12. Elastic Database client library with Entity Framework
13. Elastic Database client library with Dapper
14. Split-merge tool
15. Performance counters for shard map manager
16. FAQ for Elastic Database tools

Client capabilities
Scaling out applications using sharding presents challenges for both the developer and the administrator. The client library simplifies the management tasks by providing tools that let both developers and administrators manage scaled-out databases. In a typical example, there are many databases, known as "shards," to manage. Customers are either co-located in the same database, or there is one database per customer (a single-tenant scheme). The client library includes these features:
Shard map management: A special database called the "shard map manager" is created. Shard map
management is the ability for an application to manage metadata about its shards. Developers can use
this functionality to register databases as shards, describe mappings of individual sharding keys or key
ranges to those databases, and maintain this metadata as the number and composition of databases
evolves to reflect capacity changes. Without the Elastic Database client library, you would need to spend a
lot of time writing the management code when implementing sharding. For details, see Shard map
management.
Data dependent routing: Imagine a request coming into the application. Based on the sharding key
value of the request, the application needs to determine the correct database based on the key value. It
then opens a connection to the database to process the request. Data dependent routing provides the
ability to open connections with a single easy call into the shard map of the application. Data dependent
routing was another area of infrastructure code that is now covered by functionality in the Elastic
Database client library. For details, see Data dependent routing.
Multi-shard queries (MSQ): Multi-shard querying works when a request involves several (or all)
shards. A multi-shard query executes the same T-SQL code on all shards or a set of shards. The results
from the participating shards are merged into an overall result set using UNION ALL semantics. The
functionality as exposed through the client library handles many tasks, including: connection
management, thread management, fault handling, and intermediate results processing. MSQ can query
up to hundreds of shards. For details, see Multi-shard querying.
In general, customers using Elastic Database tools can expect to get full T-SQL functionality when submitting
shard-local operations as opposed to cross-shard operations that have their own semantics.

Next steps
Elastic Database client library (Java, .NET) - to download the library.
Get started with Elastic Database tools - to try the sample app that demonstrates client functions.
GitHub (Java, .NET) - to make contributions to the code.
Azure SQL Database elastic query overview - to use elastic queries.
Moving data between scaled-out cloud databases - for instructions on using the split-merge tool.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Scale out databases with the shard map manager

APPLIES TO: Azure SQL Database


To easily scale out databases on Azure SQL Database, use a shard map manager. The shard map manager is a
special database that maintains global mapping information about all shards (databases) in a shard set. The
metadata allows an application to connect to the correct database based upon the value of the sharding key. In addition, every shard in the set contains maps that track the local shard data (known as shardlets).

Understanding how these maps are constructed is essential to shard map management. This is done using the
ShardMapManager class (Java, .NET), found in the Elastic Database client library to manage shard maps.

Shard maps and shard mappings


For each shard, you must select the type of shard map to create. The choice depends on the database
architecture:
1. Single tenant per database
2. Multiple tenants per database (two types):
a. List mapping
b. Range mapping
For a single-tenant model, create a list-mapping shard map. The single-tenant model assigns one database per
tenant. This is an effective model for SaaS developers as it simplifies shard map management.
The multi-tenant model assigns several tenants to an individual database (and you can distribute groups of
tenants across multiple databases). Use this model when you expect each tenant to have small data needs. In
this model, assign a range of tenants to a database using range mapping.

Or you can implement a multi-tenant database model using a list mapping to assign multiple tenants to an
individual database. For example, DB1 is used to store information about tenant ID 1 and 5, and DB2 stores data
for tenant 7 and tenant 10.

Supported types for sharding keys


Elastic Scale supports the following types as sharding keys:

.NET              Java
integer           integer
long              long
guid              uuid
byte[]            byte[]
datetime          timestamp
timespan          duration
datetimeoffset    offsetdatetime

List and range shard maps


Shard maps can be constructed using lists of individual sharding key values, or they can be constructed using ranges of sharding key values.
List shard maps
Shards contain shardlets and the mapping of shardlets to shards is maintained by a shard map. A list shard
map is an association between the individual key values that identify the shardlets and the databases that serve
as shards. List mappings are explicit and different key values can be mapped to the same database. For
example, key value 1 maps to Database A, and key values 3 and 6 both map to Database B.

Key      Shard location
1        Database_A
3        Database_B
4        Database_C
6        Database_B
...      ...

Range shard maps


In a range shard map, the key range is described by a pair [Low Value, High Value) where the Low Value is
the minimum key in the range, and the High Value is the first value higher than the range.
For example, [0, 100) includes all integers greater than or equal 0 and less than 100. Note that multiple ranges
can point to the same database, and disjoint ranges are supported (for example, [100,200) and [400,600) both
point to Database C in the following example.)

Key           Shard location
[1,50)        Database_A
[50,100)      Database_B
[100,200)     Database_C
[400,600)     Database_C
...           ...

Each of the tables shown above is a conceptual example of a ShardMap object. Each row is a simplified
example of an individual PointMapping (for the list shard map) or RangeMapping (for the range shard map)
object.

Shard map manager


In the client library, the shard map manager is a collection of shard maps. The data managed by a
ShardMapManager instance is kept in three places:
1. Global Shard Map (GSM): You specify a database to serve as the repository for all of its shard maps and
mappings. Special tables and stored procedures are automatically created to manage the information. This is
typically a small database and lightly accessed, and it should not be used for other needs of the application.
The tables are in a special schema named __ShardManagement .
2. Local Shard Map (LSM): Every database that you specify to be a shard is modified to contain several small
tables and special stored procedures that contain and manage shard map information specific to that shard.
This information is redundant with the information in the GSM, and it allows the application to validate
cached shard map information without placing any load on the GSM; the application uses the LSM to
determine if a cached mapping is still valid. The tables corresponding to the LSM on each shard are also in
the schema __ShardManagement .
3. Application cache: Each application instance accessing a ShardMapManager object maintains a local in-
memory cache of its mappings. It stores routing information that has recently been retrieved.

Constructing a ShardMapManager
A ShardMapManager object is constructed using a factory (Java, .NET) pattern. The ShardMapManagerFactory.GetSqlShardMapManager (Java, .NET) method takes credentials (including the server name and database name holding the GSM) in the form of a ConnectionString and returns an instance of a ShardMapManager.
Please Note: The ShardMapManager should be instantiated only once per app domain, within the
initialization code for an application. Creation of additional instances of ShardMapManager in the same app
domain results in increased memory and CPU utilization of the application. A ShardMapManager can contain
any number of shard maps. While a single shard map may be sufficient for many applications, there are times
when different sets of databases are used for different schema or for unique purposes; in those cases multiple
shard maps may be preferable.
In this code, an application tries to open an existing ShardMapManager with the TryGetSqlShardMapManager
(Java, .NET method. If objects representing a Global ShardMapManager (GSM) do not yet exist inside the
database, the client library creates them using the CreateSqlShardMapManager (Java, .NET) method.
// Try to get a reference to the Shard Map Manager in the shardMapManager database.
// If it doesn't already exist, then create it.
ShardMapManager shardMapManager = null;
ReferenceObjectHelper<ShardMapManager> refShardMapManager = new ReferenceObjectHelper<>(null);
boolean shardMapManagerExists = ShardMapManagerFactory.tryGetSqlShardMapManager(
    shardMapManagerConnectionString, ShardMapManagerLoadPolicy.Lazy, refShardMapManager);
shardMapManager = refShardMapManager.argValue;

if (shardMapManagerExists) {
    ConsoleUtils.writeInfo("Shard Map %s already exists", shardMapManager);
}
else {
    // The Shard Map Manager does not exist, so create it
    shardMapManager = ShardMapManagerFactory.createSqlShardMapManager(shardMapManagerConnectionString);
    ConsoleUtils.writeInfo("Created Shard Map %s", shardMapManager);
}

// Try to get a reference to the Shard Map Manager via the Shard Map Manager database.
// If it doesn't already exist, then create it.
ShardMapManager shardMapManager;
bool shardMapManagerExists = ShardMapManagerFactory.TryGetSqlShardMapManager(
    connectionString,
    ShardMapManagerLoadPolicy.Lazy,
    out shardMapManager);

if (shardMapManagerExists)
{
    Console.WriteLine("Shard Map Manager already exists");
}
else
{
    // Create the Shard Map Manager.
    ShardMapManagerFactory.CreateSqlShardMapManager(connectionString);
    Console.WriteLine("Created SqlShardMapManager");

    shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(
        connectionString,
        ShardMapManagerLoadPolicy.Lazy);

    // The connectionString contains server name, database name, and admin credentials
    // for privileges on both the GSM and the shards themselves.
}

For the .NET version, you can use PowerShell to create a new Shard Map Manager. An example is available here.

Get a RangeShardMap or ListShardMap


After creating a shard map manager, you can get the RangeShardMap (Java, .NET) or ListShardMap (Java, .NET)
using the TryGetRangeShardMap (Java, .NET), the TryGetListShardMap (Java, .NET), or the GetShardMap (Java,
.NET) method.
// Creates a new Range Shard Map with the specified name, or gets the Range Shard Map if it already exists.
static <T> RangeShardMap<T> createOrGetRangeShardMap(ShardMapManager shardMapManager,
        String shardMapName,
        ShardKeyType keyType) {
    // Try to get a reference to the Shard Map.
    ReferenceObjectHelper<RangeShardMap<T>> refRangeShardMap = new ReferenceObjectHelper<>(null);
    boolean isGetSuccess = shardMapManager.tryGetRangeShardMap(shardMapName, keyType, refRangeShardMap);
    RangeShardMap<T> shardMap = refRangeShardMap.argValue;

    if (isGetSuccess && shardMap != null) {
        ConsoleUtils.writeInfo("Shard Map %1$s already exists", shardMap.getName());
    }
    else {
        // The Shard Map does not exist, so create it
        try {
            shardMap = shardMapManager.createRangeShardMap(shardMapName, keyType);
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        ConsoleUtils.writeInfo("Created Shard Map %1$s", shardMap.getName());
    }

    return shardMap;
}

// Creates a new Range Shard Map with the specified name, or gets the Range Shard Map if it already exists.
public static RangeShardMap<T> CreateOrGetRangeShardMap<T>(ShardMapManager shardMapManager, string shardMapName)
{
    // Try to get a reference to the Shard Map.
    RangeShardMap<T> shardMap;
    bool shardMapExists = shardMapManager.TryGetRangeShardMap(shardMapName, out shardMap);

    if (shardMapExists)
    {
        ConsoleUtils.WriteInfo("Shard Map {0} already exists", shardMap.Name);
    }
    else
    {
        // The Shard Map does not exist, so create it
        shardMap = shardMapManager.CreateRangeShardMap<T>(shardMapName);
        ConsoleUtils.WriteInfo("Created Shard Map {0}", shardMap.Name);
    }

    return shardMap;
}

Shard map administration credentials


Applications that administer and manipulate shard maps are different from those that use the shard maps to
route connections.
To administer shard maps (add or change shards, shard maps, shard mappings, etc.), you must instantiate the
ShardMapManager using credentials that have read/write privileges on both the GSM database and
on each database that serves as a shard. The credentials must allow for writes against the tables in both
the GSM and LSM as shard map information is entered or changed, as well as for creating LSM tables on new
shards.
See Credentials used to access the Elastic Database client library.
Only metadata affected
Methods used for populating or changing the ShardMapManager data do not alter the user data stored in the
shards themselves. For example, methods such as CreateShard, DeleteShard, UpdateMapping, etc. affect
the shard map metadata only. They do not remove, add, or alter user data contained in the shards. Instead, these
methods are designed to be used in conjunction with separate operations you perform to create or remove
actual databases, or that move rows from one shard to another to rebalance a sharded environment. (The split-
merge tool included with elastic database tools makes use of these APIs along with orchestrating actual data
movement between shards.) See Scaling using the Elastic Database split-merge tool.

Data dependent routing


The shard map manager is used in applications that require database connections to perform the app-specific
data operations. Those connections must be associated with the correct database. This is known as Data
Dependent Routing. For these applications, instantiate a shard map manager object from the factory using
credentials that have read-only access on the GSM database. Individual requests for later connections supply
credentials necessary for connecting to the appropriate shard database.
Note that these applications (using ShardMapManager opened with read-only credentials) cannot make
changes to the maps or mappings. For those needs, create administrative-specific applications or PowerShell
scripts that supply higher-privileged credentials as discussed earlier. See Credentials used to access the Elastic
Database client library.
For more information, see Data dependent routing.

Modifying a shard map


A shard map can be changed in different ways. All of the following methods modify the metadata describing the
shards and their mappings, but they do not physically modify data within the shards, nor do they create or
delete the actual databases. Some of the operations on the shard map described below may need to be
coordinated with administrative actions that physically move data or that add and remove databases serving as
shards.
These methods work together as the building blocks available for modifying the overall distribution of data in
your sharded database environment.
To add or remove shards: use CreateShard (Java, .NET) and DeleteShard (Java, .NET) of the ShardMap
(Java, .NET) class.
The server and database representing the target shard must already exist for these operations to execute.
These methods do not have any impact on the databases themselves, only on metadata in the shard map.
To create or remove points or ranges that are mapped to the shards: use CreateRangeMapping (Java,
.NET), DeleteMapping (Java, .NET) of the RangeShardMap (Java, .NET) class, and
CreatePointMapping (Java, .NET) of the ListShardMap (Java, .NET) class.
Many different points or ranges can be mapped to the same shard. These methods only affect metadata -
they do not affect any data that may already be present in shards. If data needs to be removed from the
database in order to be consistent with DeleteMapping operations, you perform those operations
separately but in conjunction with using these methods.
To split existing ranges into two, or merge adjacent ranges into one: use SplitMapping (Java, .NET) and
MergeMappings (Java, .NET).
Note that split and merge operations do not change the shard to which key values are mapped . A
split breaks an existing range into two parts, but leaves both as mapped to the same shard. A merge
operates on two adjacent ranges that are already mapped to the same shard, coalescing them into a
single range. The movement of points or ranges themselves between shards needs to be coordinated by
using UpdateMapping in conjunction with actual data movement. You can use the Split/Merge service
that is part of elastic database tools to coordinate shard map changes with data movement, when
movement is needed.
To re-map (or move) individual points or ranges to different shards: use UpdateMapping (Java, .NET).
Since data may need to be moved from one shard to another in order to be consistent with
UpdateMapping operations, you need to perform that movement separately but in conjunction with
using these methods.
To take mappings online and offline: use MarkMappingOffline (Java, .NET) and MarkMappingOnline
(Java, .NET) to control the online state of a mapping.
Certain operations on shard mappings are only allowed when a mapping is in an “offline” state, including
UpdateMapping and DeleteMapping . When a mapping is offline, a data-dependent request based on
a key included in that mapping returns an error. In addition, when a range is first taken offline, all
connections to the affected shard are automatically killed in order to prevent inconsistent or incomplete
results for queries directed against ranges being changed.
Mappings are immutable objects in .NET. All of the methods above that change mappings also invalidate any
references to them in your code. To make it easier to perform sequences of operations that change a mapping’s
state, all of the methods that change a mapping return a new mapping reference, so operations can be chained.
For example, to delete an existing mapping in the shard map sm that contains the key 25, you can execute the
following:

sm.DeleteMapping(sm.MarkMappingOffline(sm.GetMappingForKey(25)));
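
Similarly, the following sketch (not part of the original article; the shardMap, targetShard, and key values are illustrative assumptions) re-maps the range containing key 25 to a different shard. Remember that this changes metadata only; moving the underlying rows must be coordinated separately, for example with the split-merge service.

// A minimal sketch, assuming shardMap is a RangeShardMap<int> and targetShard is an
// existing Shard registered in the same shard map.
RangeMapping<int> mapping = shardMap.GetMappingForKey(25);

// The mapping must be offline before it can be updated.
RangeMapping<int> offlineMapping = shardMap.MarkMappingOffline(mapping);

// Point the mapping at the new shard (metadata only; no data is moved).
RangeMappingUpdate update = new RangeMappingUpdate { Shard = targetShard };
RangeMapping<int> movedMapping = shardMap.UpdateMapping(offlineMapping, update);

// Bring the mapping back online once the data movement has completed.
shardMap.MarkMappingOnline(movedMapping);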

Adding a shard
Applications often need to add new shards to handle data that is expected from new keys or key ranges, for a
shard map that already exists. For example, an application sharded by Tenant ID may need to provision a new
shard for a new tenant, or data sharded monthly may need a new shard provisioned before the start of each
new month.
If the new range of key values is not already part of an existing mapping and no data movement is necessary, it
is simple to add the new shard and associate the new key or range to that shard. For details on adding new
shards, see Adding a new shard.
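As a rough sketch of that simple case (not part of the original article; the server name, database name, and key range below are illustrative assumptions), the new shard is registered and a previously unmapped range is associated with it:

// A minimal sketch, assuming shardMap is an existing RangeShardMap<int> and the target
// database has already been created on the server.
ShardLocation newShardLocation = new ShardLocation("<yourserver>.database.windows.net", "Tenant_Shard_2");

Shard newShard;
if (!shardMap.TryGetShard(newShardLocation, out newShard))
{
    // Registers the shard in the shard map metadata only.
    newShard = shardMap.CreateShard(newShardLocation);
}

// Associate a new, previously unmapped key range with the new shard.
RangeMapping<int> newMapping = shardMap.CreateRangeMapping(new Range<int>(200, 300), newShard);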
For scenarios that require data movement, however, the split-merge tool is needed to orchestrate the data
movement between shards in combination with the necessary shard map updates. For details on using the split-
merge tool, see Overview of split-merge.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Use data-dependent routing to route a query to an
appropriate database
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database


Data-dependent routing is the ability to use the data in a query to route the request to an appropriate
database. Data-dependent routing is a fundamental pattern when working with sharded databases. The request
context may also be used to route the request, especially if the sharding key is not part of the query. Each
specific query or transaction in an application using data-dependent routing is restricted to accessing one
database per request. For the Azure SQL Database elastic tools, this routing is accomplished with the
ShardMapManager (Java, .NET) class.
The application does not need to track various connection strings or DB locations associated with different slices
of data in the sharded environment. Instead, the Shard Map Manager opens connections to the correct
databases when needed, based on the data in the shard map and the value of the sharding key that is the target
of the application’s request. The key is typically the customer_id, tenant_id, date_key, or some other specific
identifier that is a fundamental parameter of the database request.
For more information, see Scaling Out SQL Server with Data-Dependent Routing.

Download the client library


To download:
The Java version of the library, see Maven Central Repository.
The .NET version of the library, see NuGet.

Using a ShardMapManager in a data-dependent routing application


Applications should instantiate the ShardMapManager during initialization, using the factory call
GetSqlShardMapManager (Java, .NET). In this example, both a ShardMapManager and a specific
ShardMap that it contains are initialized. This example shows the GetSqlShardMapManager and
GetRangeShardMap (Java, .NET) methods.

ShardMapManager smm = ShardMapManagerFactory.getSqlShardMapManager(connectionString,
        ShardMapManagerLoadPolicy.Lazy);
RangeShardMap<Integer> rangeShardMap = smm.getRangeShardMap(Configuration.getRangeShardMapName(),
        ShardKeyType.Int32);

ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(smmConnectionString,
    ShardMapManagerLoadPolicy.Lazy);
RangeShardMap<int> customerShardMap = smm.GetRangeShardMap<int>("customerMap");

Use lowest privilege credentials possible for getting the shard map
If an application is not manipulating the shard map itself, the credentials used in the factory method should have
read-only permissions on the Global Shard Map database. These credentials are typically different from
credentials used to open connections to the shard map manager. See also Credentials used to access the Elastic
Database client library.
Call the OpenConnectionForKey method
The ShardMap.OpenConnectionForKey method (Java, .NET) returns a connection ready for issuing
commands to the appropriate database based on the value of the key parameter. Shard information is cached in
the application by the ShardMapManager , so these requests do not typically involve a database lookup
against the Global Shard Map database.

// Syntax:
public Connection openConnectionForKey(Object key, String connectionString, ConnectionOptions options)

// Syntax:
public SqlConnection OpenConnectionForKey<TKey>(TKey key, string connectionString, ConnectionOptions options)

The key parameter is used as a lookup key into the shard map to determine the appropriate database for the
request.
The connectionString is used to pass only the user credentials for the desired connection. No database
name or server name is included in this connectionString since the method determines the database and
server using the ShardMap.
The connectionOptions (Java, .NET) should be set to ConnectionOptions.Validate in an environment
where shard maps may change and rows may move to other databases as a result of split or merge
operations. This validation involves a brief query to the local shard map on the target database (not to the
global shard map) before the connection is delivered to the application.
If the validation against the local shard map fails (indicating that the cache is incorrect), the Shard Map Manager
queries the global shard map to obtain the new correct value for the lookup, updates the cache, and obtains and
returns the appropriate database connection.
Use ConnectionOptions.None only when shard mapping changes are not expected while an application is
online. In that case, the cached values can be assumed to always be correct, and the extra round-trip validation
call to the target database can be safely skipped. That reduces database traffic. The connectionOptions may
also be set via a value in a configuration file to indicate whether sharding changes are expected or not during a
period of time.
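For example, here is a minimal sketch (not part of the original article; the setting name ShardMapChangesExpected is an assumption) of driving the option from an application setting:

// A minimal sketch using System.Configuration. The app setting name is an illustrative
// assumption; default to Validate when the setting is absent.
string settingValue = ConfigurationManager.AppSettings["ShardMapChangesExpected"];
bool shardMapChangesExpected = settingValue == null || bool.Parse(settingValue);

ConnectionOptions connectionOptions = shardMapChangesExpected
    ? ConnectionOptions.Validate
    : ConnectionOptions.None;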
This example uses the value of an integer key CustomerID, using a ShardMap object named
customerShardMap.

int customerId = 12345;
int newPersonId = 4321;

// Looks up the key in the shard map and opens a connection to the shard
try (Connection conn = shardMap.openConnectionForKey(customerId,
        Configuration.getCredentialsConnectionString())) {
    // Create a simple command that will update the customer information
    PreparedStatement ps = conn.prepareStatement(
        "UPDATE Sales.Customer SET PersonID = ? WHERE CustomerID = ?");

    ps.setInt(1, newPersonId);
    ps.setInt(2, customerId);
    ps.executeUpdate();
} catch (SQLException e) {
    e.printStackTrace();
}
int customerId = 12345;
int newPersonId = 4321;

// Connect to the shard for that customer ID. No need to call a SqlConnection
// constructor followed by the Open method.
using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId,
    Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate))
{
    // Execute a simple command.
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText = @"UPDATE Sales.Customer
        SET PersonID = @newPersonID WHERE CustomerID = @customerID";

    cmd.Parameters.AddWithValue("@customerID", customerId);
    cmd.Parameters.AddWithValue("@newPersonID", newPersonId);
    cmd.ExecuteNonQuery();
}

The OpenConnectionForKey method returns a new already-open connection to the correct database.
Connections utilized in this way still take full advantage of connection pooling.
The OpenConnectionForKeyAsync method (Java, .NET) is also available if your application makes use of
asynchronous programming.
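As a rough sketch (not part of the original article, reusing the customerShardMap and credentials pattern from the preceding example), the asynchronous variant can be used inside an async method:

// A minimal sketch; must run inside an async method.
using (SqlConnection conn = await customerShardMap.OpenConnectionForKeyAsync(customerId,
    Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate))
{
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText = "SELECT PersonID FROM Sales.Customer WHERE CustomerID = @customerID";
    cmd.Parameters.AddWithValue("@customerID", customerId);

    object personId = await cmd.ExecuteScalarAsync();
}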

Integrating with transient fault handling


A best practice in developing data access applications in the cloud is to ensure that transient faults are caught by
the app, and that the operations are retried several times before throwing an error. Transient fault handling for
cloud applications is discussed at Transient Fault Handling (Java, .NET).
Transient fault handling can coexist naturally with the Data-Dependent Routing pattern. The key requirement is
to retry the entire data access request including the using block that obtained the data-dependent routing
connection. The preceding example could be rewritten as follows.
Example - data-dependent routing with transient fault handling

int customerId = 12345;
int newPersonId = 4321;

try {
    SqlDatabaseUtils.getSqlRetryPolicy().executeAction(() -> {
        // Looks up the key in the shard map and opens a connection to the shard
        try (Connection conn = shardMap.openConnectionForKey(customerId,
                Configuration.getCredentialsConnectionString())) {
            // Create a simple command that will update the customer information
            PreparedStatement ps = conn.prepareStatement(
                "UPDATE Sales.Customer SET PersonID = ? WHERE CustomerID = ?");

            ps.setInt(1, newPersonId);
            ps.setInt(2, customerId);
            ps.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    });
} catch (Exception e) {
    throw new StoreException(e.getMessage(), e);
}
int customerId = 12345;
int newPersonId = 4321;

Configuration.SqlRetryPolicy.ExecuteAction(() =>
{
    // Connect to the shard for a customer ID.
    using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId,
        Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate))
    {
        // Execute a simple command
        SqlCommand cmd = conn.CreateCommand();

        cmd.CommandText = @"UPDATE Sales.Customer
            SET PersonID = @newPersonID
            WHERE CustomerID = @customerID";

        cmd.Parameters.AddWithValue("@customerID", customerId);
        cmd.Parameters.AddWithValue("@newPersonID", newPersonId);
        cmd.ExecuteNonQuery();

        Console.WriteLine("Update completed");
    }
});

Packages necessary to implement transient fault handling are downloaded automatically when you build the
elastic database sample application.

Transactional consistency
Transactional properties are guaranteed for all operations local to a shard. For example, transactions submitted
through data-dependent routing execute within the scope of the target shard for the connection. At this time,
there are no capabilities provided for enlisting multiple connections into a transaction, and therefore there are
no transactional guarantees for operations performed across shards.
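For example, here is a minimal sketch (not part of the original article, reusing the customerShardMap pattern above) of a transaction whose scope is the single shard resolved for the key:

// A minimal sketch: every statement in this transaction runs against the one shard
// that the shard map resolves for customerId.
using (SqlConnection conn = customerShardMap.OpenConnectionForKey(customerId,
    Configuration.GetCredentialsConnectionString(), ConnectionOptions.Validate))
using (SqlTransaction tran = conn.BeginTransaction())
{
    SqlCommand cmd = conn.CreateCommand();
    cmd.Transaction = tran;
    cmd.CommandText = "UPDATE Sales.Customer SET PersonID = @newPersonID WHERE CustomerID = @customerID";
    cmd.Parameters.AddWithValue("@customerID", customerId);
    cmd.Parameters.AddWithValue("@newPersonID", newPersonId);
    cmd.ExecuteNonQuery();

    // Commit or rollback is atomic within this shard only.
    tran.Commit();
}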

Next steps
To detach a shard, or to reattach a shard, see Using the RecoveryManager class to fix shard map problems.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Credentials used to access the Elastic Database
client library
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


The Elastic Database client library uses three different kinds of credentials to access the shard map manager.
Depending on the need, use the credential with the lowest level of access possible.
Management credentials: for creating or manipulating a shard map manager. (See the glossary.)
Access credentials: to access an existing shard map manager to obtain information about shards.
Connection credentials: to connect to shards.
See also Managing databases and logins in Azure SQL Database.

About management credentials


Management credentials are used to create a ShardMapManager (Java, .NET) object for applications that
manipulate shard maps. (For example, see Adding a shard using Elastic Database tools and data-dependent
routing). The user of the elastic scale client library creates the SQL users and SQL logins and makes sure each is
granted the read/write permissions on the global shard map database and all shard databases as well. These
credentials are used to maintain the global shard map and the local shard maps when changes to the shard map
are performed. For instance, use the management credentials to create the shard map manager object (using
GetSqlShardMapManager (Java, .NET)):

// Obtain a shard map manager.
ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(
    smmAdminConnectionString,
    ShardMapManagerLoadPolicy.Lazy);

The variable smmAdminConnectionString is a connection string that contains the management credentials.
The user ID and password provide read/write access to both shard map database and individual shards. The
management connection string also includes the server name and database name to identify the global shard
map database. Here is a typical connection string for that purpose:

"Server=<yourserver>.database.windows.net;Database=<yourdatabase>;User ID=<yourmgmtusername>;Password=
<yourmgmtpassword>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;”

Do not use values in the form of "username@server"—instead just use the "username" value. This is because
credentials must work against both the shard map manager database and individual shards, which may be on
different servers.

Access credentials
When creating a shard map manager in an application that does not administer shard maps, use credentials that
have read-only permissions on the global shard map. The information retrieved from the global shard map
under these credentials is used for data-dependent routing and to populate the shard map cache on the client.
The credentials are provided through the same call pattern to GetSqlShardMapManager :
// Obtain shard map manager.
ShardMapManager shardMapManager = ShardMapManagerFactory.GetSqlShardMapManager(
    smmReadOnlyConnectionString,
    ShardMapManagerLoadPolicy.Lazy);

Note the use of the smmReadOnlyConnectionString to reflect the use of different credentials for this access
on behalf of non-admin users: these credentials should not provide write permissions on the global shard map.

Connection credentials
Additional credentials are needed when using the OpenConnectionForKey (Java, .NET) method to access a
shard associated with a sharding key. These credentials need to provide permissions for read-only access to the
local shard map tables residing on the shard. This is needed to perform connection validation for data-
dependent routing on the shard. This code snippet allows data access in the context of data-dependent routing:

using (SqlConnection conn = rangeMap.OpenConnectionForKey<int>(targetWarehouse, smmUserConnectionString,
    ConnectionOptions.Validate))

In this example, smmUserConnectionString holds the connection string for the user credentials. For Azure
SQL Database, here is a typical connection string for user credentials:

"User ID=<yourusername>; Password=<youruserpassword>; Trusted_Connection=False; Encrypt=True; Connection


Timeout=30;”

As with the admin credentials, do not use values in the form of "username@server". Instead, just use
"username". Also note that the connection string does not contain a server name and database name. That is
because the OpenConnectionForKey call automatically directs the connection to the correct shard based on
the key. Hence, the database name and server name are not provided.

See also
Managing databases and logins in Azure SQL Database
Securing your SQL Database
Elastic Database jobs

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Moving data between scaled-out cloud databases
7/12/2022 • 18 minutes to read

APPLIES TO: Azure SQL Database


If you are a Software as a Service developer, and suddenly your app undergoes tremendous demand, you need
to accommodate the growth. So you add more databases (shards). How do you redistribute the data to the new
databases without disrupting the data integrity? Use the split-merge tool to move data from constrained
databases to the new databases.
The split-merge tool runs as an Azure web service. An administrator or developer uses the tool to move
shardlets (data from a shard) between different databases (shards). The tool uses shard map management to
maintain the service metadata database, and ensure consistent mappings.

Download
Microsoft.Azure.SqlDatabase.ElasticScale.Service.SplitMerge

Documentation
1. Elastic database split-merge tool tutorial
2. Split-merge security configuration
3. Split-merge security considerations
4. Shard map management
5. Migrate existing databases to scale-out
6. Elastic database tools
7. Elastic database tools glossary

Why use the split-merge tool


Flexibility
Applications need to stretch flexibly beyond the limits of a single database in Azure SQL Database. Use
the tool to move data as needed to new databases while retaining integrity.
Split to grow
To increase overall capacity to handle explosive growth, create additional capacity by sharding the data
and by distributing it across incrementally more databases until capacity needs are fulfilled. This is a
prime example of the split feature.
Merge to shrink
Capacity needs shrink due to the seasonal nature of a business. The tool lets you scale down to fewer
scale units when business slows. The ‘merge’ feature in the Elastic Scale Split-Merge Service covers this
requirement.
Manage hotspots by moving shardlets
With multiple tenants per database, the allocation of shardlets to shards can lead to capacity bottlenecks
on some shards. This requires re-allocating shardlets or moving busy shardlets to new or less utilized
shards.

Concepts & key features


Customer-hosted services
The split-merge is delivered as a customer-hosted service. You must deploy and host the service in your
Microsoft Azure subscription. The package you download from NuGet contains a configuration template
to complete with the information for your specific deployment. See the split-merge tutorial for details.
Since the service runs in your Azure subscription, you can control and configure most security aspects of
the service. The default template includes the options to configure TLS, certificate-based client
authentication, encryption for stored credentials, DoS guarding and IP restrictions. You can find more
information on the security aspects in the following document split-merge security configuration.
The default deployed service runs with one worker and one web role. Each uses the A1 VM size in Azure
Cloud Services. While you cannot modify these settings when deploying the package, you could change
them after a successful deployment in the running cloud service, (through the Azure portal). Note that
the worker role must not be configured for more than a single instance for technical reasons.
Shard map integration
The split-merge service interacts with the shard map of the application. When using the split-merge
service to split or merge ranges or to move shardlets between shards, the service automatically keeps the
shard map up-to-date. To do so, the service connects to the shard map manager database of the
application and maintains ranges and mappings as split/merge/move requests progress. This ensures
that the shard map always presents an up-to-date view when split-merge operations are going on. Split,
merge and shardlet movement operations are implemented by moving a batch of shardlets from the
source shard to the target shard. During the shardlet movement operation the shardlets subject to the
current batch are marked as offline in the shard map and are unavailable for data-dependent routing
connections using the OpenConnectionForKey API.
Consistent shardlet connections
When data movement starts for a new batch of shardlets, any shard-map provided data-dependent
routing connections to the shard storing the shardlet are killed and subsequent connections from the
shard map APIs to the shardlets are blocked while the data movement is in progress in order to avoid
inconsistencies. Connections to other shardlets on the same shard will also get killed, but will succeed
again immediately on retry. Once the batch is moved, the shardlets are marked online again for the target
shard and the source data is removed from the source shard. The service goes through these steps for
every batch until all shardlets have been moved. This will lead to several connection kill operations
during the course of the complete split/merge/move operation.
Managing shardlet availability
Limiting the connection killing to the current batch of shardlets as discussed above restricts the scope of
unavailability to one batch of shardlets at a time. This is preferred over an approach where the complete
shard would remain offline for all its shardlets during the course of a split or merge operation. The size of
a batch, defined as the number of distinct shardlets to move at a time, is a configuration parameter. It can
be defined for each split and merge operation depending on the application’s availability and
performance needs. Note that the range that is being locked in the shard map may be larger than the
batch size specified. This is because the service picks the range size such that the actual number of
sharding key values in the data approximately matches the batch size. This is important to remember in
particular for sparsely populated sharding keys.
Metadata storage
The split-merge service uses a database to maintain its status and to keep logs during request processing.
The user creates this database in their subscription and provides the connection string for it in the
configuration file for the service deployment. Administrators from the user’s organization can also
connect to this database to review request progress and to investigate detailed information regarding
potential failures.
Sharding-awareness
The split-merge service differentiates between (1) sharded tables, (2) reference tables, and (3) normal
tables. The semantics of a split/merge/move operation depend on the type of the table used and are
defined as follows:
Sharded tables
Split, merge, and move operations move shardlets from source to target shard. After successful
completion of the overall request, those shardlets are no longer present on the source. Note that
the target tables need to exist on the target shard and must not contain data in the target range
prior to processing of the operation.
Reference tables
For reference tables, the split, merge and move operations copy the data from the source to the
target shard. Note, however, that no changes occur on the target shard for a given table if any row
is already present in this table on the target. The table has to be empty for any reference table copy
operation to get processed.
Other tables
Other tables can be present on either the source or the target of a split and merge operation. The
split-merge service disregards these tables for any data movement or copy operations. Note,
however, that they can interfere with these operations in case of constraints.
The information on reference vs. sharded tables is provided by the SchemaInfo APIs on the shard
map. The following example illustrates the use of these APIs on a given shard map manager object:
// Create the schema annotations
SchemaInfo schemaInfo = new SchemaInfo();

// reference tables
schemaInfo.Add(new ReferenceTableInfo("dbo", "region"));
schemaInfo.Add(new ReferenceTableInfo("dbo", "nation"));

// sharded tables
schemaInfo.Add(new ShardedTableInfo("dbo", "customer", "C_CUSTKEY"));
schemaInfo.Add(new ShardedTableInfo("dbo", "orders", "O_CUSTKEY"));

// publish
smm.GetSchemaInfoCollection().Add(Configuration.ShardMapName, schemaInfo);

The tables ‘region’ and ‘nation’ are defined as reference tables and will be copied with
split/merge/move operations. ‘customer’ and ‘orders’ in turn are defined as sharded tables.
C_CUSTKEY and O_CUSTKEY serve as the sharding key.

Referential integrity
The split-merge service analyzes dependencies between tables and uses foreign key-primary key
relationships to stage the operations for moving reference tables and shardlets. In general, reference
tables are copied first in dependency order, then shardlets are copied in order of their dependencies
within each batch. This is necessary so that FK-PK constraints on the target shard are honored as the new
data arrives.
Shard map consistency and eventual completion
In the presence of failures, the split-merge service resumes operations after any outage and aims to
complete any in progress requests. However, there may be unrecoverable situations, e.g., when the target
shard is lost or compromised beyond repair. Under those circumstances, some shardlets that were
supposed to be moved may continue to reside on the source shard. The service ensures that shardlet
mappings are only updated after the necessary data has been successfully copied to the target. Shardlets
are only deleted on the source once all their data has been copied to the target and the corresponding
mappings have been updated successfully. The deletion operation happens in the background while the
range is already online on the target shard. The split-merge service always ensures correctness of the
mappings stored in the shard map.

The split-merge user interface


The split-merge service package includes a worker role and a web role. The web role is used to submit split-
merge requests in an interactive way. The main components of the user interface are as follows:
Operation type
The operation type is a radio button that controls the kind of operation performed by the service for this
request. You can choose between the split, merge and move scenarios. You can also cancel a previously
submitted operation. You can use split, merge and move requests for range shard maps. List shard maps
only support move operations.
Shard map
The next section of request parameters covers information about the shard map and the database
hosting your shard map. In particular, you need to provide the name of the server and database hosting
the shardmap, credentials to connect to the shard map database, and finally the name of the shard map.
Currently, the operation only accepts a single set of credentials. These credentials need to have sufficient
permissions to perform changes to the shard map as well as to the user data on the shards.
Source range (split and merge)
A split and merge operation processes a range using its low and high key. To specify an operation with an
unbounded high key value, check the “High key is max” check box and leave the high key field empty. The
range key values that you specify do not need to precisely match a mapping and its boundaries in your
shard map. If you do not specify any range boundaries at all the service will infer the closest range for
you automatically. You can use the GetMappings.ps1 PowerShell script to retrieve the current mappings
in a given shard map.
Split source behavior (split)
For split operations, define the point to split the source range. You do this by providing the sharding key
where you want the split to occur. Use the radio button to specify whether you want the lower part of the
range (excluding the split key) to move, or whether you want the upper part to move (including the split
key).
Source shardlet (move)
Move operations are different from split or merge operations as they do not require a range to describe
the source. A source for move is simply identified by the sharding key value that you plan to move.
Target shard (split)
Once you have provided the information on the source of your split operation, you need to define where
you want the data to be copied to by providing the server and database name for the target.
Target range (merge)
Merge operations move shardlets to an existing shard. You identify the existing shard by providing the
range boundaries of the existing range that you want to merge with.
Batch size
The batch size controls the number of shardlets that will go offline at a time during the data movement.
This is an integer value where you can use smaller values when you are sensitive to long periods of
downtime for shardlets. Larger values will increase the time that a given shardlet is offline but may
improve performance.
Operation ID (cancel)
If you have an ongoing operation that is no longer needed, you can cancel the operation by providing its
operation ID in this field. You can retrieve the operation ID from the request status table (see Section 8.1)
or from the output in the web browser where you submitted the request.

Requirements and limitations


The current implementation of the split-merge service is subject to the following requirements and limitations:
The shards need to exist and be registered in the shard map before a split-merge operation on these shards
can be performed.
The service does not create tables or any other database objects automatically as part of its operations. This
means that the schema for all sharded tables and reference tables needs to exist on the target shard prior to
any split/merge/move operation. Sharded tables in particular are required to be empty in the range where
new shardlets are to be added by a split/merge/move operation. Otherwise, the operation will fail the initial
consistency check on the target shard. Also note that reference data is only copied if the reference table is
empty and that there are no consistency guarantees with regard to other concurrent write operations on the
reference tables. We recommend that no other write operations change the reference tables while split/merge
operations are running.
The service relies on row identity established by a unique index or key that includes the sharding key to
improve performance and reliability for large shardlets. This allows the service to move data at an even finer
granularity than just the sharding key value. This helps to reduce the maximum amount of log space and
locks that are required during the operation. Consider creating a unique index or a primary key including the
sharding key on a given table if you want to use that table with split/merge/move requests. For performance
reasons, the sharding key should be the leading column in the key or the index.
During the course of request processing, some shardlet data may be present both on the source and the
target shard. This is necessary to protect against failures during the shardlet movement. The integration of
split-merge with the shard map ensures that connections through the data-dependent routing APIs using the
OpenConnectionForKey method on the shard map do not see any inconsistent intermediate states.
However, when connecting to the source or the target shards without using the OpenConnectionForKey
method, inconsistent intermediate states might be visible when split/merge/move requests are going on.
These connections may show partial or duplicate results depending on the timing or the shard underlying
the connection. This limitation currently includes the connections made by Elastic Scale Multi-Shard-Queries.
The metadata database for the split-merge service must not be shared between different roles. For example,
a role of the split-merge service running in staging needs to point to a different metadata database than the
production role.

Billing
The split-merge service runs as a cloud service in your Microsoft Azure subscription. Therefore charges for
cloud services apply to your instance of the service. Unless you frequently perform split/merge/move
operations, we recommend you delete your split-merge cloud service. That saves costs for running or deployed
cloud service instances. You can re-deploy and start your readily runnable configuration whenever you need to
perform split or merge operations.

Monitoring
Status tables
The split-merge Service provides the RequestStatus table in the metadata store database for monitoring of
completed and ongoing requests. The table lists a row for each split-merge request that has been submitted to
this instance of the split-merge service. It gives the following information for each request:
Timestamp
The time and date when the request was started.
OperationId
A GUID that uniquely identifies the request. This ID can also be used to cancel the operation while it
is still ongoing.
Status
The current state of the request. For ongoing requests, it also lists the current phase in which the request
is.
CancelRequest
A flag that indicates whether the request has been canceled.
Progress
A percentage estimate of completion for the operation. A value of 50 indicates that the operation is
approximately 50% complete.
Details
An XML value that provides a more detailed progress report. The progress report is periodically updated
as sets of rows are copied from source to target. In case of failures or exceptions, this column also
includes more detailed information about the failure.
Azure Diagnostics
The split-merge service uses Azure Diagnostics based on Azure SDK 2.5 for monitoring and diagnostics. You
control the diagnostics configuration as explained here: Enabling Diagnostics in Azure Cloud Services and
Virtual Machines. The download package includes two diagnostics configurations - one for the web role and one
for the worker role. It includes the definitions to log Performance Counters, IIS logs, Windows Event Logs, and
split-merge application event logs.

Deploy Diagnostics
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported, but all future development is for the Az.Sql module.
For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the AzureRm modules are
substantially identical.

To enable monitoring and diagnostics using the diagnostic configuration for the web and worker roles provided
by the NuGet package, run the following commands using Azure PowerShell:

$storageName = "<azureStorageAccount>"
$key = "<azureStorageAccountKey>"
$storageContext = New-AzStorageContext -StorageAccountName $storageName -StorageAccountKey $key
$configPath = "<filePath>\SplitMergeWebContent.diagnostics.xml"
$serviceName = "<cloudServiceName>"

Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext `
    -DiagnosticsConfigurationPath $configPath -ServiceName $serviceName `
    -Slot Production -Role "SplitMergeWeb"

Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext `
    -DiagnosticsConfigurationPath $configPath -ServiceName $serviceName `
    -Slot Production -Role "SplitMergeWorker"

You can find more information on how to configure and deploy diagnostics settings here: Enabling Diagnostics
in Azure Cloud Services and Virtual Machines.

Retrieve diagnostics
You can easily access your diagnostics from the Visual Studio Server Explorer in the Azure part of the Server
Explorer tree. Open a Visual Studio instance, and in the menu bar click View, and Server Explorer. Click the Azure
icon to connect to your Azure subscription. Then navigate to Azure -> Storage -> <your storage account> ->
Tables -> WADLogsTable. For more information, see Server Explorer.
The WADLogsTable contains the detailed events from the split-merge service's application log. Note that the
default configuration of the downloaded package is geared towards a production deployment; therefore the
interval at which logs and counters are pulled from the service instances is large (5 minutes). For test and
development, lower the interval by adjusting the diagnostics settings of the web or the worker role to your
needs. Right-click the role in the Visual Studio Server Explorer and then adjust the Transfer Period in the dialog
for the Diagnostics configuration settings.

Performance
In general, better performance is to be expected from higher, more performant service tiers. Higher IO, CPU and
memory allocations for the higher service tiers benefit the bulk copy and delete operations that the split-merge
service uses. For that reason, increase the service tier just for those databases for a defined, limited period of
time.
The service also performs validation queries as part of its normal operations. These validation queries check for
unexpected presence of data in the target range and ensure that any split/merge/move operation starts from a
consistent state. These queries all work over sharding key ranges defined by the scope of the operation and the
batch size provided as part of the request definition. These queries perform best when an index is present that
has the sharding key as the leading column.
In addition, a uniqueness property with the sharding key as the leading column will allow the service to use an
optimized approach that limits resource consumption in terms of log space and memory. This uniqueness
property is required to move large data sizes (typically above 1GB).

How to upgrade
1. Follow the steps in Deploy a split-merge service.
2. Change your cloud service configuration file for your split-merge deployment to reflect the new
configuration parameters. A new required parameter is the information about the certificate used for
encryption. An easy way to do this is to compare the new configuration template file from the download
against your existing configuration. Make sure you add the settings for
“DataEncryptionPrimaryCertificateThumbprint” and “DataEncryptionPrimary” for both the web and the
worker role.
3. Before deploying the update to Azure, ensure that all currently running split-merge operations have finished.
You can easily do this by querying the RequestStatus and PendingWorkflows tables in the split-merge
metadata database for ongoing requests.
4. Update your existing cloud service deployment for split-merge in your Azure subscription with the new
package and your updated service configuration file.
You do not need to provision a new metadata database for split-merge to upgrade. The new version will
automatically upgrade your existing metadata database to the new version.

Best practices & troubleshooting


Define a test tenant and exercise your most important split/merge/move operations with the test tenant
across several shards. Ensure that all metadata is defined correctly in your shard map and that the operations
do not violate constraints or foreign keys.
Keep the test tenant data size above the maximum data size of your largest tenant to ensure you are not
encountering data size related issues. This helps you assess an upper bound on the time it takes to move a
single tenant around.
Make sure that your schema allows deletions. The split-merge service requires the ability to remove data
from the source shard once the data has been successfully copied to the target. For example, delete
triggers can prevent the service from deleting the data on the source and may cause operations to fail.
The sharding key should be the leading column in your primary key or unique index definition. That ensures
the best performance for the split or merge validation queries, and for the actual data movement and
deletion operations which always operate on sharding key ranges.
Collocate your split-merge service in the region and data center where your databases reside.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Elastic Database tools glossary
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database


The following terms are defined for the Elastic Database tools. The tools are used to manage shard maps, and
include the client library, the split-merge tool, elastic pools, and queries.
These terms are used in Adding a shard using Elastic Database tools and Using the RecoveryManager class to fix
shard map problems.

Database: A database in Azure SQL Database.


Data dependent routing: The functionality that enables an application to connect to a shard given a specific
sharding key. See Data dependent routing. Compare to Multi-shard query.
Global shard map: The map between sharding keys and their respective shards within a shard set. The global
shard map is stored in the shard map manager. Compare to local shard map.
List shard map: A shard map in which sharding keys are mapped individually. Compare to Range shard map.
Local shard map: Stored on a shard, the local shard map contains mappings for the shardlets that reside on
the shard.
Multi-shard query: The ability to issue a query against multiple shards; result sets are returned using UNION
ALL semantics (also known as "fan-out query"). Compare to data dependent routing.
Multi-tenant and single-tenant: A single-tenant database holds data for one tenant, while a multi-tenant
database holds data for multiple tenants; the original article illustrates these with diagrams of sharded
single-tenant and multi-tenant databases.

Range shard map: A shard map in which the shard distribution strategy is based on multiple ranges of
contiguous values.
Reference tables: Tables that are not sharded but are replicated across shards. For example, zip codes can be
stored in a reference table.
Shard: A database in Azure SQL Database that stores data from a sharded data set.
Shard elasticity: The ability to perform both horizontal scaling and vertical scaling.
Sharded tables: Tables that are sharded, i.e., whose data is distributed across shards based on their sharding
key values.
Sharding key: A column value that determines how data is distributed across shards. The value type can be
one of the following: int, bigint, varbinary, or uniqueidentifier.
Shard set: The collection of shards that are attributed to the same shard map in the shard map manager.
Shardlet: All of the data associated with a single value of a sharding key on a shard. A shardlet is the smallest
unit of data movement possible when redistributing sharded tables.
Shard map: The set of mappings between sharding keys and their respective shards.
Shard map manager: A management object and data store that contains the shard map(s), shard locations,
and mappings for one or more shard sets.
Verbs
Horizontal scaling: The act of scaling out (or in) a collection of shards by adding or removing shards to a
shard map.

Merge: The act of moving shardlets from two shards to one shard and updating the shard map accordingly.
Shardlet move: The act of moving a single shardlet to a different shard.
Shard: The act of horizontally partitioning identically structured data across multiple databases based on a
sharding key.
Split: The act of moving several shardlets from one shard to another (typically new) shard. A sharding key is
provided by the user as the split point.
Vertical scaling: The act of scaling up (or down) the compute size of an individual shard. For example,
changing a shard from Standard to Premium (which results in more computing resources).

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Resource management in Azure SQL Database
7/12/2022 • 20 minutes to read

APPLIES TO: Azure SQL Database


This article provides an overview of resource management in Azure SQL Database. It provides information on
what happens when resource limits are reached, and describes resource governance mechanisms that are used
to enforce these limits.
For specific resource limits per pricing tier (also known as service objective) for single databases, refer to either
DTU-based single database resource limits or vCore-based single database resource limits. For elastic pool
resource limits, refer to either DTU-based elastic pool resource limits or vCore-based elastic pool resource limits.

TIP
For Azure Synapse Analytics dedicated SQL pool limits, see capacity limits and memory and concurrency limits.

Logical server limits


Resource                                                          Limit
Databases per logical server                                      5000
Default number of logical servers per subscription in a region    20
Max number of logical servers per subscription in a region        250
DTU / eDTU quota per logical server                               54,000
vCore quota per logical server                                    540
Max elastic pools per logical server                              Limited by number of DTUs or vCores. For example, if each pool is 1000 DTUs, then a server can support 54 pools.

IMPORTANT
As the number of databases approaches the limit per logical server, the following can occur:
Increasing latency in running queries against the master database. This includes views of resource utilization statistics
such as sys.resource_stats .
Increasing latency in management operations and rendering portal viewpoints that involve enumerating databases in
the server.

NOTE
To obtain more DTU/eDTU quota, vCore quota, or more logical servers than the default number, submit a new support
request in the Azure portal. For more information, see Request quota increases for Azure SQL Database.
What happens when resource limits are reached
Compute CPU
When database compute CPU utilization becomes high, query latency increases, and queries can even time out.
Under these conditions, queries may be queued by the service and are provided resources for execution as
resources become free. When encountering high compute utilization, mitigation options include:
Increasing the compute size of the database or elastic pool to provide the database with more compute
resources. See Scale single database resources and Scale elastic pool resources.
Optimizing queries to reduce CPU resource utilization of each query. For more information, see Query
Tuning/Hinting.
Storage
When data space used reaches the maximum data size limit, either at the database level or at the elastic pool
level, inserts and updates that increase data size fail and clients receive an error message. SELECT and DELETE
statements remain unaffected.
In Premium and Business Critical service tiers, clients also receive an error message if combined storage
consumption by data, transaction log, and tempdb for a single database or an elastic pool exceeds maximum
local storage size. For more information, see Storage space governance.
When encountering high space utilization, mitigation options include:
Increase maximum data size of the database or elastic pool, or scale up to a service objective with a higher
maximum data size limit. See Scale single database resources and Scale elastic pool resources.
If the database is in an elastic pool, then alternatively the database can be moved outside of the pool, so that
its storage space isn't shared with other databases.
Shrink a database to reclaim unused space. In elastic pools, shrinking a database provides more storage for
other databases in the pool. For more information, see Manage file space in Azure SQL Database.
Check if high space utilization is due to a spike in the size of Persistent Version Store (PVS). PVS is a part of
each database, and is used to implement Accelerated Database Recovery. To determine current PVS size, see
PVS troubleshooting. A common reason for large PVS size is a transaction that is open for a long time
(hours), preventing cleanup of older row versions in PVS.
For databases and elastic pools in Premium and Business Critical service tiers that consume large amounts of
storage, you may receive an out-of-space error even though used space in the database or elastic pool is
below its maximum data size limit. This may happen if tempdb or transaction log files consume a large
amount of storage toward the maximum local storage limit. Fail over the database or elastic pool to reset
tempdb to its initial smaller size, or shrink transaction log to reduce local storage consumption.
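Before taking any of these steps, it can help to check how close the database is to its maximum data size. The following is a minimal sketch using documented catalog functions; run it in the user database.

-- Used and allocated data space versus the configured maximum data size, in MB
SELECT
    SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8 / 1024. AS data_space_used_mb,
    SUM(CAST(size AS bigint)) * 8 / 1024. AS data_space_allocated_mb,
    CAST(DATABASEPROPERTYEX(DB_NAME(), 'MaxSizeInBytes') AS bigint) / 1024. / 1024. AS max_data_size_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';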

Sessions, workers, and requests


Sessions, workers, and requests are defined as follows:
A session represents a process connected to the database engine.
A request is the logical representation of a query or batch. A request is issued by a client connected to a
session. Over time, multiple requests may be issued on the same session.
A worker thread, also known as a worker or thread, is a logical representation of an operating system thread.
A request may have many workers when executed with a parallel query execution plan, or a single worker
when executed with a serial (single threaded) execution plan. Workers are also required to support activities
outside of requests: for example, a worker is required to process a login request as a session connects.
For more information about these concepts, see the Thread and Task Architecture Guide.
The maximum numbers of sessions and workers are determined by the service tier and compute size. New
requests are rejected when session or worker limits are reached, and clients receive an error message. While the
number of connections can be controlled by the application, the number of concurrent workers is often harder
to estimate and control. This is especially true during peak load periods when database resource limits are
reached and workers pile up due to longer running queries, large blocking chains, or excessive query
parallelism.

NOTE
The initial offering of Azure SQL Database supported only single threaded queries. At that time, the number of requests
was always equivalent to the number of workers. Error message 10928 in Azure SQL Database contains the wording "The
request limit for the database is N and has been reached" for backwards compatibility purposes. The limit reached is
actually the number of workers. If your max degree of parallelism (MAXDOP) setting is equal to zero or is greater than
one, the number of workers may be much higher than the number of requests, and the limit may be reached much
sooner than when MAXDOP is equal to one. Learn more about error 10928 in Resource governance errors.

You can mitigate approaching or hitting worker or session limits by:


Increasing the service tier or compute size of the database or elastic pool. See Scale single database
resources and Scale elastic pool resources.
Optimizing queries to reduce resource utilization if the cause of increased workers is contention for compute
resources. For more information, see Query Tuning/Hinting.
Optimizing the query workload to reduce the number of occurrences and duration of query blocking. For
more information, see Understand and resolve Azure SQL blocking problems.
Reducing the MAXDOP setting when appropriate.
Find worker and session limits for Azure SQL Database by service tier and compute size:
Resource limits for single databases using the vCore purchasing model
Resource limits for elastic pools using the vCore purchasing model
Resource limits for single databases using the DTU purchasing model
Resources limits for elastic pools using the DTU purchasing model
Learn more about troubleshooting specific errors for session or worker limits in Resource governance errors.
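To track how close a workload is to these limits, the worker and session utilization percentages in sys.dm_db_resource_stats can be monitored. A minimal sketch:

-- One row is returned for each 15-second interval over approximately the last hour
SELECT end_time, avg_cpu_percent, max_worker_percent, max_session_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;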
Memory
Unlike other resources (CPU, workers, storage), reaching the memory limit does not negatively impact query
performance, and does not cause errors and failures. As described in detail in Memory Management
Architecture Guide, the database engine often uses all available memory, by design. Memory is used primarily
for caching data, to avoid slower storage access. Thus, higher memory utilization usually improves query
performance due to faster reads from memory, rather than slower reads from storage.
After database engine startup, as the workload starts reading data from storage, the database engine
aggressively caches data in memory. After this initial ramp-up period, it is common and expected for the
avg_memory_usage_percent and avg_instance_memory_percent columns in sys.dm_db_resource_stats to be close to or
equal to 100%, particularly for databases that are not idle and do not fully fit in memory.
Besides the data cache, memory is used in other components of the database engine. When there is demand for
memory and all available memory has been used by the data cache, the database engine will dynamically
reduce data cache size to make memory available to other components, and will dynamically grow data cache
when other components release memory.
In rare cases, a sufficiently demanding workload may cause an insufficient memory condition, leading to out-of-
memory errors. This can happen at any level of memory utilization between 0% and 100%. This is more likely to
occur on smaller compute sizes that have proportionally smaller memory limits, and/or with workloads using
more memory for query processing, such as in dense elastic pools.
When encountering out-of-memory errors, mitigation options include:
Review the details of the OOM condition in sys.dm_os_out_of_memory_events.
Increasing the service tier or compute size of the database or elastic pool. See Scale single database
resources and Scale elastic pool resources.
Optimizing queries and configuration to reduce memory utilization. Common solutions are described in the
following table.

Reduce the size of memory grants: For more information about memory grants, see the Understanding SQL Server memory grant blog post. A common solution for avoiding excessively large memory grants is keeping statistics up to date. This results in more accurate estimates of memory consumption by the query engine, avoiding unnecessarily large memory grants. By default, in databases using compatibility level 140 and above, the database engine may automatically adjust memory grant size using Batch mode memory grant feedback. Similarly, in databases using compatibility level 150 and above, the database engine also uses Row mode memory grant feedback, for more common row mode queries. This built-in functionality helps avoid out-of-memory errors due to unnecessarily large memory grants.

Reduce the size of query plan cache: The database engine caches query plans in memory, to avoid compiling a query plan for every query execution. To avoid query plan cache bloat caused by caching plans that are only used once, make sure to use parameterized queries, and consider enabling the OPTIMIZE_FOR_AD_HOC_WORKLOADS database-scoped configuration.

Reduce the size of lock memory: The database engine uses memory for locks. When possible, avoid large transactions that may acquire a large number of locks and cause high lock memory consumption.
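As a sketch of the plan cache recommendation above, the database-scoped configuration can be enabled with Transact-SQL in the user database; the setting applies only to the current database.

-- Cache a small compiled plan stub on first execution; a full plan is cached only on reuse
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZE_FOR_AD_HOC_WORKLOADS = ON;

-- Verify the current value
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'OPTIMIZE_FOR_AD_HOC_WORKLOADS';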

Resource consumption by user workloads and internal processes


Azure SQL Database requires compute resources to implement core service features such as high availability
and disaster recovery, database backup and restore, monitoring, Query Store, Automatic tuning, etc. The system
sets aside a certain limited portion of the overall resources for these internal processes using resource
governance mechanisms, making the remainder of resources available for user workloads. At times when
internal processes aren't using compute resources, the system makes them available to user workloads.
Total CPU and memory consumption by user workloads and internal processes is reported in the
sys.dm_db_resource_stats and sys.resource_stats views, in avg_instance_cpu_percent and
avg_instance_memory_percent columns. This data is also reported via the sqlserver_process_core_percent and
sqlserver_process_memory_percent Azure Monitor metrics, for single databases and elastic pools at the pool
level.
CPU and memory consumption by user workloads in each database is reported in the sys.dm_db_resource_stats
and sys.resource_stats views, in avg_cpu_percent and avg_memory_usage_percent columns. For elastic pools,
pool-level resource consumption is reported in the sys.elastic_pool_resource_stats view. User workload CPU
consumption is also reported via the cpu_percent Azure Monitor metric, for single databases and elastic pools
at the pool level.
A more detailed breakdown of recent resource consumption by user workloads and internal processes is
reported in the sys.dm_resource_governor_resource_pools_history_ex and
sys.dm_resource_governor_workload_groups_history_ex views. For details on resource pools and workload
groups referenced in these views, see Resource governance. These views report on resource utilization by user
workloads and specific internal processes in the associated resource pools and workload groups.
In the context of performance monitoring and troubleshooting, it's important to consider both user CPU
consumption ( avg_cpu_percent , cpu_percent ), and total CPU consumption by user workloads and internal
processes ( avg_instance_cpu_percent , sqlserver_process_core_percent ).
User CPU consumption is calculated as a percentage of the user workload limits in each service objective.
User CPU utilization at 100% indicates that the user workload has reached the limit of the service objective.
However, when total CPU consumption reaches the 70-100% range, it's possible to see user workload
throughput flattening out and query latency increasing, even if reported user CPU consumption remains
significantly below 100%. This is more likely to occur when using smaller service objectives with a moderate
allocation of compute resources, but relatively intense user workloads, such as in dense elastic pools. This can
also occur with smaller service objectives when internal processes temporarily require additional resources, for
example when creating a new replica of the database, or backing up the database.
When total CPU consumption is high, mitigation options are the same as noted in the Compute CPU section,
and include service objective increase and/or user workload optimization.
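A minimal sketch that compares the two measures side by side, using the documented columns of sys.dm_db_resource_stats:

-- User workload CPU/memory versus total (user plus internal) CPU/memory
SELECT end_time,
       avg_cpu_percent,              -- user workload, relative to the service objective limit
       avg_instance_cpu_percent,     -- user workload plus internal processes
       avg_memory_usage_percent,
       avg_instance_memory_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;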

Resource governance
To enforce resource limits, Azure SQL Database uses a resource governance implementation that is based on
SQL Server Resource Governor, modified and extended to run in the cloud. In SQL Database, multiple resource
pools and workload groups, with resource limits set at both pool and group levels, provide a balanced
Database-as-a-Service. User workload and internal workloads are classified into separate resource pools and
workload groups. User workload on the primary and readable secondary replicas, including geo-replicas, is
classified into the SloSharedPool1 resource pool and UserPrimaryGroup.DBId[N] workload groups, where [N]
stands for the database ID value. In addition, there are multiple resource pools and workload groups for various
internal workloads.
In addition to using Resource Governor to govern resources within the database engine, Azure SQL Database
also uses Windows Job Objects for process level resource governance, and Windows File Server Resource
Manager (FSRM) for storage quota management.
Azure SQL Database resource governance is hierarchical in nature. From top to bottom, limits are enforced at
the OS level and at the storage volume level using operating system resource governance mechanisms and
Resource Governor, then at the resource pool level using Resource Governor, and then at the workload group
level using Resource Governor. Resource governance limits in effect for the current database or elastic pool are
reported in the sys.dm_user_db_resource_governance view.
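For example, the limits in effect for the current database can be listed with a simple query. This sketch returns all columns rather than assuming specific column names:

-- Resource governance limits (log rate, worker, IOPS caps, and others) for the current database
SELECT *
FROM sys.dm_user_db_resource_governance
WHERE database_id = DB_ID();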
Data I/O governance
Data I/O governance is a process in Azure SQL Database used to limit both read and write physical I/O against
data files of a database. IOPS limits are set for each service level to minimize the "noisy neighbor" effect, to
provide resource allocation fairness in a multi-tenant service, and to stay within the capabilities of the
underlying hardware and storage.
For single databases, workload group limits are applied to all storage I/O against the database. For elastic pools,
workload group limits apply to each database in the pool. Additionally, the resource pool limit applies to the
cumulative I/O of the elastic pool. In tempdb, I/O is subject to workload group limits, with the exception of the
Basic, Standard, and General Purpose service tiers, where higher tempdb I/O limits apply. In general,
resource pool limits may not be achievable by the workload against a database (either single or pooled),
because workload group limits are lower than resource pool limits and limit IOPS/throughput sooner. However,
pool limits may be reached by the combined workload against multiple databases in the same pool.
For example, if a query generates 1000 IOPS without any I/O resource governance, but the workload group
maximum IOPS limit is set to 900 IOPS, the query won't be able to generate more than 900 IOPS. However, if
the resource pool maximum IOPS limit is set to 1500 IOPS, and the total I/O from all workload groups
associated with the resource pool exceeds 1500 IOPS, then the I/O of the same query may be reduced below the
workgroup limit of 900 IOPS.
The IOPS and throughput max values returned by the sys.dm_user_db_resource_governance view act as
limits/caps, not as guarantees. Further, resource governance doesn't guarantee any specific storage latency. The
best achievable latency, IOPS, and throughput for a given user workload depend not only on I/O resource
governance limits, but also on the mix of I/O sizes used, and on the capabilities of the underlying storage. SQL
Database uses I/Os that vary in size between 512 KB and 4 MB. For the purposes of enforcing IOPS limits, every
I/O is accounted regardless of its size, with the exception of databases with data files in Azure Storage. In that
case, IOs larger than 256 KB are accounted as multiple 256-KB I/Os, to align with Azure Storage I/O accounting.
For Basic, Standard, and General Purpose databases, which use data files in Azure Storage, the
primary_group_max_io value may not be achievable if a database doesn't have enough data files to cumulatively
provide this number of IOPS, or if data isn't distributed evenly across files, or if the performance tier of
underlying blobs limits IOPS/throughput below the resource governance limits. Similarly, with small log IOs
generated by frequent transaction commits, the primary_max_log_rate value may not be achievable by a
workload due to the IOPS limit on the underlying Azure Storage blob. For databases using Azure Premium
Storage, Azure SQL Database uses sufficiently large storage blobs to obtain needed IOPS/throughput, regardless
of database size. For larger databases, multiple data files are created to increase total IOPS/throughput capacity.
Resource utilization values such as avg_data_io_percent and avg_log_write_percent , reported in the
sys.dm_db_resource_stats, sys.resource_stats, and sys.elastic_pool_resource_stats views, are calculated as
percentages of maximum resource governance limits. Therefore, when factors other than resource governance
limit IOPS/throughput, it's possible to see IOPS/throughput flattening out and latencies increasing as the
workload increases, even though reported resource utilization remains below 100%.
To determine read and write IOPS, throughput, and latency per database file, use the
sys.dm_io_virtual_file_stats() function. This function surfaces all I/O against the database, including background
I/O that isn't accounted towards avg_data_io_percent , but uses IOPS and throughput of the underlying storage,
and can impact observed storage latency. The function reports additional latency that may be introduced by I/O
resource governance for reads and writes, in the io_stall_queued_read_ms and io_stall_queued_write_ms
columns respectively.
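A minimal sketch of such a per-file query, including the columns that expose I/O governance delays:

SELECT file_id,
       num_of_reads, num_of_writes,
       io_stall_read_ms, io_stall_write_ms,
       io_stall_queued_read_ms,   -- additional read latency introduced by I/O resource governance
       io_stall_queued_write_ms   -- additional write latency introduced by I/O resource governance
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);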
Transaction log rate governance
Transaction log rate governance is a process in Azure SQL Database used to limit high ingestion rates for
workloads such as bulk insert, SELECT INTO, and index builds. These limits are tracked and enforced at the
subsecond level to the rate of log record generation, limiting throughput regardless of how many IOs may be
issued against data files. Transaction log generation rates currently scale linearly up to a point that is hardware-
dependent and service tier-dependent.
Log rates are set such that they can be achieved and sustained in a variety of scenarios, while the overall system
can maintain its functionality with minimized impact to the user load. Log rate governance ensures that
transaction log backups stay within published recoverability SLAs. This governance also prevents an excessive
backlog on secondary replicas, that could otherwise lead to longer than expected downtime during failovers.
The actual physical IOs to transaction log files are not governed or limited. As log records are generated, each
operation is evaluated and assessed for whether it should be delayed in order to maintain a maximum desired
log rate (MB per second). The delays aren't added when the log records are flushed to storage; rather, log rate
governance is applied during log record generation itself.
The actual log generation rates imposed at run time may also be influenced by feedback mechanisms,
temporarily reducing the allowable log rates so the system can stabilize. Log file space management, avoiding
running into out of log space conditions and data replication mechanisms can temporarily decrease the overall
system limits.
Log rate governor traffic shaping is surfaced via the following wait types (exposed in the sys.dm_exec_requests
and sys.dm_os_wait_stats views):

LOG_RATE_GOVERNOR: Database limiting
POOL_LOG_RATE_GOVERNOR: Pool limiting
INSTANCE_LOG_RATE_GOVERNOR: Instance level limiting
HADR_THROTTLE_LOG_RATE_SEND_RECV_QUEUE_SIZE: Feedback control, availability group physical replication in Premium/Business Critical not keeping up
HADR_THROTTLE_LOG_RATE_LOG_SIZE: Feedback control, limiting rates to avoid an out of log space condition
HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO: Geo-replication feedback control, limiting log rate to avoid high data latency and unavailability of geo-secondaries

When encountering a log rate limit that is hampering desired scalability, consider the following options (a query to check for log rate throttling waits appears after this list):
Scale up to a higher service level in order to get the maximum log rate of a service tier, or switch to a
different service tier. The Hyperscale service tier provides 100 MB/s log rate regardless of chosen service
level.
If data being loaded is transient, such as staging data in an ETL process, it can be loaded into tempdb (which
is minimally logged).
For analytic scenarios, load into a clustered columnstore table, or a table with indexes that use data
compression. This reduces the required log rate. This technique does increase CPU utilization and is only
applicable to data sets that benefit from clustered columnstore indexes or data compression.
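Before applying any of these options, it can be useful to confirm that log rate governance is the actual bottleneck. A minimal sketch that looks for the wait types listed above on currently executing requests:

SELECT session_id, status, command, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE wait_type IN (
    'LOG_RATE_GOVERNOR',
    'POOL_LOG_RATE_GOVERNOR',
    'INSTANCE_LOG_RATE_GOVERNOR',
    'HADR_THROTTLE_LOG_RATE_SEND_RECV_QUEUE_SIZE',
    'HADR_THROTTLE_LOG_RATE_LOG_SIZE',
    'HADR_THROTTLE_LOG_RATE_MISMATCHED_SLO');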
Storage space governance
In Premium and Business Critical service tiers, customer data including data files, transaction log files, and
tempdb files is stored on the local SSD storage of the machine hosting the database or elastic pool. Local SSD
storage provides high IOPS and throughput, and low I/O latency. In addition to customer data, local storage is
used for the operating system, management software, monitoring data and logs, and other files necessary for
system operation.
The size of local storage is finite and depends on hardware capabilities, which determine the maximum local
storage limit, or local storage set aside for customer data. This limit is set to maximize customer data storage,
while ensuring safe and reliable system operation. To find the maximum local storage value for each service
objective, see resource limits documentation for single databases and elastic pools.
You can also find this value, and the amount of local storage currently used by a given database or elastic pool,
using the following query:

SELECT server_name, database_name, slo_name,
       user_data_directory_space_quota_mb,
       user_data_directory_space_usage_mb
FROM sys.dm_user_db_resource_governance
WHERE database_id = DB_ID();

server_name: Logical server name
database_name: Database name
slo_name: Service objective name, including hardware generation
user_data_directory_space_quota_mb: Maximum local storage, in MB
user_data_directory_space_usage_mb: Current local storage consumption by data files, transaction log files, and tempdb files, in MB. Updated every five minutes.

This query should be executed in the user database, not in the master database. For elastic pools, the query can
be executed in any database in the pool. Reported values apply to the entire pool.

IMPORTANT
In Premium and Business Critical service tiers, if the workload attempts to increase combined local storage consumption
by data files, transaction log files, and tempdb files over the maximum local storage limit, an out-of-space error will
occur.

Local SSD storage is also used by databases in service tiers other than Premium and Business Critical for the
tempdb database and Hyperscale RBPEX cache. As databases are created, deleted, and increase or decrease in
size, total local storage consumption on a machine fluctuates over time. If the system detects that available local
storage on a machine is low, and a database or an elastic pool is at risk of running out of space, it will move the
database or elastic pool to a different machine with sufficient local storage available.
This move occurs in an online fashion, similarly to a database scaling operation, and has a similar impact,
including a short (seconds) failover at the end of the operation. This failover terminates open connections and
rolls back transactions, potentially impacting applications using the database at that time.
Because all data is copied to local storage volumes on different machines, moving larger databases in Premium
and Business Critical service tiers may require a substantial amount of time. During that time, if local space
consumption by a database or an elastic pool, or by the tempdb database grows rapidly, the risk of running out
of space increases. The system initiates database movement in a balanced fashion to minimize out-of-space
errors while avoiding unnecessary failovers.

Tempdb sizes
Size limits for tempdb in Azure SQL Database depend on the purchasing and deployment model.
To learn more, review tempdb size limits for:
vCore purchasing model: single databases, pooled databases
DTU purchasing model: single databases, pooled databases.

Next steps
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about DTUs and eDTUs, see DTUs and eDTUs.
For information about tempdb size limits, see single vCore databases, pooled vCore databases, single DTU
databases, and pooled DTU databases.
Resource limits for single databases using the vCore
purchasing model
7/12/2022 • 31 minutes to read

APPLIES TO: Azure SQL Database


This article provides the detailed resource limits for single databases in Azure SQL Database using the vCore
purchasing model.
For DTU purchasing model limits for single databases on a server, see Overview of resource limits on a
server.
For DTU purchasing model resource limits for Azure SQL Database, see DTU resource limits single databases
and DTU resource limits elastic pools.
For elastic pool vCore resource limits, see vCore resource limits - elastic pools.
For more information regarding the different purchasing models, see Purchasing models and service tiers.

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

Each read-only replica of a database has its own resources, such as vCores, memory, data IOPS, tempdb,
workers, and sessions. Each read-only replica is subject to the resource limits detailed later in this article.
You can set the service tier, compute size (service objective), and storage amount for a single database using any of the following (a Transact-SQL sketch appears after this list):
Transact-SQL via ALTER DATABASE
Azure portal
PowerShell
Azure CLI
REST API
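For example, with Transact-SQL, a minimal sketch (assuming a database named mydb; substitute your own name and target values) that sets the tier, compute size, and maximum data size, and then checks the values in effect:

ALTER DATABASE [mydb] MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_8', MAXSIZE = 1024 GB);

-- The change is asynchronous; these properties reflect the new values once it completes
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;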

IMPORTANT
For scaling guidance and considerations, see Scale a single database.

General purpose - serverless compute - Gen5


The serverless compute tier is currently available on Gen5 hardware only.
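A minimal Transact-SQL sketch (assuming a database named mydb) for moving an existing database to one of the serverless service objectives listed below; the auto-pause delay itself is configured through the Azure portal, PowerShell, the Azure CLI, or the REST API rather than Transact-SQL:

ALTER DATABASE [mydb] MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_2');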
Gen5 hardware (part 1)
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _S_GEN 5_1 GP _S_GEN 5_2 GP _S_GEN 5_4 GP _S_GEN 5_6 GP _S_GEN 5_8

Hardware Gen5 Gen5 Gen5 Gen5 Gen5

Min-max vCores 0.5-1 0.5-2 0.5-4 0.75-6 1.0-8


C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _S_GEN 5_1 GP _S_GEN 5_2 GP _S_GEN 5_4 GP _S_GEN 5_6 GP _S_GEN 5_8

Min-max 2.02-3 2.05-6 2.10-12 2.25-18 3.00-24


memory (GB)

Min-max auto- 60-10080 60-10080 60-10080 60-10080 60-10080


pause delay
(minutes)

Columnstore Yes 1 Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 512 1024 1024 1024 2048


(GB)

Max log size (GB) 2 154 307 307 307 461

Tempdb max 32 64 128 192 256


data size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read)

Max data IOPS 3 320 640 1280 1920 2560

Max log rate 4.5 9 18 27 36


(MBps)

Max concurrent 75 150 300 450 600


workers

Max concurrent 30,000 30,000 30,000 30,000 30,000


sessions

Number of 1 1 1 1 1
replicas

Multi-AZ Yes Yes Yes Yes Yes

Read Scale-out N/A N/A N/A N/A N/A

Included backup 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 Service objectives with smaller max vCore configurations may have insufficient memory for creating and
using columnstore indexes. If encountering performance problems with columnstore, increase the max vCore
configuration to increase the max memory available.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 2)
C O M P UT E SIZ E
( SERVIC E O B JEC T IVE) GP _S_GEN 5_10 GP _S_GEN 5_12 GP _S_GEN 5_14 GP _S_GEN 5_16

Hardware Gen5 Gen5 Gen5 Gen5

Min-max vCores 1.25-10 1.50-12 1.75-14 2.00-16

Min-max memory 3.75-30 4.50-36 5.25-42 6.00-48


(GB)

Min-max auto-pause 60-10080 60-10080 60-10080 60-10080


delay (minutes)

Columnstore support Yes Yes Yes Yes

In-memory OLTP N/A N/A N/A N/A


storage (GB)

Max data size (GB) 2048 3072 3072 3072

Max log size (GB) 1 461 461 461 922

Tempdb max data 320 384 448 512


size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)


(approximate) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read)

Max data IOPS 2 3200 3840 4480 5120

Max log rate (MBps) 45 50 50 50

Max concurrent 750 900 1050 1200


workers

Max concurrent 30,000 30,000 30,000 30,000


sessions

Number of replicas 1 1 1 1

Multi-AZ Yes Yes Yes Yes

Read Scale-out N/A N/A N/A N/A

Included backup 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 3)
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _S_GEN 5_18 GP _S_GEN 5_20 GP _S_GEN 5_24 GP _S_GEN 5_32 GP _S_GEN 5_40

Hardware Gen5 Gen5 Gen5 Gen5 Gen5

Min-max vCores 2.25-18 2.5-20 3-24 4-32 5-40

Min-max 6.75-54 7.5-60 9-72 12-96 15-120


memory (GB)

Min-max auto- 60-10080 60-10080 60-10080 60-10080 60-10080


pause delay
(minutes)

Columnstore Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 3072 3072 4096 4096 4096


(GB)

Max log size (GB) 1 922 922 1024 1024 1024

Tempdb max 576 640 768 1024 1280


data size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read)

Max data IOPS 2 5760 6400 7680 10240 12800

Max log rate 50 50 50 50 50


(MBps)

Max concurrent 1350 1500 1800 2400 3000


workers

Max concurrent 30,000 30,000 30,000 30,000 30,000


sessions

Number of 1 1 1 1 1
replicas
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _S_GEN 5_18 GP _S_GEN 5_20 GP _S_GEN 5_24 GP _S_GEN 5_32 GP _S_GEN 5_40

Multi-AZ Yes Yes Yes Yes Yes

Read Scale-out N/A N/A N/A N/A N/A

Included backup 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

Hyperscale - provisioned compute - Gen5


Gen5 hardware (part 1)
C O M P UT E
SIZ E
( SERVIC E H S_GEN 5_1 H S_GEN 5_1 H S_GEN 5_1
O B JEC T IVE) H S_GEN 5_2 H S_GEN 5_4 H S_GEN 5_6 H S_GEN 5_8 0 2 4

Hardware Gen5 Gen5 Gen5 Gen5 Gen5 Gen5 Gen5

vCores 2 4 6 8 10 12 14

Memory 10.4 20.8 31.1 41.5 51.9 62.3 72.7


(GB)

RBPEX Size 3X 3X 3X 3X 3X 3X 3X
Memory Memory Memory Memory Memory Memory Memory

Columnstor Yes Yes Yes Yes Yes Yes Yes


e support

In-memory N/A N/A N/A N/A N/A N/A N/A


OLTP
storage
(GB)

Max data 100 100 100 100 100 100 100


size (TB)

Max log Unlimited Unlimited Unlimited Unlimited Unlimited Unlimited Unlimited


size (TB)

Tempdb 64 128 192 256 320 384 448


max data
size (GB)

Storage Note 1 Note 1 Note 1 Note 1 Note 1 Note 1 Note 1


type
C O M P UT E
SIZ E
( SERVIC E H S_GEN 5_1 H S_GEN 5_1 H S_GEN 5_1
O B JEC T IVE) H S_GEN 5_2 H S_GEN 5_4 H S_GEN 5_6 H S_GEN 5_8 0 2 4

Max local 8000 16000 24000 32000 40000 48000 56000


SSD IOPS 1

Max log 100 100 100 100 100 100 100


rate (MBps)

IO latency Note 2 Note 2 Note 2 Note 2 Note 2 Note 2 Note 2


(approxima
te)

Max 200 400 600 800 1000 1200 1400


concurrent
workers

Max 30,000 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Secondary 0-4 0-4 0-4 0-4 0-4 0-4 0-4


replicas

Multi-AZ Available in Available in Available in Available in Available in Available in Available in


preview preview preview preview preview preview preview

Read Scale- Yes Yes Yes Yes Yes Yes Yes


out

Backup 7 days 7 days 7 days 7 days 7 days 7 days 7 days


storage
retention

1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
Gen5 hardware (part 2)
C O M P UT E
SIZ E
( SERVIC E H S_GEN 5_1 H S_GEN 5_1 H S_GEN 5_2 H S_GEN 5_2 H S_GEN 5_3 H S_GEN 5_4 H S_GEN 5_8
O B JEC T IVE) 6 8 0 4 2 0 0

Hardware Gen5 Gen5 Gen5 Gen5 Gen5 Gen5 Gen5

vCores 16 18 20 24 32 40 80

Memory 83 93.4 103.8 124.6 166.1 207.6 415.2


(GB)

RBPEX Size 3X 3X 3X 3X 3X 3X 3X
Memory Memory Memory Memory Memory Memory Memory

Columnstor Yes Yes Yes Yes Yes Yes Yes


e support
C O M P UT E
SIZ E
( SERVIC E H S_GEN 5_1 H S_GEN 5_1 H S_GEN 5_2 H S_GEN 5_2 H S_GEN 5_3 H S_GEN 5_4 H S_GEN 5_8
O B JEC T IVE) 6 8 0 4 2 0 0

In-memory N/A N/A N/A N/A N/A N/A N/A


OLTP
storage
(GB)

Max data 100 100 100 100 100 100 100


size (TB)

Max log Unlimited Unlimited Unlimited Unlimited Unlimited Unlimited Unlimited


size (TB)

Tempdb 512 576 640 768 1024 1280 2560


max data
size (GB)

Storage Note 1 Note 1 Note 1 Note 1 Note 1 Note 1 Note 1


type

Max local 64000 72000 80000 96000 128000 160000 204800


SSD IOPS 1

Max log 100 100 100 100 100 100 100


rate (MBps)

IO latency Note 2 Note 2 Note 2 Note 2 Note 2 Note 2 Note 2


(approxima
te)

Max 1600 1800 2000 2400 3200 4000 8000


concurrent
workers

Max 30,000 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Secondary 0-4 0-4 0-4 0-4 0-4 0-4 0-4


replicas

Multi-AZ Available in Available in Available in Available in Available in Available in Available in


preview preview preview preview preview preview preview

Read Scale- Yes Yes Yes Yes Yes Yes Yes


out

Backup 7 days 7 days 7 days 7 days 7 days 7 days 7 days


storage
retention

1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
Notes
Note 1 : Hyperscale is a multi-tiered architecture with separate compute and storage components: Hyperscale
Service Tier Architecture
Note 2 : Latency is 1-2 ms for data on local compute replica SSD, which caches most used data pages. Higher
latency for data retrieved from page servers.
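To move an existing database to one of the Hyperscale service objectives shown above, a minimal Transact-SQL sketch follows (assuming a database named mydb). Review the Hyperscale migration guidance first; moving a database back out of Hyperscale is restricted and requires a separate reverse migration process.

ALTER DATABASE [mydb] MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');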

Hyperscale - provisioned compute - DC-series


C O M P UT E SIZ E
( SERVIC E O B JEC T IVE) H S_DC _2 H S_DC _4 H S_DC _6 H S_DC _8

Hardware DC-series DC-series DC-series DC-series

vCores 2 4 6 8

Memory (GB) 9 18 27 36

RBPEX Size 3X Memory 3X Memory 3X Memory 3X Memory

Columnstore support Yes Yes Yes Yes

In-memory OLTP N/A N/A N/A N/A


storage (GB)

Max data size (TB) 100 100 100 100

Max log size (TB) Unlimited Unlimited Unlimited Unlimited

Tempdb max data 64 128 192 256


size (GB)

Storage type Note 1 Note 1 Note 1 Note 1

Max local SSD IOPS 1 14000 28000 42000 44800

Max log rate (MBps) 100 100 100 100

IO latency Note 2 Note 2 Note 2 Note 2


(approximate)

Max concurrent 160 320 480 640


workers

Max concurrent 30,000 30,000 30,000 30,000


sessions

Secondary replicas 0-4 0-4 0-4 0-4

Multi-AZ N/A N/A N/A N/A

Read Scale-out Yes Yes Yes Yes

Backup storage 7 days 7 days 7 days 7 days


retention

1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
Notes
Note 1 : Hyperscale is a multi-tiered architecture with separate compute and storage components: Hyperscale
Service Tier Architecture
Note 2 : Latency is 1-2 ms for data on local compute replica SSD, which caches most used data pages. Higher
latency for data retrieved from page servers.

General purpose - provisioned compute - Gen5


Gen5 hardware (part 1)
C O M P UT E
SIZ E
( SERVIC E GP _GEN 5_1 GP _GEN 5_1 GP _GEN 5_1
O B JEC T IVE) GP _GEN 5_2 GP _GEN 5_4 GP _GEN 5_6 GP _GEN 5_8 0 2 4

Hardware Gen5 Gen5 Gen5 Gen5 Gen5 Gen5 Gen5

vCores 2 4 6 8 10 12 14

Memory 10.4 20.8 31.1 41.5 51.9 62.3 72.7


(GB)

Columnstor Yes Yes Yes Yes Yes Yes Yes


e support

In-memory N/A N/A N/A N/A N/A N/A N/A


OLTP
storage
(GB)

Max data 1024 1024 1536 2048 2048 3072 3072


size (GB)

Max log 307 307 461 461 461 922 922


size (GB) 1

Tempdb 64 128 192 256 320 384 384


max data
size (GB)

Storage Remote Remote Remote Remote Remote Remote Remote


type SSD SSD SSD SSD SSD SSD SSD

IO latency 5-7 ms 5-7 ms 5-7 ms 5-7 ms 5-7 ms 5-7 ms 5-7 ms


(approxima (write) (write) (write) (write) (write) (write) (write)
te) 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms
(read) (read) (read) (read) (read) (read) (read)

Max data 640 1280 1920 2560 3200 3840 4480


IOPS 2

Max log 9 18 27 36 45 50 50
rate (MBps)
C O M P UT E
SIZ E
( SERVIC E GP _GEN 5_1 GP _GEN 5_1 GP _GEN 5_1
O B JEC T IVE) GP _GEN 5_2 GP _GEN 5_4 GP _GEN 5_6 GP _GEN 5_8 0 2 4

Max 200 400 600 800 1000 1200 1400


concurrent
workers

Max 30,000 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Number of 1 1 1 1 1 1 1
replicas

Multi-AZ Yes Yes Yes Yes Yes Yes Yes

Read Scale- N/A N/A N/A N/A N/A N/A N/A


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 2)
C O M P UT E
SIZ E
( SERVIC E GP _GEN 5_1 GP _GEN 5_1 GP _GEN 5_2 GP _GEN 5_2 GP _GEN 5_3 GP _GEN 5_4 GP _GEN 5_8
O B JEC T IVE) 6 8 0 4 2 0 0

Hardware Gen5 Gen5 Gen5 Gen5 Gen5 Gen5 Gen5

vCores 16 18 20 24 32 40 80

Memory 83 93.4 103.8 124.6 166.1 207.6 415.2


(GB)

Columnstor Yes Yes Yes Yes Yes Yes Yes


e support

In-memory N/A N/A N/A N/A N/A N/A N/A


OLTP
storage
(GB)

Max data 3072 3072 3072 4096 4096 4096 4096


size (GB)

Max log 922 922 922 1024 1024 1024 1024


size (GB) 1
C O M P UT E
SIZ E
( SERVIC E GP _GEN 5_1 GP _GEN 5_1 GP _GEN 5_2 GP _GEN 5_2 GP _GEN 5_3 GP _GEN 5_4 GP _GEN 5_8
O B JEC T IVE) 6 8 0 4 2 0 0

Tempdb 512 576 640 768 1024 1280 2560


max data
size (GB)

Storage Remote Remote Remote Remote Remote Remote Remote


type SSD SSD SSD SSD SSD SSD SSD

IO latency 5-7 ms 5-7 ms 5-7 ms 5-7 ms 5-7 ms 5-7 ms 5-7 ms


(approxima (write) (write) (write) (write) (write) (write) (write)
te) 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms
(read) (read) (read) (read) (read) (read) (read)

Max data 5120 5760 6400 7680 10240 12800 12800


IOPS 2

Max log 50 50 50 50 50 50 50
rate (MBps)

Max 1600 1800 2000 2400 3200 4000 8000


concurrent
workers

Max 30,000 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Number of 1 1 1 1 1 1 1
replicas

Multi-AZ Yes Yes Yes Yes Yes Yes Yes

Read Scale- N/A N/A N/A N/A N/A N/A N/A


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

General purpose - provisioned compute - Fsv2-series


Fsv2-series Hardware (part 1)
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _F SV2_8 GP _F SV2_10 GP _F SV2_12 GP _F SV2_14 GP _F SV2_16

Hardware Fsv2-series Fsv2-series Fsv2-series Fsv2-series Fsv2-series


C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _F SV2_8 GP _F SV2_10 GP _F SV2_12 GP _F SV2_14 GP _F SV2_16

vCores 8 10 12 14 16

Memory (GB) 15.1 18.9 22.7 26.5 30.2

Columnstore Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 1024 1024 1024 1024 1536


(GB)

Max log size (GB) 1 336 336 336 336 512

Tempdb max 37 46 56 65 74
data size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read)

Max data IOPS 2 2560 3200 3840 4480 5120

Max log rate 36 45 50 50 50


(MBps)

Max concurrent 400 500 600 700 800


workers

Max concurrent 800 1000 1200 1400 1600


logins

Max concurrent 30,000 30,000 30,000 30,000 30,000


sessions

Number of 1 1 1 1 1
replicas

Multi-AZ N/A N/A N/A N/A N/A

Read Scale-out N/A N/A N/A N/A N/A

Included backup 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Fsv2-series hardware (part 2)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _F SV2_18 GP _F SV2_20 GP _F SV2_24 GP _F SV2_32 GP _F SV2_36 GP _F SV2_72

Hardware Fsv2-series Fsv2-series Fsv2-series Fsv2-series Fsv2-series Fsv2-series

vCores 18 20 24 32 36 72

Memory (GB) 34.0 37.8 45.4 60.5 68.0 136.0

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 1536 1536 1536 3072 3072 4096


(GB)

Max log size 512 512 512 1024 1024 1024


(GB) 1

Tempdb max 83 93 111 148 167 333


data size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms
(read) (read) (read) (read) (read) (read)

Max data 5760 6400 7680 10240 11520 12800


IOPS 2

Max log rate 50 50 50 50 50 50


(MBps)

Max 900 1000 1200 1600 1800 3600


concurrent
workers

Max 1800 2000 2400 3200 3600 7200


concurrent
logins

Max 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Number of 1 1 1 1 1 1
replicas
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _F SV2_18 GP _F SV2_20 GP _F SV2_24 GP _F SV2_32 GP _F SV2_36 GP _F SV2_72

Multi-AZ N/A N/A N/A N/A N/A N/A

Read Scale- N/A N/A N/A N/A N/A N/A


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

General purpose - provisioned compute - DC-series


C O M P UT E SIZ E
( SERVIC E O B JEC T IVE) GP _DC _2 GP _DC _4 GP _DC _6 GP _DC _8

Hardware DC-series DC-series DC-series DC-series

vCores 2 4 6 8

Memory (GB) 9 18 27 36

Columnstore support Yes Yes Yes Yes

In-memory OLTP N/A N/A N/A N/A


storage (GB)

Max data size (GB) 1024 1536 3072 3072

Max log size (GB) 1 307 461 922 922

Tempdb max data 64 128 192 256


size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)


(approximate) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read)

Max data IOPS 2 640 1280 1920 2560

Max log rate (MBps) 9 18 27 36

Max concurrent 160 320 480 640


workers

Max concurrent 30,000 30,000 30,000 30,000


sessions
C O M P UT E SIZ E
( SERVIC E O B JEC T IVE) GP _DC _2 GP _DC _4 GP _DC _6 GP _DC _8

Number of replicas 1 1 1 1

Multi-AZ N/A N/A N/A N/A

Read Scale-out N/A N/A N/A N/A

Included backup 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

Business critical - provisioned compute - Gen5


Gen5 hardware (part 1)
C O M P UT E
SIZ E
( SERVIC E B C _GEN 5_1 B C _GEN 5_1 B C _GEN 5_1
O B JEC T IVE) B C _GEN 5_2 B C _GEN 5_4 B C _GEN 5_6 B C _GEN 5_8 0 2 4

Hardware Gen5 Gen5 Gen5 Gen5 Gen5 Gen5 Gen5

vCores 2 4 6 8 10 12 14

Memory 10.4 20.8 31.1 41.5 51.9 62.3 72.7


(GB)

Columnstor Yes Yes Yes Yes Yes Yes Yes


e support

In-memory 1.57 3.14 4.71 6.28 8.65 11.02 13.39


OLTP
storage
(GB)

Max data 1024 1024 1536 2048 2048 3072 3072


size (GB)

Max log 307 307 461 461 461 922 922


size (GB) 1

Tempdb 64 128 192 256 320 384 448


max data
size (GB)

Max local 4829 4829 4829 4829 4829 4829 4829


storage size
(GB)

Storage Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
type
C O M P UT E
SIZ E
( SERVIC E B C _GEN 5_1 B C _GEN 5_1 B C _GEN 5_1
O B JEC T IVE) B C _GEN 5_2 B C _GEN 5_4 B C _GEN 5_6 B C _GEN 5_8 0 2 4

IO latency 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms


(approxima (write) (write) (write) (write) (write) (write) (write)
te) 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms
(read) (read) (read) (read) (read) (read) (read)

Max data 8000 16,000 24,000 32,000 40,000 48,000 56,000


IOPS 2

Max log 24 48 72 96 96 96 96
rate (MBps)

Max 200 400 600 800 1000 1200 1400


concurrent
workers

Max 200 400 600 800 1000 1200 1400


concurrent
logins

Max 30,000 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Number of 4 4 4 4 4 4 4
replicas

Multi-AZ Yes Yes Yes Yes Yes Yes Yes

Read Scale- Yes Yes Yes Yes Yes Yes Yes


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen5 hardware (part 2)
C O M P UT E
SIZ E
( SERVIC E B C _GEN 5_1 B C _GEN 5_1 B C _GEN 5_2 B C _GEN 5_2 B C _GEN 5_3 B C _GEN 5_4 B C _GEN 5_8
O B JEC T IVE) 6 8 0 4 2 0 0

Hardware Gen5 Gen5 Gen5 Gen5 Gen5 Gen5 Gen5

vCores 16 18 20 24 32 40 80

Memory 83 93.4 103.8 124.6 166.1 207.6 415.2


(GB)
C O M P UT E
SIZ E
( SERVIC E B C _GEN 5_1 B C _GEN 5_1 B C _GEN 5_2 B C _GEN 5_2 B C _GEN 5_3 B C _GEN 5_4 B C _GEN 5_8
O B JEC T IVE) 6 8 0 4 2 0 0

Columnstor Yes Yes Yes Yes Yes Yes Yes


e support

In-memory 15.77 18.14 20.51 25.25 37.94 52.23 131.64


OLTP
storage
(GB)

Max data 3072 3072 3072 4096 4096 4096 4096


size (GB)

Max log 922 922 922 1024 1024 1024 1024


size (GB) 1

Tempdb 512 576 640 768 1024 1280 2560


max data
size (GB)

Max local 4829 4829 4829 4829 4829 4829 4829


storage size
(GB)

Storage Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD
type

IO latency 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms


(approxima (write) (write) (write) (write) (write) (write) (write)
te) 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms 1-2 ms
(read) (read) (read) (read) (read) (read) (read)

Max data 64,000 72,000 80,000 96,000 128,000 160,000 204,800


IOPS 2

Max log 96 96 96 96 96 96 96
rate (MBps)

Max 1600 1800 2000 2400 3200 4000 8000


concurrent
workers

Max 1600 1800 2000 2400 3200 4000 8000


concurrent
logins

Max 30,000 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Number of 4 4 4 4 4 4 4
replicas

Multi-AZ Yes Yes Yes Yes Yes Yes Yes


C O M P UT E
SIZ E
( SERVIC E B C _GEN 5_1 B C _GEN 5_1 B C _GEN 5_2 B C _GEN 5_2 B C _GEN 5_3 B C _GEN 5_4 B C _GEN 5_8
O B JEC T IVE) 6 8 0 4 2 0 0

Read Scale- Yes Yes Yes Yes Yes Yes Yes


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

Business Critical - provisioned compute - M-series


For important information about M-series hardware availability, see Azure offer types supported by M-series.
M -series hardware (part 1)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) B C _M _8 B C _M _10 B C _M _12 B C _M _14 B C _M _16 B C _M _18

Hardware M-series M-series M-series M-series M-series M-series

vCores 8 10 12 14 16 18

Memory (GB) 235.4 294.3 353.2 412.0 470.9 529.7

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory 64 80 96 112 128 150


OLTP storage
(GB)

Max data size 512 640 768 896 1024 1152


(GB)

Max log size 171 213 256 299 341 384


(GB) 1

Tempdb max 256 320 384 448 512 576


data size (GB)

Max local 13836 13836 13836 13836 13836 13836


storage size
(GB)

Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD

IO latency 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write)
(approximate) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) B C _M _8 B C _M _10 B C _M _12 B C _M _14 B C _M _16 B C _M _18

Max data 12,499 15,624 18,748 21,873 24,998 28,123


IOPS 2

Max log rate 48 60 72 84 96 108


(MBps)

Max 800 1,000 1,200 1,400 1,600 1,800


concurrent
workers

Max 800 1,000 1,200 1,400 1,600 1,800


concurrent
logins

Max 30000 30000 30000 30000 30000 30000


concurrent
sessions

Number of 4 4 4 4 4 4
replicas

Multi-AZ No No No No No No

Read Scale- Yes Yes Yes Yes Yes Yes


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
M -series hardware (part 2)
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) B C _M _20 B C _M _24 B C _M _32 B C _M _64 B C _M _128

Hardware M-series M-series M-series M-series M-series

vCores 20 24 32 64 128

Memory (GB) 588.6 706.3 941.8 1883.5 3767.0

Columnstore Yes Yes Yes Yes Yes


support

In-memory 172 216 304 704 1768


OLTP storage
(GB)
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) B C _M _20 B C _M _24 B C _M _32 B C _M _64 B C _M _128

Max data size 1280 1536 2048 4096 4096


(GB)

Max log size (GB) 1 427 512 683 1024 1024

Tempdb max 640 768 1024 2048 4096


data size (GB)

Max local 13836 13836 13836 13836 13836


storage size (GB)

Storage type Local SSD Local SSD Local SSD Local SSD Local SSD

IO latency 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write)
(approximate) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read)

Max data IOPS 2 31,248 37,497 49,996 99,993 160,000

Max log rate 120 144 192 264 264


(MBps)

Max concurrent 2,000 2,400 3,200 6,400 12,800


workers

Max concurrent 2,000 2,400 3,200 6,400 12,800


logins

Max concurrent 30000 30000 30000 30000 30000


sessions

Number of 4 4 4 4 4
replicas

Multi-AZ No No No No No

Read Scale-out Yes Yes Yes Yes Yes

Included backup 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

Business Critical - provisioned compute - DC-series


C O M P UT E SIZ E
( SERVIC E O B JEC T IVE) B C _DC _2 B C _DC _4 B C _DC _6 B C _DC _8

Hardware DC-series DC-series DC-series DC-series

vCores 2 4 6 8

Memory (GB) 9 18 27 36

Columnstore support Yes Yes Yes Yes

In-memory OLTP 1.7 3.7 5.9 8.2


storage (GB)

Max data size (GB) 768 768 768 768

Max log size (GB) 1 230 230 230 230

Tempdb max data 64 128 192 256


size (GB)

Max local storage 1406 1406 1406 1406


size (GB)

Storage type Local SSD Local SSD Local SSD Local SSD

IO latency 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write)


(approximate) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read)

Max data IOPS 2 14000 28000 42000 44800

Max log rate (MBps) 24 48 72 96

Max concurrent 200 400 600 800


workers

Max concurrent 200 400 600 800


logins

Max concurrent 30,000 30,000 30,000 30,000


sessions

Number of replicas 4 4 4 4

Multi-AZ No No No No

Read Scale-out No No No No

Included backup 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Previously available hardware
This section includes details on previously available hardware.

IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.

Hyperscale - provisioned compute - Gen4


Gen4 hardware (part 1)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) H S_GEN 4_1 H S_GEN 4_2 H S_GEN 4_3 H S_GEN 4_4 H S_GEN 4_5 H S_GEN 4_6

hardware Gen4 Gen4 Gen4 Gen4 Gen4 Gen4

vCores 1 2 3 4 5 6

Memory (GB) 7 14 21 28 35 42

RBPEX Size 3X Memory 3X Memory 3X Memory 3X Memory 3X Memory 3X Memory

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 100 100 100 100 100 100


(TB)

Max log size Unlimited Unlimited Unlimited Unlimited Unlimited Unlimited


(TB)

Tempdb max 32 64 96 128 160 192


data size (GB)

Storage type Note 1 Note 1 Note 1 Note 1 Note 1 Note 1

Max local SSD 4000 8000 12000 16000 20000 24000


IOPS 1
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) H S_GEN 4_1 H S_GEN 4_2 H S_GEN 4_3 H S_GEN 4_4 H S_GEN 4_5 H S_GEN 4_6

Max log rate 100 100 100 100 100 100


(MBps)

IO latency Note 2 Note 2 Note 2 Note 2 Note 2 Note 2


(approximate)

Max 200 400 600 800 1000 1200


concurrent
workers

Max 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Secondary 0-4 0-4 0-4 0-4 0-4 0-4


replicas

Multi-AZ N/A N/A N/A N/A N/A N/A

Read Scale- Yes Yes Yes Yes Yes Yes


out

Backup 7 days 7 days 7 days 7 days 7 days 7 days


storage
retention

1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.
Gen4 hardware (part 2)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) H S_GEN 4_7 H S_GEN 4_8 H S_GEN 4_9 H S_GEN 4_10 H S_GEN 4_16 H S_GEN 4_24

Hardware Gen4 Gen4 Gen4 Gen4 Gen4 Gen4

vCores 7 8 9 10 16 24

Memory (GB) 49 56 63 70 112 159.5

RBPEX Size 3X Memory 3X Memory 3X Memory 3X Memory 3X Memory 3X Memory

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 100 100 100 100 100 100


(TB)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) H S_GEN 4_7 H S_GEN 4_8 H S_GEN 4_9 H S_GEN 4_10 H S_GEN 4_16 H S_GEN 4_24

Max log size Unlimited Unlimited Unlimited Unlimited Unlimited Unlimited


(TB)

Tempdb max 224 256 288 320 512 768


data size (GB)

Storage type Note 1 Note 1 Note 1 Note 1 Note 1 Note 1

Max local SSD 28000 32000 36000 40000 64000 76800


IOPS 1

Max log rate 100 100 100 100 100 100


(MBps)

IO latency Note 2 Note 2 Note 2 Note 2 Note 2 Note 2


(approximate)

Max 1400 1600 1800 2000 3200 4800


concurrent
workers

Max 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Secondary 0-4 0-4 0-4 0-4 0-4 0-4


replicas

Multi-AZ N/A N/A N/A N/A N/A N/A

Read Scale- Yes Yes Yes Yes Yes Yes


out

Backup 7 days 7 days 7 days 7 days 7 days 7 days


storage
retention

1 Besides local SSD IO, workloads will use remote page server IO. Effective IOPS will depend on workload. For
details, see Data IO Governance, and Data IO in resource utilization statistics.

General purpose - provisioned compute - Gen4


IMPORTANT
New Gen4 databases are no longer supported in the Australia East or Brazil South regions.

Gen4 hardware (part 1)

| Compute size (service objective) | GP_Gen4_1 | GP_Gen4_2 | GP_Gen4_3 | GP_Gen4_4 | GP_Gen4_5 | GP_Gen4_6 |
| --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 |
| vCores | 1 | 2 | 3 | 4 | 5 | 6 |
| Memory (GB) | 7 | 14 | 21 | 28 | 35 | 42 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | N/A | N/A | N/A | N/A | N/A | N/A |
| Max data size (GB) | 1024 | 1024 | 1536 | 1536 | 1536 | 3072 |
| Max log size (GB) 1 | 307 | 307 | 461 | 461 | 461 | 922 |
| Tempdb max data size (GB) | 32 | 64 | 96 | 128 | 160 | 192 |
| Storage type | Remote SSD | Remote SSD | Remote SSD | Remote SSD | Remote SSD | Remote SSD |
| IO latency (approximate) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) |
| Max data IOPS 2 | 320 | 640 | 960 | 1280 | 1600 | 1920 |
| Max log rate (MBps) | 4.5 | 9 | 13.5 | 18 | 22.5 | 27 |
| Max concurrent workers | 200 | 400 | 600 | 800 | 1000 | 1200 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Number of replicas | 1 | 1 | 1 | 1 | 1 | 1 |
| Multi-AZ | N/A | N/A | N/A | N/A | N/A | N/A |
| Read Scale-out | N/A | N/A | N/A | N/A | N/A | N/A |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen4 hardware (part 2)

| Compute size (service objective) | GP_Gen4_7 | GP_Gen4_8 | GP_Gen4_9 | GP_Gen4_10 | GP_Gen4_16 | GP_Gen4_24 |
| --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 |
| vCores | 7 | 8 | 9 | 10 | 16 | 24 |
| Memory (GB) | 49 | 56 | 63 | 70 | 112 | 159.5 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | N/A | N/A | N/A | N/A | N/A | N/A |
| Max data size (GB) | 3072 | 3072 | 3072 | 3072 | 4096 | 4096 |
| Max log size (GB) 1 | 922 | 922 | 922 | 922 | 1229 | 1229 |
| Tempdb max data size (GB) | 224 | 256 | 288 | 320 | 512 | 768 |
| Storage type | Remote SSD | Remote SSD | Remote SSD | Remote SSD | Remote SSD | Remote SSD |
| IO latency (approximate) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) |
| Max data IOPS 2 | 2240 | 2560 | 2880 | 3200 | 5120 | 7680 |
| Max log rate (MBps) | 31.5 | 36 | 40.5 | 45 | 50 | 50 |
| Max concurrent workers | 1400 | 1600 | 1800 | 2000 | 3200 | 4800 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Number of replicas | 1 | 1 | 1 | 1 | 1 | 1 |
| Multi-AZ | N/A | N/A | N/A | N/A | N/A | N/A |
| Read Scale-out | N/A | N/A | N/A | N/A | N/A | N/A |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

Business critical - provisioned compute - Gen4


IMPORTANT
New Gen4 databases are no longer supported in the Australia East or Brazil South regions.

Gen4 hardware (part 1)

| Compute size (service objective) | BC_Gen4_1 | BC_Gen4_2 | BC_Gen4_3 | BC_Gen4_4 | BC_Gen4_5 | BC_Gen4_6 |
| --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 |
| vCores | 1 | 2 | 3 | 4 | 5 | 6 |
| Memory (GB) | 7 | 14 | 21 | 28 | 35 | 42 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | 1 | 2 | 3 | 4 | 5 | 6 |
| Storage type | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD |
| Max data size (GB) | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |
| Max log size (GB) 1 | 307 | 307 | 307 | 307 | 307 | 307 |
| Tempdb max data size (GB) | 32 | 64 | 96 | 128 | 160 | 192 |
| Max local storage size (GB) | 1356 | 1356 | 1356 | 1356 | 1356 | 1356 |
| IO latency (approximate) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) |
| Max data IOPS 2 | 4,000 | 8,000 | 12,000 | 16,000 | 20,000 | 24,000 |
| Max log rate (MBps) | 8 | 16 | 24 | 32 | 40 | 48 |
| Max concurrent workers | 200 | 400 | 600 | 800 | 1000 | 1200 |
| Max concurrent logins | 200 | 400 | 600 | 800 | 1000 | 1200 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Number of replicas | 4 | 4 | 4 | 4 | 4 | 4 |
| Multi-AZ | Yes | Yes | Yes | Yes | Yes | Yes |
| Read Scale-out | Yes | Yes | Yes | Yes | Yes | Yes |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
Gen4 hardware (part 2)

| Compute size (service objective) | BC_Gen4_7 | BC_Gen4_8 | BC_Gen4_9 | BC_Gen4_10 | BC_Gen4_16 | BC_Gen4_24 |
| --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 | Gen4 |
| vCores | 7 | 8 | 9 | 10 | 16 | 24 |
| Memory (GB) | 49 | 56 | 63 | 70 | 112 | 159.5 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | 7 | 8 | 9.5 | 11 | 20 | 36 |
| Storage type | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD |
| Max data size (GB) | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |
| Max log size (GB) 1 | 307 | 307 | 307 | 307 | 307 | 307 |
| Tempdb max data size (GB) | 224 | 256 | 288 | 320 | 512 | 768 |
| Max local storage size (GB) | 1356 | 1356 | 1356 | 1356 | 1356 | 1356 |
| IO latency (approximate) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) |
| Max data IOPS 2 | 28,000 | 32,000 | 36,000 | 40,000 | 64,000 | 76,800 |
| Max log rate (MBps) | 56 | 64 | 64 | 64 | 64 | 64 |
| Max concurrent workers | 1400 | 1600 | 1800 | 2000 | 3200 | 4800 |
| Max concurrent logins | 1400 | 1600 | 1800 | 2000 | 3200 | 4800 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Number of replicas | 4 | 4 | 4 | 4 | 4 | 4 |
| Multi-AZ | Yes | Yes | Yes | Yes | Yes | Yes |
| Read Scale-out | Yes | Yes | Yes | Yes | Yes | Yes |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 For documented max data size values. Reducing max data size reduces max log size proportionally.
2 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.

Next steps
For DTU resource limits for a single database, see resource limits for single databases using the DTU
purchasing model
For vCore resource limits for elastic pools, see resource limits for elastic pools using the vCore purchasing
model
For DTU resource limits for elastic pools, see resource limits for elastic pools using the DTU purchasing
model
For resource limits for SQL Managed Instance, see SQL Managed Instance resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a server.
Resource limits for single databases using the DTU
purchasing model - Azure SQL Database
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database


This article provides the detailed resource limits for Azure SQL Database single databases using the DTU
purchasing model.
For DTU purchasing model limits for single databases on a server, see Overview of resource limits on a
server.
For DTU purchasing model resource limits for Azure SQL Database, see DTU resource limits single databases
and DTU resource limits elastic pools.
For vCore resource limits, see vCore resource limits - Azure SQL Database and vCore resource limits - elastic
pools.
For more information regarding the different purchasing models, see Purchasing models and service tiers.
Each read-only replica has its own resources such as DTUs, workers, and sessions. Each read-only replica is
subject to the resource limits detailed later in this article.
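When connected through a read-only replica (for example, by adding ApplicationIntent=ReadOnly to the connection string), you can confirm which kind of replica the session landed on before interpreting its resource usage. A minimal T-SQL sketch:

```sql
-- Returns READ_ONLY when the session is connected to a read-only replica,
-- and READ_WRITE when it is connected to the primary replica.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS replica_updateability;
```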

Single database: Storage sizes and compute sizes


The following tables show the resources available for a single database at each service tier and compute size.
You can set the service tier, compute size, and storage amount for a single database using:
Transact-SQL via ALTER DATABASE
Azure portal
PowerShell
Azure CLI
REST API
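For example, a minimal T-SQL sketch that checks the current service objective and then scales a database (the database name MyDb and the S3 target are illustrative; run the statements while connected to the master database, and note that the scaling operation completes asynchronously):

```sql
-- Check the current edition and service objective of the database.
SELECT DATABASEPROPERTYEX('MyDb', 'Edition')          AS edition,
       DATABASEPROPERTYEX('MyDb', 'ServiceObjective') AS service_objective;

-- Scale the database to Standard S3 with a 250 GB max data size.
ALTER DATABASE MyDb
    MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB);
```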

IMPORTANT
For scaling guidance and considerations, see Scale a single database

Basic service tier


| Compute size | Basic |
| --- | --- |
| Max DTUs | 5 |
| Included storage (GB) | 2 |
| Max storage (GB) | 2 |
| Max in-memory OLTP storage (GB) | N/A |
| Max concurrent workers | 30 |
| Max concurrent sessions | 300 |

IMPORTANT
The Basic service tier provides less than one vCore (CPU). For CPU-intensive workloads, a service tier of S3 or greater is
recommended.
Regarding data storage, the Basic service tier is placed on Standard Page Blobs. Standard Page Blobs use hard disk drive
(HDD)-based storage media and are best suited for development, testing, and other infrequently accessed workloads that
are less sensitive to performance variability.

Standard service tier


| Compute size | S0 | S1 | S2 | S3 |
| --- | --- | --- | --- | --- |
| Max DTUs | 10 | 20 | 50 | 100 |
| Included storage (GB) 1 | 250 | 250 | 250 | 250 |
| Max storage (GB) | 250 | 250 | 250 | 1024 |
| Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A |
| Max concurrent workers | 60 | 90 | 120 | 200 |
| Max concurrent sessions | 600 | 900 | 1200 | 2400 |

1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.

IMPORTANT
The Standard S0, S1 and S2 tiers provide less than one vCore (CPU). For CPU-intensive workloads, a service tier of S3 or
greater is recommended.
Regarding data storage, the Standard S0 and S1 service tiers are placed on Standard Page Blobs. Standard Page Blobs use
hard disk drive (HDD)-based storage media and are best suited for development, testing, and other infrequently accessed
workloads that are less sensitive to performance variability.

Standard service tier (continued)


| Compute size | S4 | S6 | S7 | S9 | S12 |
| --- | --- | --- | --- | --- | --- |
| Max DTUs | 200 | 400 | 800 | 1600 | 3000 |
| Included storage (GB) 1 | 250 | 250 | 250 | 250 | 250 |
| Max storage (GB) | 1024 | 1024 | 1024 | 1024 | 1024 |
| Max in-memory OLTP storage (GB) | N/A | N/A | N/A | N/A | N/A |
| Max concurrent workers | 400 | 800 | 1600 | 3200 | 6000 |
| Max concurrent sessions | 4800 | 9600 | 19200 | 30000 | 30000 |

1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
Premium service tier
| Compute size | P1 | P2 | P4 | P6 | P11 | P15 |
| --- | --- | --- | --- | --- | --- | --- |
| Max DTUs | 125 | 250 | 500 | 1000 | 1750 | 4000 |
| Included storage (GB) 1 | 500 | 500 | 500 | 500 | 4096 2 | 4096 2 |
| Max storage (GB) | 1024 | 1024 | 1024 | 1024 | 4096 2 | 4096 2 |
| Max in-memory OLTP storage (GB) | 1 | 2 | 4 | 8 | 14 | 32 |
| Max concurrent workers | 200 | 400 | 800 | 1600 | 2800 | 6400 |
| Max concurrent sessions | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 |

1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 From 1024 GB up to 4096 GB in increments of 256 GB.
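As a hedged T-SQL sketch of growing Premium storage past the default (the database name MyPremiumDb is illustrative), the max size can be raised together with the service objective, keeping to the 256 GB increments noted above:

```sql
-- Grow a Premium database from the default 1024 GB max size to 4096 GB.
-- Between 1024 GB and 4096 GB, MAXSIZE must be specified in 256 GB increments.
ALTER DATABASE MyPremiumDb
    MODIFY (SERVICE_OBJECTIVE = 'P11', MAXSIZE = 4096 GB);
```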

IMPORTANT
More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North,
Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For
more information, see P11-P15 current limitations.

NOTE
For additional information on storage limits in the Premium service tier, see Storage space governance.

Tempdb sizes
The following table lists tempdb sizes for single databases in Azure SQL Database:

| Service-level objective | Maximum tempdb data file size (GB) | Number of tempdb data files | Maximum tempdb data size (GB) |
| --- | --- | --- | --- |
| Basic | 13.9 | 1 | 13.9 |
| S0 | 13.9 | 1 | 13.9 |
| S1 | 13.9 | 1 | 13.9 |
| S2 | 13.9 | 1 | 13.9 |
| S3 | 32 | 1 | 32 |
| S4 | 32 | 2 | 64 |
| S6 | 32 | 3 | 96 |
| S7 | 32 | 6 | 192 |
| S9 | 32 | 12 | 384 |
| S12 | 32 | 12 | 384 |
| P1 | 13.9 | 12 | 166.7 |
| P2 | 13.9 | 12 | 166.7 |
| P4 | 13.9 | 12 | 166.7 |
| P6 | 13.9 | 12 | 166.7 |
| P11 | 13.9 | 12 | 166.7 |
| P15 | 13.9 | 12 | 166.7 |
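To see how many tempdb data files your own database currently has, and how large they are, the tempdb catalog view can be queried from the database. A small sketch (sizes are reported in 8 KB pages, so the arithmetic below converts to GB; assumes your login can read the tempdb catalog, as server admins can):

```sql
-- List tempdb files with their current sizes in GB (size is in 8 KB pages).
SELECT name,
       type_desc,
       size * 8.0 / 1024 / 1024 AS current_size_gb
FROM tempdb.sys.database_files;
```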

Next steps
For vCore resource limits for a single database, see resource limits for single databases using the vCore
purchasing model
For vCore resource limits for elastic pools, see resource limits for elastic pools using the vCore purchasing
model
For DTU resource limits for elastic pools, see resource limits for elastic pools using the DTU purchasing
model
For resource limits for managed instances in Azure SQL Managed Instance, see SQL Managed Instance
resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a logical SQL server.
Resource limits for elastic pools using the vCore
purchasing model
7/12/2022 • 36 minutes to read

APPLIES TO: Azure SQL Database


This article provides the detailed resource limits for Azure SQL Database elastic pools and pooled databases
using the vCore purchasing model.
For DTU purchasing model limits for single databases on a server, see Overview of resource limits on a
server.
For DTU purchasing model resource limits for Azure SQL Database, see DTU resource limits single databases
and DTU resource limits elastic pools.
For vCore resource limits, see vCore resource limits - Azure SQL Database and vCore resource limits - elastic
pools.
For more information regarding the different purchasing models, see Purchasing models and service tiers.

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

Each read-only replica of an elastic pool has its own resources, such as vCores, memory, data IOPS, tempdb ,
workers, and sessions. Each read-only replica is subject to elastic pool resource limits detailed later in this article.
You can set the service tier, compute size (service objective), and storage amount using:
Transact-SQL via ALTER DATABASE
Azure portal
PowerShell
Azure CLI
REST API
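As a T-SQL sketch (the database name MyDb, the pool name MyPool, and the standalone GP_Gen5_2 target are illustrative), a database can be moved into an elastic pool on the same logical server, or moved back out to a standalone compute size:

```sql
-- Move an existing database into an elastic pool on the same logical server.
ALTER DATABASE MyDb
    MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = MyPool));

-- Later, move the database back out of the pool to a standalone service objective.
ALTER DATABASE MyDb
    MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_2');
```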

IMPORTANT
For scaling guidance and considerations, see Scale an elastic pool.

If all vCores of an elastic pool are busy, then each database in the pool receives an equal amount of compute
resources to process queries. Azure SQL Database provides resource sharing fairness between databases by
ensuring equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of
resource otherwise guaranteed to each database when the vCore min per database is set to a non-zero value.

General purpose - provisioned compute - Gen5


General purpose service tier: Generation 5 compute platform (part 1)

| Compute size (service objective) | GP_Gen5_2 | GP_Gen5_4 | GP_Gen5_6 | GP_Gen5_8 | GP_Gen5_10 | GP_Gen5_12 | GP_Gen5_14 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 |
| vCores | 2 | 4 | 6 | 8 | 10 | 12 | 14 |
| Memory (GB) | 10.4 | 20.8 | 31.1 | 41.5 | 51.9 | 62.3 | 72.7 |
| Max number DBs per pool 1 | 100 | 200 | 500 | 500 | 500 | 500 | 500 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Max data size (GB) | 512 | 756 | 1536 | 2048 | 2048 | 2048 | 2048 |
| Max log size (GB) 2 | 154 | 227 | 461 | 461 | 461 | 614 | 614 |
| TempDB max data size (GB) | 64 | 128 | 192 | 256 | 320 | 384 | 448 |
| Storage type | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage |
| IO latency (approximate) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) |
| Max data IOPS per pool 3 | 800 | 1600 | 2400 | 3200 | 4000 | 4800 | 5600 |
| Max log rate per pool (MBps) | 12 | 24 | 36 | 48 | 60 | 62.5 | 62.5 |
| Max concurrent workers per pool 4 | 210 | 420 | 630 | 840 | 1050 | 1260 | 1470 |
| Max concurrent logins per pool 4 | 210 | 420 | 630 | 840 | 1050 | 1260 | 1470 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Min/max elastic pool vCore choices per database | 0, 0.25, 0.5, 1, 2 | 0, 0.25, 0.5, 1...4 | 0, 0.25, 0.5, 1...6 | 0, 0.25, 0.5, 1...8 | 0, 0.25, 0.5, 1...10 | 0, 0.25, 0.5, 1...12 | 0, 0.25, 0.5, 1...14 |
| Number of replicas | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Multi-AZ | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Read Scale-out | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 See Resource management in dense elastic pools for additional considerations.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If the max vCore per database is set to 0.5, then the max concurrent workers value is 50, because on
Gen5 there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1
vCore or less, the number of max concurrent workers is rescaled similarly.
General purpose service tier: Generation 5 compute platform (part 2)

| Compute size (service objective) | GP_Gen5_16 | GP_Gen5_18 | GP_Gen5_20 | GP_Gen5_24 | GP_Gen5_32 | GP_Gen5_40 | GP_Gen5_80 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 |
| vCores | 16 | 18 | 20 | 24 | 32 | 40 | 80 |
| Memory (GB) | 83 | 93.4 | 103.8 | 124.6 | 166.1 | 207.6 | 415.2 |
| Max number DBs per pool 1 | 500 | 500 | 500 | 500 | 500 | 500 | 500 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Max data size (GB) | 2048 | 3072 | 3072 | 3072 | 4096 | 4096 | 4096 |
| Max log size (GB) 2 | 614 | 922 | 922 | 922 | 1229 | 1229 | 1229 |
| TempDB max data size (GB) | 512 | 576 | 640 | 768 | 1024 | 1280 | 2560 |
| Storage type | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage |
| IO latency (approximate) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) |
| Max data IOPS per pool 3 | 6,400 | 7,200 | 8,000 | 9,600 | 12,800 | 16,000 | 16,000 |
| Max log rate per pool (MBps) | 62.5 | 62.5 | 62.5 | 62.5 | 62.5 | 62.5 | 62.5 |
| Max concurrent workers per pool 4 | 1680 | 1890 | 2100 | 2520 | 3360 | 4200 | 8400 |
| Max concurrent logins per pool 4 | 1680 | 1890 | 2100 | 2520 | 3360 | 4200 | 8400 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Min/max elastic pool vCore choices per database | 0, 0.25, 0.5, 1...16 | 0, 0.25, 0.5, 1...18 | 0, 0.25, 0.5, 1...20 | 0, 0.25, 0.5, 1...20, 24 | 0, 0.25, 0.5, 1...20, 24, 32 | 0, 0.25, 0.5, 1...16, 24, 32, 40 | 0, 0.25, 0.5, 1...16, 24, 32, 40, 80 |
| Number of replicas | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Multi-AZ | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Read Scale-out | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 See Resource management in dense elastic pools for additional considerations.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If the max vCore per database is set to 0.5, then the max concurrent workers value is 50, because on
Gen5 there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1
vCore or less, the number of max concurrent workers is rescaled similarly.

General purpose - provisioned compute - Fsv2-series


Fsv2-series Hardware (part 1)
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _F SV2_8 GP _F SV2_10 GP _F SV2_12 GP _F SV2_14 GP _F SV2_16

Hardware Fsv2-series Fsv2-series Fsv2-series Fsv2-series Fsv2-series

vCores 8 10 12 14 16

Memory (GB) 15.1 18.9 22.7 26.5 30.2

Max number 500 500 500 500 500


DBs per pool 1
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) GP _F SV2_8 GP _F SV2_10 GP _F SV2_12 GP _F SV2_14 GP _F SV2_16

Columnstore Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 1024 1024 1024 1024 1536


(GB)

Max log size (GB) 336 336 336 336 512


2

TempDB max 37 46 56 65 74
data size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read) 5-10 ms (read)

Max data IOPS 2560 3200 3840 4480 5120


per pool 3

Max log rate per 48 60 62.5 62.5 62.5


pool (MBps)

Max concurrent 400 500 600 700 800


workers per pool
4

Max concurrent 800 1000 1200 1400 1600


logins per pool 4

Max concurrent 30,000 30,000 30,000 30,000 30,000


sessions

Min/max elastic 0-8 0-10 0-12 0-14 0-16


pool vCore
choices per
database

Number of 1 1 1 1 1
replicas

Multi-AZ N/A N/A N/A N/A N/A

Read Scale-out N/A N/A N/A N/A N/A

Included backup 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.
Fsv2-series Hardware (part 2)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _F SV2_18 GP _F SV2_20 GP _F SV2_24 GP _F SV2_32 GP _F SV2_36 GP _F SV2_72

Hardware Fsv2-series Fsv2-series Fsv2-series Fsv2-series Fsv2-series Fsv2-series

vCores 18 20 24 32 36 72

Memory (GB) 34.0 37.8 45.4 60.5 68.0 136.0

Max number 500 500 500 500 500


DBs per pool
1

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 1536 1536 1536 3072 3072 4096


(GB)

Max log size 512 512 512 1024 1024 1024


(GB) 2

TempDB max 83 93 111 148 167 333


data size (GB)

Storage type Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD Remote SSD

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms
(read) (read) (read) (read) (read) (read)

Max data 5760 6400 7680 10240 11520 12800


IOPS per pool
3

Max log rate 62.5 62.5 62.5 62.5 62.5 62.5


per pool
(MBps)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _F SV2_18 GP _F SV2_20 GP _F SV2_24 GP _F SV2_32 GP _F SV2_36 GP _F SV2_72

Max 900 1000 1200 1600 1800 3600


concurrent
workers per
pool 4

Max 1800 2000 2400 3200 3600 7200


concurrent
logins per
pool 4

Max 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Min/max 0-18 0-20 0-24 0-32 0-36 0-72


elastic pool
vCore choices
per database

Number of 1 1 1 1 1 1
replicas

Multi-AZ N/A N/A N/A N/A N/A N/A

Read Scale- N/A N/A N/A N/A N/A N/A


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.

General purpose - provisioned compute - DC-series


| Compute size (service objective) | GP_DC_2 | GP_DC_4 | GP_DC_6 | GP_DC_8 |
| --- | --- | --- | --- | --- |
| Hardware | DC | DC | DC | DC |
| vCores | 2 | 4 | 6 | 8 |
| Memory (GB) | 9 | 18 | 27 | 36 |
| Max number DBs per pool 1 | 100 | 400 | 400 | 400 |
| Columnstore support | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | N/A | N/A | N/A | N/A |
| Max data size (GB) | 756 | 1536 | 2048 | 2048 |
| Max log size (GB) 2 | 227 | 461 | 614 | 614 |
| TempDB max data size (GB) | 64 | 128 | 192 | 256 |
| Storage type | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage | Premium (Remote) Storage |
| IO latency (approximate) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) | 5-7 ms (write), 5-10 ms (read) |
| Max data IOPS per pool 3 | 800 | 1600 | 2400 | 3200 |
| Max log rate per pool (MBps) | 12 | 24 | 36 | 48 |
| Max concurrent workers per pool 4 | 168 | 336 | 504 | 672 |
| Max concurrent logins per pool 4 | 168 | 336 | 504 | 672 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 |
| Min/max elastic pool vCore choices per database | 2 | 2...4 | 2...6 | 2...8 |
| Number of replicas | 1 | 1 | 1 | 1 |
| Multi-AZ | N/A | N/A | N/A | N/A |
| Read Scale-out | N/A | N/A | N/A | N/A |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 See Resource management in dense elastic pools for additional considerations.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If the max vCore per database is set to 0.5, then the max concurrent workers value is 50, because on
Gen5 there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1
vCore or less, the number of max concurrent workers is rescaled similarly.

Business Critical - provisioned compute - Gen5


Business Critical service tier: Generation 5 compute platform (part 1)

| Compute size (service objective) | BC_Gen5_4 | BC_Gen5_6 | BC_Gen5_8 | BC_Gen5_10 | BC_Gen5_12 | BC_Gen5_14 |
| --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 |
| vCores | 4 | 6 | 8 | 10 | 12 | 14 |
| Memory (GB) | 20.8 | 31.1 | 41.5 | 51.9 | 62.3 | 72.7 |
| Max number DBs per pool 1 | 50 | 100 | 100 | 100 | 100 | 100 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | 3.14 | 4.71 | 6.28 | 8.65 | 11.02 | 13.39 |
| Max data size (GB) | 1024 | 1536 | 2048 | 2048 | 3072 | 3072 |
| Max log size (GB) 2 | 307 | 307 | 461 | 461 | 922 | 922 |
| TempDB max data size (GB) | 128 | 192 | 256 | 320 | 384 | 448 |
| Max local storage size (GB) | 4829 | 4829 | 4829 | 4829 | 4829 | 4829 |
| Storage type | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD |
| IO latency (approximate) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) |
| Max data IOPS per pool 3 | 18,000 | 27,000 | 36,000 | 45,000 | 54,000 | 63,000 |
| Max log rate per pool (MBps) | 60 | 90 | 120 | 120 | 120 | 120 |
| Max concurrent workers per pool 4 | 420 | 630 | 840 | 1050 | 1260 | 1470 |
| Max concurrent logins per pool 4 | 420 | 630 | 840 | 1050 | 1260 | 1470 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Min/max elastic pool vCore choices per database | 0, 0.25, 0.5, 1...4 | 0, 0.25, 0.5, 1...6 | 0, 0.25, 0.5, 1...8 | 0, 0.25, 0.5, 1...10 | 0, 0.25, 0.5, 1...12 | 0, 0.25, 0.5, 1...14 |
| Number of replicas | 4 | 4 | 4 | 4 | 4 | 4 |
| Multi-AZ | Yes | Yes | Yes | Yes | Yes | Yes |
| Read Scale-out | Yes | Yes | Yes | Yes | Yes | Yes |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 See Resource management in dense elastic pools for additional considerations.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If the max vCore per database is set to 0.5, then the max concurrent workers value is 50, because on
Gen5 there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1
vCore or less, the number of max concurrent workers is rescaled similarly.
Business Critical service tier: Generation 5 compute platform (part 2)

| Compute size (service objective) | BC_Gen5_16 | BC_Gen5_18 | BC_Gen5_20 | BC_Gen5_24 | BC_Gen5_32 | BC_Gen5_40 | BC_Gen5_80 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hardware | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 | Gen5 |
| vCores | 16 | 18 | 20 | 24 | 32 | 40 | 80 |
| Memory (GB) | 83 | 93.4 | 103.8 | 124.6 | 166.1 | 207.6 | 415.2 |
| Max number DBs per pool 1 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| Columnstore support | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | 15.77 | 18.14 | 20.51 | 25.25 | 37.94 | 52.23 | 131.68 |
| Max data size (GB) | 3072 | 3072 | 3072 | 4096 | 4096 | 4096 | 4096 |
| Max log size (GB) 2 | 922 | 922 | 922 | 1229 | 1229 | 1229 | 1229 |
| TempDB max data size (GB) | 512 | 576 | 640 | 768 | 1024 | 1280 | 2560 |
| Max local storage size (GB) | 4829 | 4829 | 4829 | 4829 | 4829 | 4829 | 4829 |
| Storage type | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD | Local SSD |
| IO latency (approximate) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) |
| Max data IOPS per pool 3 | 72,000 | 81,000 | 90,000 | 108,000 | 144,000 | 180,000 | 256,000 |
| Max log rate per pool (MBps) | 120 | 120 | 120 | 120 | 120 | 120 | 120 |
| Max concurrent workers per pool 4 | 1680 | 1890 | 2100 | 2520 | 3360 | 4200 | 8400 |
| Max concurrent logins per pool 4 | 1680 | 1890 | 2100 | 2520 | 3360 | 4200 | 8400 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 | 30,000 |
| Min/max elastic pool vCore choices per database | 0, 0.25, 0.5, 1...16 | 0, 0.25, 0.5, 1...18 | 0, 0.25, 0.5, 1...20 | 0, 0.25, 0.5, 1...20, 24 | 0, 0.25, 0.5, 1...20, 24, 32 | 0, 0.25, 0.5, 1...20, 24, 32, 40 | 0, 0.25, 0.5, 1...20, 24, 32, 40, 80 |
| Number of replicas | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Multi-AZ | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Read Scale-out | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 See Resource management in dense elastic pools for additional considerations.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If the max vCore per database is set to 0.5, then the max concurrent workers value is 50, because on
Gen5 there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1
vCore or less, the number of max concurrent workers is rescaled similarly.

Business Critical - provisioned compute - M-series


For important information about M-series hardware availability, see Azure offer types supported by M-series.
M-series hardware (part 1)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) B C _M _8 B C _M _10 B C _M _12 B C _M _14 B C _M _16 B C _M _18

Hardware M-series M-series M-series M-series M-series M-series

vCores 8 10 12 14 16 18

Memory (GB) 235.4 294.3 353.2 412.0 470.9 529.7

Max number 100 100 100 100 100 100


DBs per pool
1

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory 64 80 96 112 128 150


OLTP storage
(GB)

Max data size 512 640 768 896 1024 1152


(GB)

Max log size 171 213 256 299 341 384


(GB) 2

TempDB max 256 320 384 448 512 576


data size (GB)

Max local 13836 13836 13836 13836 13836 13836


storage size
(GB)

Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD

IO latency 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write)
(approximate) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read)

Max data 12,499 15,624 18,748 21,873 24,998 28,123


IOPS per pool
3

Max log rate 48 60 72 84 96 108


per pool
(MBps)

Max 800 1,000 1,200 1,400 1,600 1,800


concurrent
workers per
pool 4

Max 800 1,000 1,200 1,400 1,600 1,800


concurrent
logins per
pool 4
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) B C _M _8 B C _M _10 B C _M _12 B C _M _14 B C _M _16 B C _M _18

Max 30000 30000 30000 30000 30000 30000


concurrent
sessions

Min/max 0-8 0-10 0-12 0-14 0-16 0-18


elastic pool
vCore choices
per database

Number of 4 4 4 4 4 4
replicas

Multi-AZ No No No No No No

Read Scale- Yes Yes Yes Yes Yes Yes


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.
M-series hardware (part 2)
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) B C _M _20 B C _M _24 B C _M _32 B C _M _64 B C _M _128

Hardware M-series M-series M-series M-series M-series

vCores 20 24 32 64 128

Memory (GB) 588.6 706.3 941.8 1883.5 3767.0

Max number 100 100 100 100 100


DBs per pool 1

Columnstore Yes Yes Yes Yes Yes


support
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) B C _M _20 B C _M _24 B C _M _32 B C _M _64 B C _M _128

In-memory 172 216 304 704 1768


OLTP storage
(GB)

Max data size 1280 1536 2048 4096 4096


(GB)

Max log size (GB) 427 512 683 1024 1024


2

TempDB max 640 768 1024 2048 4096


data size (GB)

Max local 13836 13836 13836 13836 13836


storage size (GB)

Storage type Local SSD Local SSD Local SSD Local SSD Local SSD

IO latency 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write)
(approximate) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read)

Max data IOPS 31,248 37,497 49,996 99,993 160,000


per pool 3

Max log rate per 120 144 192 264 264


pool (MBps)

Max concurrent 2,000 2,400 3,200 6,400 12,800


workers per pool
4

Max concurrent 2,000 2,400 3,200 6,400 12,800


logins per pool 4

Max concurrent 30000 30000 30000 30000 30000


sessions

Number of 4 4 4 4 4
replicas

Multi-AZ No No No No No

Read Scale-out Yes Yes Yes Yes Yes

Included backup 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.

Business Critical - provisioned compute - DC-series


| Compute size (service objective) | BC_DC_2 | BC_DC_4 | BC_DC_6 | BC_DC_8 |
| --- | --- | --- | --- | --- |
| Hardware | DC | DC | DC | DC |
| vCores | 2 | 4 | 6 | 8 |
| Memory (GB) | 9 | 18 | 27 | 36 |
| Max number DBs per pool 1 | 50 | 100 | 100 | 100 |
| Columnstore support | Yes | Yes | Yes | Yes |
| In-memory OLTP storage (GB) | 1.7 | 3.7 | 5.9 | 8.2 |
| Max data size (GB) | 768 | 768 | 768 | 768 |
| Max log size (GB) 2 | 230 | 230 | 230 | 230 |
| TempDB max data size (GB) | 64 | 128 | 192 | 256 |
| Max local storage size (GB) | 1406 | 1406 | 1406 | 1406 |
| Storage type | Local SSD | Local SSD | Local SSD | Local SSD |
| IO latency (approximate) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) | 1-2 ms (write), 1-2 ms (read) |
| Max data IOPS per pool 3 | 15750 | 31500 | 47250 | 56000 |
| Max log rate per pool (MBps) | 20 | 60 | 90 | 120 |
| Max concurrent workers per pool 4 | 168 | 336 | 504 | 672 |
| Max concurrent logins per pool 4 | 168 | 336 | 504 | 672 |
| Max concurrent sessions | 30,000 | 30,000 | 30,000 | 30,000 |
| Min/max elastic pool vCore choices per database | 2 | 2...4 | 2...6 | 2...8 |
| Number of replicas | 4 | 4 | 4 | 4 |
| Multi-AZ | No | No | No | No |
| Read Scale-out | Yes | Yes | Yes | Yes |
| Included backup storage | 1X DB size | 1X DB size | 1X DB size | 1X DB size |

1 See Resource management in dense elastic pools for additional considerations.
2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If the max vCore per database is set to 0.5, then the max concurrent workers value is 50, because on
Gen5 there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1
vCore or less, the number of max concurrent workers is rescaled similarly.

Database properties for pooled databases


For each elastic pool, you can optionally specify per database minimum and maximum vCores to modify
resource consumption patterns within the pool. Specified min and max values apply to all databases in the pool.
Customizing min and max vCores for individual databases in the pool is not supported.
You can also set maximum storage per database, for example to prevent a database from consuming all pool
storage. This setting can be configured independently for each database.
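For example, a single pooled database can be capped so it cannot consume all of the pool's storage. A minimal T-SQL sketch (the database name SalesDb and the 100 GB cap are illustrative):

```sql
-- Limit the data size of one database in the pool; log space is governed separately.
ALTER DATABASE SalesDb
    MODIFY (MAXSIZE = 100 GB);
```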
The following table describes per database properties for pooled databases.

| Property | Description |
| --- | --- |
| Max vCores per database | The maximum number of vCores that any database in the pool may use, if available based on utilization by other databases in the pool. Max vCores per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max vCores per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. |
| Min vCores per database | The minimum number of vCores reserved for any database in the pool. Consider setting a min vCores per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min vCores per database may be set to 0, which is also the default value. This property can be set to any value between 0 and the average vCores utilization per database. |
| Max storage per database | The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |

IMPORTANT
Because resources in an elastic pool are finite, setting min vCores per database to a value greater than 0 implicitly limits
resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy
the min vCores guarantee are not available to databases active at that point in time.
Additionally, setting min vCores per database to a value greater than 0 implicitly limits the number of databases that can
be added to the pool. For example, if you set the min vCores to 2 in a 20 vCore pool, it means that you will not be able to
add more than 10 databases to the pool, because 2 vCores are reserved for each database.

Even though the per database properties are expressed in vCores, they also govern consumption of other
resource types, such as data IO, log IO, buffer pool memory, and worker threads. As you adjust min and max per
database vCore values, reservations and limits for all resource types are adjusted proportionally.
Min and max per database vCore values apply to resource consumption by user workloads, but not to resource
consumption by internal processes. For example, for a database with a per database max vCores set to half of
the pool vCores, user workload cannot consume more than one half of the buffer pool memory. However, this
database can still take advantage of pages in the buffer pool that were loaded by internal processes. For more
information, see Resource consumption by user workloads and internal processes.
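To see how close a pooled database's user workload is running to its per-database limits across these resource dimensions, sys.dm_db_resource_stats can be sampled from within the database. A sketch (the view retains roughly one hour of 15-second snapshots):

```sql
-- Recent resource utilization for the current database, as a percentage of its limits.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent,
       max_worker_percent,
       max_session_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```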

NOTE
The resource limits of individual databases in elastic pools are generally the same as for single databases outside of pools
that have the same compute size (service objective). For example, the max concurrent workers for a GP_S_Gen5_10
database is 750 workers, so the max concurrent workers for a database in a GP_Gen5_10 pool is also 750 workers. Note
that the total number of concurrent workers in a GP_Gen5_10 pool is 1050. For the max concurrent workers for any
individual database, see Single database resource limits.

Previously available hardware


This section includes details on previously available hardware.
IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Database, elastic pools, or SQL Managed Instance should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to the downtime during scaling operations within
the selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at a time of your choice
before January 31, 2023.
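To find databases on a logical server that still use a Gen4 service objective before the retirement date, the service objective catalog view can be inspected. A sketch (run while connected to the master database; pooled databases report their pool name rather than a Gen4 objective):

```sql
-- List databases whose current service objective is Gen4-based.
SELECT d.name, dso.edition, dso.service_objective, dso.elastic_pool_name
FROM sys.databases AS d
JOIN sys.database_service_objectives AS dso
    ON dso.database_id = d.database_id
WHERE dso.service_objective LIKE '%Gen4%';
```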

General purpose - provisioned compute - Gen4


General purpose service tier: Generation 4 compute platform (part 1)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _GEN 4_1 GP _GEN 4_2 GP _GEN 4_3 GP _GEN 4_4 GP _GEN 4_5 GP _GEN 4_6

Hardware Gen4 Gen4 Gen4 Gen4 Gen4 Gen4

vCores 1 2 3 4 5 6

Memory (GB) 7 14 21 28 35 42

Max number 100 200 500 500 500 500


DBs per pool
1

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 512 756 1536 1536 1536 2048


(GB)

Max log size 2 154 227 461 461 461 614

TempDB max 32 64 96 128 160 192


data size (GB)

Storage type Premium Premium Premium Premium Premium Premium


(Remote) (Remote) (Remote) (Remote) (Remote) (Remote)
Storage Storage Storage Storage Storage Storage

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms
(read) (read) (read) (read) (read) (read)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _GEN 4_1 GP _GEN 4_2 GP _GEN 4_3 GP _GEN 4_4 GP _GEN 4_5 GP _GEN 4_6

Max data 400 800 1200 1600 2000 2400


IOPS per pool
3

Max log rate 6 12 18 24 30 36


per pool
(MBps)

Max 210 420 630 840 1050 1260


concurrent
workers per
pool4

Max 210 420 630 840 1050 1260


concurrent
logins per
pool 4

Max 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Min/max 0, 0.25, 0.5, 1 0, 0.25, 0.5, 1, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5,
elastic pool 2 1...3 1...4 1...5 1...6
vCore choices
per database

Number of 1 1 1 1 1 1
replicas

Multi-AZ N/A N/A N/A N/A N/A N/A

Read Scale- N/A N/A N/A N/A N/A N/A


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.
General purpose service tier: Generation 4 compute platform (part 2)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _GEN 4_7 GP _GEN 4_8 GP _GEN 4_9 GP _GEN 4_10 GP _GEN 4_16 GP _GEN 4_24

Hardware Gen4 Gen4 Gen4 Gen4 Gen4 Gen4

vCores 7 8 9 10 16 24

Memory (GB) 49 56 63 70 112 159.5

Max number 500 500 500 500 500 500


DBs per pool
1

Columnstore Yes Yes Yes Yes Yes Yes


support

In-memory N/A N/A N/A N/A N/A N/A


OLTP storage
(GB)

Max data size 2048 2048 2048 2048 3584 4096


(GB)

Max log size 614 614 614 614 1075 1229


(GB) 2

TempDB max 224 256 288 320 512 768


data size (GB)

Storage type Premium Premium Premium Premium Premium Premium


(Remote) (Remote) (Remote) (Remote) (Remote) (Remote)
Storage Storage Storage Storage Storage Storage

IO latency 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write) 5-7 ms (write)
(approximate) 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms 5-10 ms
(read) (read) (read) (read) (read) (read)

Max data 2800 3200 3600 4000 6400 9600


IOPS per pool
3

Max log rate 42 48 54 60 62.5 62.5


per pool
(MBps)

Max 1470 1680 1890 2100 3360 5040


concurrent
workers per
pool 4

Max 1470 1680 1890 2100 3360 5040


concurrent
logins pool 4
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) GP _GEN 4_7 GP _GEN 4_8 GP _GEN 4_9 GP _GEN 4_10 GP _GEN 4_16 GP _GEN 4_24

Max 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Min/max 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5,
elastic pool 1...7 1...8 1...9 1...10 1...10, 16 1...10, 16, 24
vCore choices
per database

Number of 1 1 1 1 1 1
replicas

Multi-AZ N/A N/A N/A N/A N/A N/A

Read Scale- N/A N/A N/A N/A N/A N/A


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.

Business critical - provisioned compute - Gen4


IMPORTANT
New Gen4 databases are no longer supported in the Australia East or Brazil South regions.

Business critical service tier: Generation 4 compute platform (part 1)


C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) B C _GEN 4_2 B C _GEN 4_3 B C _GEN 4_4 B C _GEN 4_5 B C _GEN 4_6

Hardware Gen4 Gen4 Gen4 Gen4 Gen4

vCores 2 3 4 5 6

Memory (GB) 14 21 28 35 42
C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) B C _GEN 4_2 B C _GEN 4_3 B C _GEN 4_4 B C _GEN 4_5 B C _GEN 4_6

Max number 50 100 100 100 100


DBs per pool 1

Columnstore Yes Yes Yes Yes Yes


support

In-memory 2 3 4 5 6
OLTP storage
(GB)

Storage type Local SSD Local SSD Local SSD Local SSD Local SSD

Max data size 1024 1024 1024 1024 1024


(GB)

Max log size (GB) 307 307 307 307 307


2

TempDB max 64 96 128 160 192


data size (GB)

Max local 1356 1356 1356 1356 1356


storage size (GB)

IO latency 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write)
(approximate) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read)

Max data IOPS 9,000 13,500 18,000 22,500 27,000


per pool 3

Max log rate per 20 30 40 50 60


pool (MBps)

Max concurrent 420 630 840 1050 1260


workers per pool
4

Max concurrent 420 630 840 1050 1260


logins per pool 4

Max concurrent 30,000 30,000 30,000 30,000 30,000


sessions

Min/max elastic 0, 0.25, 0.5, 1, 2 0, 0.25, 0.5, 1...3 0, 0.25, 0.5, 1...4 0, 0.25, 0.5, 1...5 0, 0.25, 0.5, 1...6
pool vCore
choices per
database

Number of 4 4 4 4 4
replicas

Multi-AZ Yes Yes Yes Yes Yes


C O M P UT E SIZ E
( SERVIC E
O B JEC T IVE) B C _GEN 4_2 B C _GEN 4_3 B C _GEN 4_4 B C _GEN 4_5 B C _GEN 4_6

Read Scale-out Yes Yes Yes Yes Yes

Included backup 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.
Business critical service tier: Generation 4 compute platform (part 2)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) B C _GEN 4_7 B C _GEN 4_8 B C _GEN 4_9 B C _GEN 4_10 B C _GEN 4_16 B C _GEN 4_24

Hardware Gen4 Gen4 Gen4 Gen4 Gen4 Gen4

vCores 7 8 9 10 16 24

Memory (GB) 49 56 63 70 112 159.5

Max number 100 100 100 100 100 100


DBs per pool
1

Columnstore N/A N/A N/A N/A N/A N/A


support

In-memory 7 8 9.5 11 20 36
OLTP storage
(GB)

Storage type Local SSD Local SSD Local SSD Local SSD Local SSD Local SSD

Max data size 1024 1024 1024 1024 1024 1024


(GB)

Max log size 307 307 307 307 307 307


(GB) 2

TempDB max 224 256 288 320 512 768


data size (GB)
C O M P UT E
SIZ E ( SERVIC E
O B JEC T IVE) B C _GEN 4_7 B C _GEN 4_8 B C _GEN 4_9 B C _GEN 4_10 B C _GEN 4_16 B C _GEN 4_24

Max local 1356 1356 1356 1356 1356 1356


storage size
(GB)

IO latency 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write) 1-2 ms (write)
(approximate) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read) 1-2 ms (read)

Max data 31,500 36,000 40,500 45,000 72,000 96,000


IOPS per pool
3

Max log rate 70 80 80 80 80 80


per pool
(MBps)

Max 1470 1680 1890 2100 3360 5040


concurrent
workers per
pool 4

Max 1470 1680 1890 2100 3360 5040


concurrent
logins per
pool 4

Max 30,000 30,000 30,000 30,000 30,000 30,000


concurrent
sessions

Min/max 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5, 0, 0.25, 0.5,
elastic pool 1...7 1...8 1...9 1...10 1...10, 16 1...10, 16, 24
vCore choices
per database

Number of 4 4 4 4 4 4
replicas

Multi-AZ Yes Yes Yes Yes Yes Yes

Read Scale- Yes Yes Yes Yes Yes Yes


out

Included 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size 1X DB size


backup
storage

1 See Resource management in dense elastic pools for additional considerations.


2 For documented max data size values. Reducing max data size reduces max log size proportionally.
3 The maximum value for IO sizes ranging between 8 KB and 64 KB. Actual IOPS are workload-dependent. For
details, see Data IO Governance.
4 For the max concurrent workers for any individual database, see Single database resource limits. For example,
if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers
value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50 since on Gen5
there is a maximum of 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or
less, the number of max concurrent workers is rescaled similarly.

Next steps
For vCore resource limits for a single database, see resource limits for single databases using the vCore
purchasing model
For DTU resource limits for a single database, see resource limits for single databases using the DTU
purchasing model
For DTU resource limits for elastic pools, see resource limits for elastic pools using the DTU purchasing
model
For resource limits for managed instances, see managed instance resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a logical SQL server.
Resource limits for elastic pools using the DTU
purchasing model
7/12/2022 • 14 minutes to read

APPLIES TO: Azure SQL Database


This article provides the detailed resource limits for databases in Azure SQL Database that are within an elastic
pool using the DTU purchasing model.
For DTU purchasing model limits for single databases on a server, see Overview of resource limits on a
server.
For DTU purchasing model resource limits for Azure SQL Database, see DTU resource limits single databases
and DTU resource limits elastic pools.
For vCore resource limits, see vCore resource limits - Azure SQL Database and vCore resource limits - elastic
pools.
For more information regarding the different purchasing models, see Purchasing models and service tiers.
Each read-only replica has its own resources such as DTUs, workers, and sessions. Each read-only replica is
subject to the resource limits detailed later in this article.

Elastic pool: Storage sizes and compute sizes


For Azure SQL Database elastic pools, the following tables show the resources available at each service tier and
compute size. You can set the service tier, compute size, and storage amount using:
Transact-SQL via ALTER DATABASE
Azure portal
PowerShell
Azure CLI
REST API

IMPORTANT
For scaling guidance and considerations, see Scale an elastic pool

The resource limits of individual databases in elastic pools are generally the same as for single databases
outside of pools based on DTUs and the service tier. For example, the max concurrent workers for an S2
database is 120 workers. So, the max concurrent workers for a database in a Standard pool is also 120 workers
if the max DTU per database in the pool is 50 DTUs (which is equivalent to S2).
For the same number of DTUs, resources provided to an elastic pool may exceed the resources provided to a
single database outside of an elastic pool. This means it is possible for the eDTU utilization of an elastic pool to
be less than the summation of DTU utilization across databases within the pool, depending on workload
patterns. For example, in an extreme case with only one database in an elastic pool where database DTU
utilization is 100%, it is possible for pool eDTU utilization to be 50% for certain workload patterns. This can
happen even if max DTU per database remains at the maximum supported value for the given pool size.
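
One rough way to observe this behavior is to inspect pool-level utilization from the master database of the logical server with sys.elastic_pool_resource_stats. The sketch below assumes the standard utilization columns of that view; the pool name is a placeholder:

```sql
-- Run in the master database of the logical server; 'MyPool' is a placeholder name.
-- eDTU utilization in each interval is approximately the highest of the three percentages.
SELECT TOP (60)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'MyPool'
ORDER BY end_time DESC;
```
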
NOTE
The storage per pool resource limit in each of the following tables does not include tempdb and log storage.

Basic elastic pool limits

| eDTUs per pool | 50 | 100 | 200 | 300 | 400 | 800 | 1200 | 1600 |
|---|---|---|---|---|---|---|---|---|
| Included storage per pool (GB) | 5 | 10 | 20 | 29 | 39 | 78 | 117 | 156 |
| Max storage per pool (GB) | 5 | 10 | 20 | 29 | 39 | 78 | 117 | 156 |
| Max In-Memory OLTP storage per pool (GB) | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Max number DBs per pool 1 | 100 | 200 | 500 | 500 | 500 | 500 | 500 | 500 |
| Max concurrent workers per pool 2 | 100 | 200 | 400 | 600 | 800 | 1600 | 2400 | 3200 |
| Max concurrent sessions per pool 2 | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 |
| Min DTU per database choices | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 | 0, 5 |
| Max DTU per database choices | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |
| Max storage per database (GB) | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |

1 See Resource management in dense elastic pools for additional considerations.
2 For the max concurrent workers for any individual database, see Single database resource limits. For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50, since on Gen5 there are at most 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or less, the max concurrent workers value is similarly rescaled.
Standard elastic pool limits
| eDTUs per pool | 50 | 100 | 200 | 300 | 400 | 800 |
|---|---|---|---|---|---|---|
| Included storage per pool (GB) 1 | 50 | 100 | 200 | 300 | 400 | 800 |
| Max storage per pool (GB) | 500 | 750 | 1024 | 1280 | 1536 | 2048 |
| Max In-Memory OLTP storage per pool (GB) | N/A | N/A | N/A | N/A | N/A | N/A |
| Max number DBs per pool 2 | 100 | 200 | 500 | 500 | 500 | 500 |
| Max concurrent workers per pool 3 | 100 | 200 | 400 | 600 | 800 | 1600 |
| Max concurrent sessions per pool 3 | 30000 | 30000 | 30000 | 30000 | 30000 | 30000 |
| Min DTU per database choices | 0, 10, 20, 50 | 0, 10, 20, 50, 100 | 0, 10, 20, 50, 100, 200 | 0, 10, 20, 50, 100, 200, 300 | 0, 10, 20, 50, 100, 200, 300, 400 | 0, 10, 20, 50, 100, 200, 300, 400, 800 |
| Max DTU per database choices | 10, 20, 50 | 10, 20, 50, 100 | 10, 20, 50, 100, 200 | 10, 20, 50, 100, 200, 300 | 10, 20, 50, 100, 200, 300, 400 | 10, 20, 50, 100, 200, 300, 400, 800 |
| Max storage per database (GB) | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 |

1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50, since on Gen5 there are at most 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or less, the max concurrent workers value is similarly rescaled.
Standard elastic pool limits (continued)
| eDTUs per pool | 1200 | 1600 | 2000 | 2500 | 3000 |
|---|---|---|---|---|---|
| Included storage per pool (GB) 1 | 1200 | 1600 | 2000 | 2500 | 3000 |
| Max storage per pool (GB) | 2560 | 3072 | 3584 | 4096 | 4096 |
| Max In-Memory OLTP storage per pool (GB) | N/A | N/A | N/A | N/A | N/A |
| Max number DBs per pool 2 | 500 | 500 | 500 | 500 | 500 |
| Max concurrent workers per pool 3 | 2400 | 3200 | 4000 | 5000 | 6000 |
| Max concurrent sessions per pool 3 | 30000 | 30000 | 30000 | 30000 | 30000 |
| Min DTU per database choices | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 0, 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 |
| Max DTU per database choices | 10, 20, 50, 100, 200, 300, 400, 800, 1200 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500 | 10, 20, 50, 100, 200, 300, 400, 800, 1200, 1600, 2000, 2500, 3000 |
| Max storage per database (GB) | 1024 | 1536 | 1792 | 2304 | 2816 |

1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50, since on Gen5 there are at most 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or less, the max concurrent workers value is similarly rescaled.
Premium elastic pool limits
| eDTUs per pool | 125 | 250 | 500 | 1000 | 1500 |
|---|---|---|---|---|---|
| Included storage per pool (GB) 1 | 250 | 500 | 750 | 1024 | 1536 |
| Max storage per pool (GB) | 1024 | 1024 | 1024 | 1024 | 1536 |
| Max In-Memory OLTP storage per pool (GB) | 1 | 2 | 4 | 10 | 12 |
| Max number DBs per pool 2 | 50 | 100 | 100 | 100 | 100 |
| Max concurrent workers per pool (requests) 3 | 200 | 400 | 800 | 1600 | 2400 |
| Max concurrent sessions per pool 3 | 30000 | 30000 | 30000 | 30000 | 30000 |
| Min eDTUs per database | 0, 25, 50, 75, 125 | 0, 25, 50, 75, 125, 250 | 0, 25, 50, 75, 125, 250, 500 | 0, 25, 50, 75, 125, 250, 500, 1000 | 0, 25, 50, 75, 125, 250, 500, 1000 |
| Max eDTUs per database | 25, 50, 75, 125 | 25, 50, 75, 125, 250 | 25, 50, 75, 125, 250, 500 | 25, 50, 75, 125, 250, 500, 1000 | 25, 50, 75, 125, 250, 500, 1000 |
| Max storage per database (GB) | 1024 | 1024 | 1024 | 1024 | 1536 |

1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50, since on Gen5 there are at most 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or less, the max concurrent workers value is similarly rescaled.
Premium elastic pool limits (continued)
| eDTUs per pool | 2000 | 2500 | 3000 | 3500 | 4000 |
|---|---|---|---|---|---|
| Included storage per pool (GB) 1 | 2048 | 2560 | 3072 | 3548 | 4096 |
| Max storage per pool (GB) | 2048 | 2560 | 3072 | 3548 | 4096 |
| Max In-Memory OLTP storage per pool (GB) | 16 | 20 | 24 | 28 | 32 |
| Max number DBs per pool 2 | 100 | 100 | 100 | 100 | 100 |
| Max concurrent workers per pool 3 | 3200 | 4000 | 4800 | 5600 | 6400 |
| Max concurrent sessions per pool 3 | 30000 | 30000 | 30000 | 30000 | 30000 |
| Min DTU per database choices | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750 | 0, 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 |
| Max DTU per database choices | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750 | 25, 50, 75, 125, 250, 500, 1000, 1750, 4000 |
| Max storage per database (GB) | 2048 | 2560 | 3072 | 3584 | 4096 |

1 See SQL Database pricing options for details on additional cost incurred due to any extra storage provisioned.
2 See Resource management in dense elastic pools for additional considerations.
3 For the max concurrent workers for any individual database, see Single database resource limits. For example, if the elastic pool is using Gen5 and the max vCore per database is set at 2, then the max concurrent workers value is 200. If max vCore per database is set to 0.5, then the max concurrent workers value is 50, since on Gen5 there are at most 100 concurrent workers per vCore. For other max vCore per database settings of 1 vCore or less, the max concurrent workers value is similarly rescaled.

IMPORTANT
More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North,
Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For
more information, see P11-P15 current limitations.

If all DTUs of an elastic pool are used, then each database in the pool receives an equal amount of resources to
process queries. The SQL Database service provides resource sharing fairness between databases by ensuring
equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of resource
otherwise guaranteed to each database when the DTU min per database is set to a non-zero value.
NOTE
For additional information on storage limits in the Premium service tier, see Storage space governance.

Database properties for pooled databases


For each elastic pool, you can optionally specify per database minimum and maximum DTUs to modify resource
consumption patterns within the pool. Specified min and max values apply to all databases in the pool.
Customizing min and max DTUs for individual databases in the pool is not supported.
You can also set maximum storage per database, for example to prevent a database from consuming all pool
storage. This setting can be configured independently for each database.
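
For example, capping the size of one pooled database can be done with a short Transact-SQL statement; the database name and size below are illustrative, and the size must be one of the values supported by the pool's service tier:

```sql
-- Placeholder database name and size; connect to the logical server and run:
ALTER DATABASE [MyPooledDb]
    MODIFY (MAXSIZE = 2 GB);
```
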
The following table describes per database properties for pooled databases.

| Property | Description |
|---|---|
| Max DTUs per database | The maximum number of DTUs that any database in the pool may use, if available based on utilization by other databases in the pool. Max DTUs per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max DTUs per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected, since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. |
| Min DTUs per database | The minimum number of DTUs reserved for any database in the pool. Consider setting a min DTUs per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min DTUs per database may be set to 0, which is also the default value. This property may be set to anywhere between 0 and the average DTU utilization per database. |
| Max storage per database | The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |

IMPORTANT
Because resources in an elastic pool are finite, setting min DTUs per database to a value greater than 0 implicitly limits
resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy
the min DTUs guarantee are not available to databases active at that point in time.
Additionally, setting min DTUs per database to a value greater than 0 implicitly limits the number of databases that can be
added to the pool. For example, if you set the min DTUs to 100 in a 400 DTU pool, it means that you will not be able to
add more than 4 databases to the pool, because 100 DTUs are reserved for each database.

While the per-database properties are expressed in DTUs, they also govern consumption of other resource types, such as data IO, log IO, buffer pool memory, and worker threads. As you adjust per-database min and max DTU values, reservations and limits for all resource types are adjusted proportionally.
Min and max per database DTU values apply to resource consumption by user workloads, but not to resource
consumption by internal processes. For example, for a database with a per database max DTU set to half of the
pool eDTU, user workload cannot consume more than one half of the buffer pool memory. However, this
database can still take advantage of pages in the buffer pool that were loaded by internal processes. For more
information, see Resource consumption by user workloads and internal processes.
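
To see how a single pooled database is tracking against its own per-database limits, you can query sys.dm_db_resource_stats from inside that database. The sketch below uses the standard utilization columns reported by that view:

```sql
-- Run inside the pooled database; each row covers a 15-second interval (roughly the last hour is kept).
SELECT TOP (30)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent,
       max_worker_percent,
       max_session_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```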

Tempdb sizes
The following table lists tempdb sizes for elastic pools in Azure SQL Database:

| Service-level objective | Maximum tempdb data file size (GB) | Number of tempdb data files | Maximum tempdb data size (GB) |
|---|---|---|---|
| Basic Elastic Pools (all DTU configurations) | 13.9 | 12 | 166.7 |
| Standard Elastic Pools (50 eDTU) | 13.9 | 12 | 166.7 |
| Standard Elastic Pools (100 eDTU) | 32 | 1 | 32 |
| Standard Elastic Pools (200 eDTU) | 32 | 2 | 64 |
| Standard Elastic Pools (300 eDTU) | 32 | 3 | 96 |
| Standard Elastic Pools (400 eDTU) | 32 | 3 | 96 |
| Standard Elastic Pools (800 eDTU) | 32 | 6 | 192 |
| Standard Elastic Pools (1200 eDTU) | 32 | 10 | 320 |
| Standard Elastic Pools (1600-3000 eDTU) | 32 | 12 | 384 |
| Premium Elastic Pools (all DTU configurations) | 13.9 | 12 | 166.7 |
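
To check how many tempdb data files the databases in your pool currently see, and how large they are, a query along the following lines can be used. This is a sketch that assumes the tempdb system catalog is visible from your database:

```sql
-- Size is reported in 8-KB pages, so convert to MB.
SELECT name,
       type_desc,
       CAST(size AS bigint) * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```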

Next steps
For vCore resource limits for a single database, see resource limits for single databases using the vCore
purchasing model
For DTU resource limits for a single database, see resource limits for single databases using the DTU
purchasing model
For vCore resource limits for elastic pools, see resource limits for elastic pools using the vCore purchasing
model
For resource limits for managed instances in Azure SQL Managed Instance, see SQL Managed Instance
resource limits.
For information about general Azure limits, see Azure subscription and service limits, quotas, and constraints.
For information about limits at the server and subscription levels, see the overview of resource limits on a logical SQL server.
Migration guide: Access to Azure SQL Database
7/12/2022 • 6 minutes to read

In this guide, you learn how to migrate your Microsoft Access database to an Azure SQL database by using SQL
Server Migration Assistant for Access (SSMA for Access).
For other migration guides, see Azure Database Migration Guide.

Prerequisites
Before you begin migrating your Access database to a SQL database, do the following:
Verify that your source environment is supported.
Download and install SQL Server Migration Assistant for Access.
Ensure that you have connectivity and sufficient permissions to access both source and target.

Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess
Use SSMA for Access to review database objects and data, and assess databases for migration.
To create an assessment, do the following:
1. Open SSMA for Access.
2. Select File , and then select New Project .
3. Provide a project name and a location for your project and then, in the drop-down list, select Azure SQL
Database as the migration target.
4. Select OK .

5. Select Add Databases , and then select the databases to be added to your new project.
6. On the Access Metadata Explorer pane, right-click a database, and then select Create Report. Alternatively, you can select the Create Report tab at the upper right.

7. Review the HTML report to understand the conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Access objects and understand the effort required to
perform schema conversions. The default location for the report is in the report folder within
SSMAProjects. For example:
drive:\<username>\Documents\SSMAProjects\MyAccessMigration\report\report_<date>
Validate the data types
Validate the default data type mappings, and change them based on your requirements, if necessary. To do so:
1. In SSMA for Access, select Tools , and then select Project Settings .
2. Select the Type Mapping tab.

3. You can change the type mapping for each table by selecting the table name on the Access Metadata
Explorer pane.
Convert the schema
To convert database objects, do the following:
1. Select the Connect to Azure SQL Database tab, and then do the following:
a. Enter the details for connecting to your SQL database.
b. In the drop-down list, select your target SQL database. Or you can enter a new name, in which case a
database will be created on the target server.
c. Provide authentication details.
d. Select Connect .

2. On the Access Metadata Explorer pane, right-click the database, and then select Convert Schema. Alternatively, you can select your database and then select the Convert Schema tab.

3. After the conversion is completed, compare the converted objects to the original objects to identify
potential problems, and address the problems based on the recommendations.
Compare the converted Transact-SQL text to the original code, and review the recommendations.

4. (Optional) To convert an individual object, right-click the object, and then select Convert Schema. Converted objects appear in bold text in Access Metadata Explorer:
5. On the Output pane, select the Review results icon, and review the errors on the Error list pane.
6. Save the project locally for an offline schema remediation exercise. To do so, select File > Save Project .
This gives you an opportunity to evaluate the source and target schemas offline and perform remediation
before you publish them to your SQL database.

Migrate the databases


After you've assessed your databases and addressed any discrepancies, you can run the migration process.
Migrating data is a bulk-load operation that moves rows of data into an Azure SQL database in transactions. The
number of rows to be loaded into your SQL database in each transaction is configured in the project settings.
To publish your schema and migrate the data by using SSMA for Access, do the following:
1. If you haven't already done so, select Connect to Azure SQL Database , and provide connection details.
2. Publish the schema. On the Azure SQL Database Metadata Explorer pane, right-click the database
you're working with, and then select Synchronize with Database. This action publishes the Access schema to the SQL database.
3. On the Synchronize with the Database pane, review the mapping between your source project and
your target:
4. On the Access Metadata Explorer pane, select the check boxes next to the items you want to migrate.
To migrate the entire database, select the check box next to the database.
5. Migrate the data. Right-click the database or object you want to migrate, and then select Migrate Data .
Alternatively, you can select the Migrate Data tab at the upper right.
To migrate data for an entire database, select the check box next to the database name. To migrate data
from individual tables, expand the database, expand Tables , and then select the check box next to the
table. To omit data from individual tables, clear the check box.

6. After migration is completed, view the Data Migration Report.


7. Connect to your Azure SQL database by using SQL Server Management Studio, and validate the
migration by reviewing the data and schema.

Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create the validation queries to run against both the source and target databases. Your validation queries should cover the scope you've defined (a sample row-count query is sketched after this list).
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
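
For instance, a basic validation test can compare row counts per table between the source and the target. The sketch below covers only the Azure SQL Database side; an equivalent count must be gathered from the Access source, and the schema and table names it returns are simply whatever your migration produced:

```sql
-- Row-count snapshot for the target database; compare against the same
-- counts gathered from the source.
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY schema_name, table_name;
```
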
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.

Migration assets
For more assistance with completing this migration scenario, see the following resource. It was developed in
support of a real-world migration project engagement.

| Title | Description |
|---|---|
| Data workload assessment model and tool | Provides suggested “best fit” target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Service and tools for data migration.
To learn more about Azure SQL Database see:
An overview of SQL Database
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
Cloud Migration Resources
To assess the application access layer, see Data Access Migration Toolkit (preview).
For information about how to perform Data Access Layer A/B testing, see Overview of Database
Experimentation Assistant.
Migration guide: IBM Db2 to Azure SQL Database
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database


In this guide, you learn how to migrate your IBM Db2 databases to Azure SQL Database, by using SQL Server
Migration Assistant for Db2.
For other migration guides, see Azure Database Migration Guides.

Prerequisites
To migrate your Db2 database to SQL Database, you need:
To verify that your source environment is supported.
To download SQL Server Migration Assistant (SSMA) for Db2.
A target database in Azure SQL Database.
Connectivity and sufficient permissions to access both source and target.

Pre-migration
After you have met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess and convert
Use SSMA for DB2 to review database objects and data, and assess databases for migration.
To create an assessment, follow these steps:
1. Open SSMA for Db2.
2. Select File > New Project .
3. Provide a project name and a location to save your project. Then select Azure SQL Database as the
migration target from the drop-down list, and select OK .
4. On Connect to Db2 , enter values for the Db2 connection details.

5. Right-click the Db2 schema you want to migrate, and then choose Create report. This will generate an HTML report. Alternatively, you can choose Create report from the navigation bar after selecting the schema.

6. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example: drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date> .
Validate data types
Validate the default data type mappings, and change them based on requirements if necessary. To do so, follow
these steps:
1. Select Tools from the menu.
2. Select Project Settings .
3. Select the Type mappings tab.

4. You can change the type mapping for each table by selecting the table in the Db2 Metadata Explorer .
Convert schema
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose Add
statements .
2. Select Connect to Azure SQL Database .
a. Enter connection details to connect your database in Azure SQL Database.
b. Choose your target SQL Database from the drop-down list, or provide a new name, in which case a
database will be created on the target server.
c. Provide authentication details.
d. Select Connect .
3. Right-click the schema, and then choose Convert Schema. Alternatively, you can choose Convert Schema from the top navigation bar after selecting your schema.

4. After the conversion completes, compare and review the structure of the schema to identify potential
problems. Address the problems based on the recommendations.
5. In the Output pane, select Review results . In the Error list pane, review errors.
6. Save the project locally for an offline schema remediation exercise. From the File menu, select Save
Project . This gives you an opportunity to evaluate the source and target schemas offline, and perform
remediation before you can publish the schema to SQL Database.

Migrate
After you have completed assessing your databases and addressing any discrepancies, the next step is to
execute the migration process.
To publish your schema and migrate your data, follow these steps:
1. Publish the schema. In Azure SQL Database Metadata Explorer , from the Databases node, right-
click the database. Then select Synchronize with Database .
2. Migrate the data. Right-click the database or object you want to migrate in Db2 Metadata Explorer , and
choose Migrate data . Alternatively, you can select Migrate Data from the navigation bar. To migrate
data for an entire database, select the check box next to the database name. To migrate data from
individual tables, expand the database, expand Tables , and then select the check box next to the table. To
omit data from individual tables, clear the check box.

3. Provide connection details for both Db2 and Azure SQL Database.
4. After migration completes, view the Data Migration Report.
5. Connect to your database in Azure SQL Database by using SQL Server Management Studio. Validate the
migration by reviewing the data and schema.

Post-migration
After the migration is complete, you need to go through a series of post-migration tasks to ensure that
everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
Testing consists of the following activities:
1. Develop validation tests : To test database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you have defined.
2. Set up the test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.

Advanced features
Be sure to take advantage of the advanced cloud-based features offered by SQL Database, such as built-in high
availability, threat detection, and monitoring and tuning your workload.
Some SQL Server features are only available when the database compatibility level is changed to the latest
compatibility level.
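
For example, you can check and, if appropriate, raise the compatibility level with Transact-SQL. The level value below is illustrative; confirm the latest level supported by Azure SQL Database before changing it:

```sql
-- Check the current compatibility level of the database you are connected to.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();

-- Raise it (160 corresponds to the SQL Server 2022 feature surface; adjust as needed).
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 160;
```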

Migration assets
For additional assistance, see the following resources, which were developed in support of a real-world
migration project engagement:

| Asset | Description |
|---|---|
| Data workload assessment model and tool | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
| Db2 zOS data assets discovery and assessment package | After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed. |
| IBM Db2 LUW inventory scripts and artifacts | This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format. |
| IBM Db2 to SQL DB - Database Compare utility | The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns. |

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
For Microsoft and third-party services and tools to assist you with various database and data migration
scenarios, see Service and tools for data migration.
To learn more about Azure SQL Database, see:
An overview of SQL Database
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
Cloud Migration Resources
To assess the application access layer, see Data Access Migration Toolkit.
For details on how to perform data access layer A/B testing, see Database Experimentation Assistant.
Migration guide: Oracle to Azure SQL Database
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Database


This guide teaches you to migrate your Oracle schemas to Azure SQL Database by using SQL Server Migration
Assistant for Oracle (SSMA for Oracle).
For other migration guides, see Azure Database Migration Guides.

IMPORTANT
Try the new Database Migration Assessment for Oracle extension in Azure Data Studio for Oracle to SQL pre-assessment and workload categorization. If you are in the early phase of an Oracle to SQL migration and need a high-level workload assessment, want to size the Azure SQL target for the Oracle workload, or want to understand feature migration parity, try the new extension. For detailed code assessment and conversion, continue with SSMA for Oracle.

Prerequisites
Before you begin migrating your Oracle schema to SQL Database:
Verify that your source environment is supported.
Download SSMA for Oracle.
Have a target SQL Database instance.
Obtain the necessary permissions for SSMA for Oracle and provider.

Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration. This part of the process involves conducting an inventory of the
databases that you need to migrate, assessing those databases for potential migration issues or blockers, and
then resolving any items you might have uncovered.
Assess
By using SSMA for Oracle, you can review database objects and data, assess databases for migration, migrate
database objects to SQL Database, and then finally migrate data to the database.
To create an assessment:
1. Open SSMA for Oracle.
2. Select File , and then select New Project .
3. Enter a project name and a location to save your project. Then select Azure SQL Database as the
migration target from the drop-down list and select OK .
4. Select Connect to Oracle . Enter values for Oracle connection details in the Connect to Oracle dialog
box.
5. Select the Oracle schemas you want to migrate.

6. In Oracle Metadata Explorer , right-click the Oracle schema you want to migrate and then select
Create Report to generate an HTML report. Alternatively, you can select a database and then select the Create Report tab.
7. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example, see
drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\ .

Validate the data types


Validate the default data type mappings and change them based on requirements if necessary. To do so, follow
these steps:
1. In SSMA for Oracle, select Tools , and then select Project Settings .
2. Select the Type Mapping tab.
3. You can change the type mapping for each table by selecting the table in Oracle Metadata Explorer .
Convert the schema
To convert the schema:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then select Add
statements .
2. Select the Connect to Azure SQL Database tab.
a. In SQL Database , enter connection details to connect your database.
b. Select your target SQL Database instance from the drop-down list, or enter a new name, in which case
a database will be created on the target server.
c. Enter authentication details, and select Connect .

3. In Oracle Metadata Explorer, right-click the Oracle schema and then select Convert Schema. Or, you can select your schema and then select the Convert Schema tab.
4. After the conversion finishes, compare and review the converted objects to the original objects to identify
potential problems and address them based on the recommendations.

5. Compare the converted Transact-SQL text to the original stored procedures, and review the
recommendations.
6. In the output pane, select Review results and review the errors in the Error List pane.
7. Save the project locally for an offline schema remediation exercise. On the File menu, select Save
Project . This step gives you an opportunity to evaluate the source and target schemas offline and
perform remediation before you publish the schema to SQL Database.

Migrate
After you've assessed your databases and addressed any discrepancies, the next step is to run the migration
process. Migration involves two steps: publishing the schema and migrating the data.
To publish your schema and migrate your data:
1. Publish the schema by right-clicking the database from the Databases node in Azure SQL Database
Metadata Explorer and selecting Synchronize with Database .

2. Review the mapping between your source project and your target.
3. Migrate the data by right-clicking the database or object you want to migrate in Oracle Metadata
Explorer and selecting Migrate Data . Or, you can select the Migrate Data tab. To migrate data for an
entire database, select the check box next to the database name. To migrate data from individual tables,
expand the database, expand Tables , and then select the checkboxes next to the tables. To omit data from
individual tables, clear the checkboxes.

4. Enter connection details for both Oracle and SQL Database.


5. After the migration is completed, view the Data Migration Report.
6. Connect to your SQL Database instance by using SQL Server Management Studio, and validate the
migration by reviewing the data and schema.

Or, you can also use SQL Server Integration Services to perform the migration. To learn more, see:
Getting started with SQL Server Integration Services
SQL Server Integration Services for Azure and Hybrid Data Movement

Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this task will require changes to the applications in some
cases.
The Data Access Migration Toolkit is an extension for Visual Studio Code that allows you to analyze your Java
source code and detect data access API calls and queries. The toolkit provides you with a single-pane view of
what needs to be addressed to support the new database back end. To learn more, see the Migrate your Java
applications from Oracle blog post.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Validate migrated objects
Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to test migrated database
objects. The SSMA Tester is used to verify that converted objects behave in the same way.
Create test case
1. Open SSMA for Oracle, select Tester followed by New Test Case .

2. Provide the following information for the new test case:


Name: Enter the name to identify the test case.
Creation date: Today's current date, defined automatically.
Last Modified date: Filled in automatically, should not be changed.
Description: Enter any additional information to identify the purpose of the test case.
3. Select the objects that are part of the test case from the Oracle object tree located in the left side.

In this example, the stored procedure ADD_REGION and the table REGION are selected.


To learn more, see Selecting and configuring objects to test.
4. Next, select the tables, foreign keys, and other dependent objects from the Oracle object tree in the left
window.
To learn more, see Selecting and configuring affected objects.
5. Review the evaluation sequence of objects. Change the order by clicking the buttons in the grid.

6. Finalize the test case by reviewing the information provided in the previous steps. Configure the test
execution options based on the test scenario.
For more information on test case settings, see Finishing test case preparation.
7. Select Finish to create the test case.

Run test case


When SSMA Tester runs a test case, the test engine executes the objects selected for testing and generates a
verification report.
1. Select the test case from the test repository, and then select Run.
2. Review the launch test case, and then select Run.

3. Next, provide the Oracle source credentials. Select Connect after entering the credentials.
4. Provide the target SQL Server credentials, and then select Connect.

On success, the test case moves to initialization stage.


5. A real-time progress bar shows the execution status of the test run.
6. Review the report after the test is completed. The report provides the statistics, any errors encountered during the test run, and a detailed report.

7. Click details to get more information.


Example of positive data validation.
Example of failed data validation.

Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.

NOTE
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.

Migration assets
For more assistance with completing this migration scenario, see the following resources. They were developed
in support of a real-world migration project engagement.

| Title/Link | Description |
|---|---|
| Data Workload Assessment Model and Tool | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
| Oracle Inventory Script Artifacts | This asset includes a PL/SQL query that hits Oracle system tables and provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw data in each schema and the sizing of tables in each schema, with results stored in a CSV format. |
| Automate SSMA Oracle Assessment Collection & Consolidation | This set of resources uses a .csv file as entry (sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances. The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and VariableValueFile.xml. |
| Oracle to SQL DB - Database Compare utility | SSMA for Oracle Tester is the recommended tool to automatically validate the database object conversion and data migration, and it's a superset of Database Compare functionality. If you're looking for an alternative data validation option, you can use the Database Compare utility to compare data down to the row or column level in all or selected tables, rows, and columns. |

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Services and tools for data migration.
To learn more about SQL Database, see:
An overview of Azure SQL Database
Azure Total Cost of Ownership (TCO) Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
Cloud Migration Resources
For video content, see:
Overview of the migration journey and the tools and services recommended for performing
assessment and migration
Migration guide: MySQL to Azure SQL Database
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database


In this guide, you learn how to migrate your MySQL database to an Azure SQL database by using SQL Server
Migration Assistant for MySQL (SSMA for MySQL).
For other migration guides, see Azure Database Migration Guide.

Prerequisites
Before you begin migrating your MySQL database to a SQL database, do the following:
Verify that your source environment is supported. Currently, MySQL 5.6 and 5.7 are supported.
Download and install SQL Server Migration Assistant for MySQL.
Ensure that you have connectivity and sufficient permissions to access both the source and the target.

Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess
Use SQL Server Migration Assistant (SSMA) for MySQL to review database objects and data, and assess
databases for migration.
To create an assessment, do the following:
1. Open SSMA for MySQL.
2. Select File , and then select New Project .
3. In the New Project pane, enter a name and location for your project and then, in the Migrate To drop-
down list, select Azure SQL Database .
4. Select OK .
5. Select the Connect to MySQL tab, and then provide details for connecting your MySQL server.

6. On the MySQL Metadata Explorer pane, right-click the MySQL schema, and then select Create Report. Alternatively, you can select the Create Report tab at the upper right.
7. Review the HTML report to understand the conversion statistics, errors, and warnings. Analyze it to
understand the conversion issues and resolutions. You can also open the report in Excel to get an
inventory of MySQL objects and understand the effort that's required to perform schema conversions.
The default location for the report is in the report folder within SSMAProjects. For example:
drive:\Users\<username>\Documents\SSMAProjects\MySQLMigration\report\report_2016_11_12T02_47_55\

Validate the data types


Validate the default data type mappings and change them based on requirements, if necessary. To do so:
1. Select Tools , and then select Project Settings .
2. Select the Type Mappings tab.
3. You can change the type mapping for each table by selecting the table name on the MySQL Metadata
Explorer pane.
Convert the schema
To convert the schema, do the following:
1. (Optional) To convert dynamic or specialized queries, right-click the node, and then select Add
statement .
2. Select the Connect to Azure SQL Database tab, and then do the following:
a. Enter the details for connecting to your SQL database.
b. In the drop-down list, select your target SQL database. Or you can provide a new name, in which case a
database will be created on the target server.
c. Provide authentication details.
d. Select Connect .

3. Right-click the schema you're working with, and then select Convert Schema. Alternatively, you can select the Convert Schema tab at the upper right.

4. After the conversion is completed, review and compare the converted objects to the original objects to
identify potential problems and address them based on the recommendations.

Compare the converted Transact-SQL text to the original code, and review the recommendations.
5. On the Output pane, select Review results , and then review any errors on the Error list pane.
6. Save the project locally for an offline schema remediation exercise. To do so, select File > Save Project .
This gives you an opportunity to evaluate the source and target schemas offline and perform remediation
before you publish the schema to your SQL database.
Compare the converted procedures to the original procedures, as shown here:

Migrate the databases


After you've assessed your databases and addressed any discrepancies, you can run the migration process.
Migration involves two steps: publishing the schema and migrating the data.
To publish the schema and migrate the data, do the following:
1. Publish the schema. On the Azure SQL Database Metadata Explorer pane, right-click the database,
and then select Synchronize with Database . This action publishes the MySQL schema to your SQL
database.
2. Migrate the data. On the MySQL Metadata Explorer pane, right-click the MySQL schema you want to
migrate, and then select Migrate Data . Alternatively, you can select the Migrate Data tab at the upper
right.
To migrate data for an entire database, select the check box next to the database name. To migrate data
from individual tables, expand the database, expand Tables , and then select the check box next to the
table. To omit data from individual tables, clear the check box.

3. After the migration is completed, view the Data Migration Report.


4. Connect to your SQL database by using SQL Server Management Studio and validate the migration by
reviewing the data and schema.

Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create
the validation queries to run against both the source and target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.

Migration assets
For more assistance with completing this migration scenario, see the following resource. It was developed in
support of a real-world migration project engagement.

| Title | Description |
|---|---|
| Data workload assessment model and tool | Provides suggested “best fit” target platforms, cloud readiness, and application/database remediation levels for specified workloads. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated, uniform target-platform decision process. |
| MySQL to SQL DB - Database Compare utility | The Database Compare utility is a Windows console application that you can use to verify that the data is identical both on source and target platforms. You can use the tool to efficiently compare data down to the row or column level in all or selected tables, rows, and columns. |

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
To help estimate the cost savings you can realize by migrating your workloads to Azure, see the Azure
total cost of ownership calculator.
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Service and tools for data migration.
For other migration guides, see Azure Database Migration Guide.
For migration videos, see Overview of the migration journey and recommended migration and
assessment tools and services.
For more cloud migration resources, see cloud migration solutions.
Migration guide: SAP ASE to Azure SQL Database
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


In this guide, you learn how to migrate your SAP Adaptive Server Enterprise (ASE) databases to an Azure SQL database by using SQL Server Migration Assistant for SAP Adaptive Server Enterprise.
For other migration guides, see Azure Database Migration Guide.

Prerequisites
Before you begin migrating your SAP ASE database to your SQL database, do the following:
Verify that your source environment is supported.
Download and install SQL Server Migration Assistant for SAP Adaptive Server Enterprise (formerly SAP
Sybase ASE).
Ensure that you have connectivity and sufficient permissions to access both source and target.

Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your Azure cloud migration.
Assess
By using SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (formerly SAP Sybase ASE),
you can review database objects and data, assess databases for migration, migrate Sybase database objects to
your SQL database, and then migrate data to the SQL database. To learn more, see SQL Server Migration
Assistant for Sybase (SybaseToSQL).
To create an assessment, do the following:
1. Open SSMA for Sybase.
2. Select File , and then select New Project .
3. In the New Project pane, enter a name and location for your project and then, in the Migrate To drop-
down list, select Azure SQL Database .
4. Select OK .
5. On the Connect to Sybase pane, enter the SAP connection details.
6. Right-click the SAP database you want to migrate, and then select Create report. This generates an HTML report. Alternatively, you can select the Create report tab at the upper right.
7. Review the HTML report to understand the conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of SAP ASE objects and the effort that's required to perform
schema conversions. The default location for the report is in the report folder within SSMAProjects. For
example:
drive:\<username>\Documents\SSMAProjects\MySAPMigration\report\report_<date>

Validate the type mappings


Before you perform schema conversion, validate the default data-type mappings or change them based on
requirements. You can do so by selecting Tools > Project Settings , or you can change the type mapping for
each table by selecting the table in the SAP ASE Metadata Explorer .
Convert the schema
To convert the schema, do the following:
1. (Optional) To convert dynamic or specialized queries, right-click the node, and then select Add
statement .
2. Select the Connect to Azure SQL Database tab, and then enter the details for your SQL database. You
can choose to connect to an existing database or provide a new name, in which case a database will be
created on the target server.
3. On the Sybase Metadata Explorer pane, right-click the SAP ASE schema you're working with, and then select Convert Schema.
4. After the schema has been converted, compare and review the converted structure to the original structure to identify potential problems.
5. On the Output pane, select Review results , and review any errors in the Error list pane.
6. Save the project locally for an offline schema remediation exercise. To do so, select File > Save Project .
This gives you an opportunity to evaluate the source and target schemas offline and perform remediation
before you publish the schema to your SQL database.

Migrate the databases


After you have the necessary prerequisites in place and have completed the tasks associated with the pre-
migration stage, you're ready to run the schema and data migration.
To publish the schema and migrate the data, do the following:
1. Publish the schema. On the Azure SQL Database Metadata Explorer pane, right-click the database,
and then select Synchronize with Database . This action publishes the SAP ASE schema to your SQL
database.
2. Migrate the data. On the SAP ASE Metadata Explorer pane, right-click the SAP ASE database or object
you want to migrate, and then select Migrate Data . Alternatively, you can select the Migrate Data tab
at the upper right.
To migrate data for an entire database, select the check box next to the database name. To migrate data
from individual tables, expand the database, expand Tables , and then select the check box next to the
table. To omit data from individual tables, clear the check box.
3. After the migration is completed, view the Data Migration Report.
4. Validate the migration by reviewing the data and schema. To do so, connect to your SQL database by
using SQL Server Management Studio.

Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create
the validation queries to run against both the source and target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.

Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Service and tools for data migration.
To learn more about Azure SQL Database, see:
An overview of SQL Database
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
Cloud Migration Resources
To assess the application access layer, see Data Access Migration Toolkit (preview).
For details on how to perform Data Access Layer A/B testing see Database Experimentation Assistant.
Migration overview: SQL Server to Azure SQL
Database
7/12/2022

APPLIES TO: Azure SQL Database


Learn about the options and considerations for migrating your SQL Server databases to Azure SQL Database.
You can migrate existing SQL Server databases running on:
SQL Server on-premises.
SQL Server on Azure Virtual Machines.
Amazon Web Services (AWS) Elastic Compute Cloud (EC2).
AWS Relational Database Service (RDS).
Compute Engine in Google Cloud Platform (GCP).
Cloud SQL for SQL Server in GCP.
For other migration guides, see Database Migration.

Overview
Azure SQL Database is a recommended target option for SQL Server workloads that require a fully managed
platform as a service (PaaS). SQL Database handles most database management functions. It also has built-in
high availability, intelligent query processing, scalability, and performance capabilities to suit many application
types.
SQL Database provides flexibility with multiple deployment models and service tiers that cater to different types
of applications or workloads.
One of the key benefits of migrating to SQL Database is that you can modernize your application by using the
PaaS capabilities. You can then eliminate any dependency on technical components that are scoped at the
instance level, such as SQL Agent jobs.
You can also save costs by using the Azure Hybrid Benefit for SQL Server to migrate your SQL Server on-
premises licenses to Azure SQL Database. This option is available if you choose the vCore-based purchasing
model.
Be sure to review the SQL Server database engine features available in Azure SQL Database to validate the
supportability of your migration target.

Considerations
The key factors to consider when you're evaluating migration options are:
Number of servers and databases
Size of databases
Acceptable business downtime during the migration process
The migration options listed in this guide take these factors into account. For logical data migration to Azure
SQL Database, the time to migrate can depend on both the number of objects in a database and the size of the
database.
Tools are available for various workloads and user preferences. Some tools can be used to perform a quick
migration of a single database through a UI-based tool. Other tools can automate the migration of multiple
databases to handle migrations at scale.

Choose an appropriate target


Consider general guidelines to help you choose the right deployment model and service tier of Azure SQL
Database. You can choose compute and storage resources during deployment and then change them afterward
by using the Azure portal without incurring downtime for your application.
Deployment models : Understand your application workload and the usage pattern to decide between a single
database or an elastic pool.
A single database represents a fully managed database that's suitable for most modern cloud applications
and microservices.
An elastic pool is a collection of single databases with a shared set of resources, such as CPU or memory. It's
suitable for combining databases in a pool with predictable usage patterns that can effectively share the
same set of resources.
Purchasing models : Choose between the vCore, database transaction unit (DTU), or serverless purchasing
models.
The vCore model lets you choose the number of vCores for Azure SQL Database, so it's the easiest choice
when you're translating from on-premises SQL Server. This is the only option that supports saving license
costs with the Azure Hybrid Benefit.
The DTU model abstracts the underlying compute, memory, and I/O resources to provide a blended DTU.
The serverless model is for workloads that require automatic on-demand scaling with compute resources
billed per second of usage. The serverless compute tier automatically pauses databases during inactive
periods (where only storage is billed). It automatically resumes databases when activity returns.
Service tiers: Choose between three service tiers designed for different types of applications.
General Purpose/standard service tier offers a balanced budget-oriented option with compute and storage
suitable to deliver applications in the middle and lower tiers. Redundancy is built in at the storage layer to
recover from failures. It's designed for most database workloads.
Business Critical/premium service tier is for high-tier applications that require high transaction rates, low-
latency I/O, and a high level of resiliency. Secondary replicas are available for failover and to offload read
workloads.
Hyperscale service tier is for databases that have growing data volumes and need to automatically scale up
to 100 TB in database size. It's designed for very large databases.

IMPORTANT
Transaction log rate is governed in Azure SQL Database to limit high ingestion rates. As such, during migration, you might
have to scale target database resources (vCores or DTUs) to ease pressure on CPU or throughput. Choose the
appropriately sized target database, but plan to scale resources up for the migration if necessary.
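For example, if you're using the vCore purchasing model, you could scale the target up for the migration and back down afterward with ALTER DATABASE. This is only a sketch: the database name and the service objective names below (BC_Gen5_8, GP_Gen5_4) are placeholders, so substitute the values that match your chosen target.

-- Run against the master database of the logical server.
-- Scale the target up before the migration starts.
ALTER DATABASE [TargetDb] MODIFY (EDITION = 'BusinessCritical', SERVICE_OBJECTIVE = 'BC_Gen5_8');

-- After the migration completes, scale back down to the steady-state size.
ALTER DATABASE [TargetDb] MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_4');

The same change can also be made in the Azure portal or with PowerShell; the scale operation is online, but connections may be dropped briefly when it completes.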

SQL Server VM alternative


Your business might have requirements that make SQL Server on Azure Virtual Machines a more suitable target
than Azure SQL Database.
If one of the following conditions applies to your business, consider moving to a SQL Server virtual machine
(VM) instead:
You require direct access to the operating system or file system, such as to install third-party or custom
agents on the same virtual machine with SQL Server.
You have strict dependency on features that are still not supported, such as FileStream/FileTable, PolyBase,
and cross-instance transactions.
You need to stay at a specific version of SQL Server (2012, for example).
Your compute requirements are much lower than a managed instance offers (one vCore, for example), and
database consolidation is not an acceptable option.

Migration tools
We recommend the following migration tools:

TECHNOLOGY | DESCRIPTION
Azure Migrate | This Azure service helps you discover and assess your SQL data estate at scale on VMware. It provides Azure SQL deployment recommendations, target sizing, and monthly estimates.
Data Migration Assistant | This desktop tool from Microsoft provides seamless assessments of SQL Server and single-database migrations to Azure SQL Database (both schema and data). The tool can be installed on a server on-premises or on your local machine that has connectivity to your source databases. The migration process is a logical data movement between objects in the source and target databases.
Azure Database Migration Service | This Azure service can migrate SQL Server databases to Azure SQL Database through the Azure portal or automatically through PowerShell. Database Migration Service requires you to select a preferred Azure virtual network during provisioning to ensure connectivity to your source SQL Server databases. You can migrate single databases or at scale.

The following table lists alternative migration tools:

TECHNOLOGY | DESCRIPTION
Transactional replication | Replicate data from source SQL Server database tables to Azure SQL Database by providing a publisher-subscriber type migration option while maintaining transactional consistency. Incremental data changes are propagated to subscribers as they occur on the publishers.
Import Export Service/BACPAC | BACPAC is a Windows file with a .bacpac extension that encapsulates a database's schema and data. You can use BACPAC to both export data from a SQL Server source and import the data into Azure SQL Database. A BACPAC file can be imported to a new SQL database through the Azure portal. For scale and performance with large database sizes or a large number of databases, consider using the SqlPackage command-line tool to export and import databases.
Bulk copy | The bulk copy program (bcp) tool copies data from an instance of SQL Server into a data file. Use the tool to export the data from your source and import the data file into the target SQL database. For high-speed bulk copy operations to move data to Azure SQL Database, you can use the Smart Bulk Copy tool to maximize transfer speed by taking advantage of parallel copy tasks.
Azure Data Factory | The Copy activity in Azure Data Factory migrates data from source SQL Server databases to Azure SQL Database by using built-in connectors and an integration runtime. Data Factory supports a wide range of connectors to move data from SQL Server sources to Azure SQL Database.
SQL Data Sync | SQL Data Sync is a service built on Azure SQL Database that lets you synchronize selected data bidirectionally across multiple databases, both on-premises and in the cloud. Data Sync is useful in cases where data needs to be kept updated across several databases in Azure SQL Database or SQL Server.

Compare migration options


Compare migration options to choose the path that's appropriate to your business needs.
The following table compares the migration options that we recommend:

Data Migration Assistant
When to use:
- Migrate single databases (both schema and data).
- Can accommodate downtime during the data migration process.
Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Migration activity performs data movement between database objects (from source to target), so we recommend that you run it during off-peak times.
- Data Migration Assistant reports the status of migration per database object, including the number of rows migrated.
- For large migrations (number of databases or size of database), use Azure Database Migration Service.

Azure Database Migration Service
When to use:
- Migrate single databases or at scale.
- Can run in both online (minimal downtime) and offline modes.
Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Migrations at scale can be automated via PowerShell.
- Time to complete migration depends on database size and the number of objects in the database.
- Requires the source database to be set as read-only.

The following table compares the alternative migration options:


Transactional replication
When to use:
- Migrate by continuously publishing changes from source database tables to target SQL Database tables.
- Do full or partial database migrations of selected tables (subset of a database).
Supported sources: SQL Server (2016 to 2019) with some limitations, AWS EC2, GCP Compute SQL Server VM.
Considerations:
- Setup is relatively complex compared to other migration options.
- Provides a continuous replication option to migrate data (without taking the databases offline).
- Transactional replication has limitations to consider when you're setting up the publisher on the source SQL Server instance. See Limitations on publishing objects to learn more.
- It's possible to monitor replication activity.

Import Export Service/BACPAC
When to use:
- Migrate individual line-of-business application databases.
- Suited for smaller databases.
- Does not require a separate migration service or tool.
Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Requires downtime because data needs to be exported at the source and imported at the destination.
- The file formats and data types used in the export or import need to be consistent with table schemas to avoid truncation or data-type mismatch errors.
- Time taken to export a database with a large number of objects can be significantly higher.

Bulk copy
When to use:
- Do full or partial data migrations.
- Can accommodate downtime.
Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Requires downtime for exporting data from the source and importing into the target.
- The file formats and data types used in the export or import need to be consistent with table schemas.

Azure Data Factory
When to use:
- Migrate and/or transform data from source SQL Server databases.
- Merging data from multiple sources of data to Azure SQL Database is typically for business intelligence (BI) workloads.
Considerations:
- Requires creating data movement pipelines in Data Factory to move data from source to destination.
- Cost is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement.

SQL Data Sync
When to use:
- Synchronize data between source and target databases.
- Suitable to run continuous sync between Azure SQL Database and on-premises SQL Server in a bidirectional flow.
Considerations:
- Azure SQL Database must be the hub database for sync, with an on-premises SQL Server database as a member database.
- Compared to transactional replication, SQL Data Sync supports bidirectional data sync between on-premises and Azure SQL Database.
- Can have a higher performance impact, depending on the workload.

Feature interoperability
There are more considerations when you're migrating workloads that rely on other SQL Server features.
SQL Server Integration Services
Migrate SQL Server Integration Services (SSIS) packages to Azure by redeploying the packages to the Azure-
SSIS runtime in Azure Data Factory. Azure Data Factory supports migration of SSIS packages by providing a
runtime built to run SSIS packages in Azure. Alternatively, you can rewrite the SSIS ETL (extract, transform, load)
logic natively in Azure Data Factory by using data flows.
SQL Server Reporting Services
Migrate SQL Server Reporting Services (SSRS) reports to paginated reports in Power BI. Use the RDL Migration
Tool to help prepare and migrate your reports. Microsoft developed this tool to help customers migrate Report
Definition Language (RDL) reports from their SSRS servers to Power BI. It's available on GitHub, and it
documents an end-to-end walkthrough of the migration scenario.
High availability
Manual setup of SQL Server high-availability features like Always On failover cluster instances and Always On
availability groups becomes obsolete on the target SQL database. High-availability architecture is already built
into both General Purpose (standard availability model) and Business Critical (premium availability model)
service tiers for Azure SQL Database. The Business Critical/premium service tier also provides read scale-out
that allows connecting into one of the secondary nodes for read-only purposes.
Beyond the high-availability architecture that's included in Azure SQL Database, the auto-failover groups feature
allows you to manage the replication and failover of databases in a managed instance to another region.
Logins and groups
Windows logins are not supported in Azure SQL Database; create an Azure Active Directory login instead.
Manually recreate any SQL logins.
SQL Agent jobs
SQL Agent jobs are not directly supported in Azure SQL Database and need to be deployed to elastic database
jobs (preview).
System databases
For Azure SQL Database, the only applicable system databases are master and tempdb. To learn more, see
Tempdb in Azure SQL Database.

Advanced features
Be sure to take advantage of the advanced cloud-based features in SQL Database. For example, you don't need
to worry about managing backups because the service does it for you. You can restore to any point in time
within the retention period.
To strengthen security, consider using Azure AD authentication, auditing, threat detection, row-level security,
and dynamic data masking.
In addition to advanced management and security features, SQL Database provides tools that can help you
monitor and tune your workload. Azure SQL Analytics (Preview) is an advanced solution for monitoring the
performance of all of your databases in Azure SQL Database at scale and across multiple subscriptions in a
single view. Azure SQL Analytics collects and visualizes key performance metrics with built-in intelligence for
performance troubleshooting.
Automatic tuning continuously monitors performance of your SQL execution plan and automatically fixes
identified performance issues.

Migration assets
For more assistance, see the following resources that were developed for real-world migration projects.
ASSET | DESCRIPTION
Data workload assessment model and tool | This tool provides suggested "best fit" target platforms, cloud readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform decision process for target platforms.
Bulk database creation with PowerShell | You can use a set of three PowerShell scripts that create a resource group (create_rg.ps1), the logical server in Azure (create_sqlserver.ps1), and a SQL database (create_sqldb.ps1). The scripts include loop capabilities so you can iterate and create as many servers and databases as necessary.
Bulk schema deployment with MSSQL-Scripter and PowerShell | This asset creates a resource group, creates one or multiple logical servers in Azure to host Azure SQL Database, exports every schema from an on-premises SQL Server instance (or multiple SQL Server 2005+ instances), and imports the schemas to Azure SQL Database.
Convert SQL Server Agent jobs into elastic database jobs | This script migrates your source SQL Server Agent jobs to elastic database jobs.
Utility to move on-premises SQL Server logins to Azure SQL Database | A PowerShell script can create a T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL Database. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD accounts, along with optionally migrating SQL Server native logins.
Perfmon data collection automation by using Logman | You can use the Logman tool to collect Perfmon data (to help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe to create the command that will create, start, stop, and delete performance counters set on a remote SQL Server instance.

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
To start migrating your SQL Server databases to Azure SQL Database, see the SQL Server to Azure SQL
Database migration guide.
For a matrix of services and tools that can help you with database and data migration scenarios as well as
specialty tasks, see Services and tools for data migration.
To learn more about SQL Database, see:
Overview of Azure SQL Database
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit (Preview).
For details on how to perform A/B testing for the data access layer, see Database Experimentation
Assistant.
Migration guide: SQL Server to Azure SQL
Database
7/12/2022

APPLIES TO: Azure SQL Database


In this guide, you learn how to migrate your SQL Server instance to Azure SQL Database.
You can migrate SQL Server running on-premises or on:
SQL Server on Virtual Machines
Amazon Web Services (AWS) EC2
Amazon Relational Database Service (AWS RDS)
Compute Engine (Google Cloud Platform - GCP)
Cloud SQL for SQL Server (Google Cloud Platform – GCP)
For more migration information, see the migration overview. For other migration guides, see Database
Migration.

Prerequisites
For your SQL Server migration to Azure SQL Database, make sure you have:
Chosen migration method and corresponding tools .
Installed Data Migration Assistant (DMA) on a machine that can connect to your source SQL Server.
Created a target Azure SQL Database.
Configured connectivity and proper permissions to access both source and target.
Reviewed the database engine features available in Azure SQL Database.

Pre-migration
After you've verified that your source environment is supported, start with the pre-migration stage. Discover all
of the existing data sources, assess migration feasibility, and identify any blocking issues that might prevent
your Azure cloud migration.
Discover
In the Discover phase, scan the network to identify all SQL Server instances and features used by your
organization.
Use Azure Migrate to assess migration suitability of on-premises servers, perform performance-based sizing,
and provide cost estimations for running them in Azure.
Alternatively, use the Microsoft Assessment and Planning Toolkit (the "MAP Toolkit") to assess your current IT
infrastructure. The toolkit provides a powerful inventory, assessment, and reporting tool to simplify the
migration planning process.
For more information about tools available to use for the Discover phase, see Services and tools available for
data migration scenarios.
Assess

NOTE
If you are assessing the entire SQL Server data estate at scale on VMware, use Azure Migrate to get Azure SQL
deployment recommendations, target sizing, and monthly estimates.

After data sources have been discovered, assess any on-premises SQL Server database(s) that can be migrated
to Azure SQL Database to identify migration blockers or compatibility issues.
You can use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
Azure target recommendations
Azure SKU recommendations
To assess your environment using the Data Migration Assistant, follow these steps:
1. Open the Data Migration Assistant (DMA).
2. Select File and then choose New assessment .
3. Specify a project name, select SQL Server as the source server type, and then select Azure SQL Database as the
target server type.
4. Select the type(s) of assessment reports that you want to generate. For example, database compatibility and
feature parity. Based on the type of assessment, the permissions required on the source SQL Server can be
different. DMA will highlight the permissions required for the chosen advisor before running the assessment.
The feature parity category provides a comprehensive set of recommendations, alternatives
available in Azure, and mitigating steps to help you plan your migration project. (sysadmin
permissions required)
The compatibility issues category identifies partially supported or unsupported feature
compatibility issues that might block migration as well as recommendations to address them (
CONNECT SQL , VIEW SERVER STATE , and VIEW ANY DEFINITION permissions required).
5. Specify the source connection details for your SQL Server and connect to the source database.
6. Select Start assessment.
7. After the process completes, select and review the assessment reports for migration blocking and feature
parity issues. The assessment report can also be exported to a file that can be shared with other teams or
personnel in your organization.
8. Determine the database compatibility level that minimizes post-migration efforts.
9. Identify the best Azure SQL Database SKU for your on-premises workload.
To learn more, see Perform a SQL Server migration assessment with Data Migration Assistant.
If the assessment encounters multiple blockers that confirm your database is not ready for an Azure SQL
Database migration, then alternatively consider:
Azure SQL Managed Instance if there are multiple instance-scoped dependencies
SQL Server on Azure Virtual Machines if both SQL Database and SQL Managed Instance fail to be suitable
targets.
Scaled Assessments and Analysis
Data Migration Assistant supports performing scaled assessments and consolidation of the assessment reports
for analysis.
If you have multiple servers and databases that need to be assessed and analyzed at scale to provide a wider
view of the data estate, see the following links to learn more:
Performing scaled assessments using PowerShell
Analyzing assessment reports using Power BI

IMPORTANT
Running assessments at scale for multiple databases, especially large ones, can also be automated using the DMA
Command Line Utility and uploaded to Azure Migrate for further analysis and target readiness.

Migrate
After you have completed tasks associated with the Pre-migration stage, you are ready to perform the schema
and data migration.
Migrate your data using your chosen migration method.
This guide describes the two most popular options - Data Migration Assistant and Azure Database Migration
Service.
Data Migration Assistant (DMA )
To migrate a database from SQL Server to Azure SQL Database using DMA, follow these steps:
1. Download and install the Data Migration Assistant.
2. Create a new project and select Migration as the project type.
3. Set the source server type to SQL Server and the target server type to Azure SQL Database, select the
migration scope as Schema and data, and select Create.
4. In the migration project, specify the source server details such as the server name, credentials to connect to
the server and the source database to migrate.
5. In the target server details, specify the Azure SQL Database server name, credentials to connect to the server
and the target database to migrate to.
6. Select the schema objects and deploy them to the target Azure SQL Database.
7. Finally, select Start data migration and monitor the progress of migration.
For a detailed tutorial, see Migrate on-premises SQL Server or SQL Server on Azure VMs to Azure SQL
Database using the Data Migration Assistant.

NOTE
Scale your database to a higher service tier and compute size during the import process to maximize import speed by
providing more resources. You can then scale down after the import is successful.
The compatibility level of the imported database is based on the compatibility level of your source database.

Azure Database Migration Service (DMS )


To migrate databases from SQL Server to Azure SQL Database using DMS, follow the steps below:
1. If you haven't already, register the Microsoft.DataMigration resource provider in your subscription.
2. Create an Azure Database Migration Service Instance in a desired location of your choice (preferably in the
same region as your target Azure SQL Database). Select an existing virtual network or create a new one to
host your DMS instance.
3. After your DMS instance is created, create a new migration project and specify the source server type as SQL
Server and the target server type as Azure SQL Database. Choose Offline data migration as the activity
type in the migration project creation blade.
4. Specify the source SQL Server details on the Migration source details page and the target Azure SQL
Database details on the Migration target details page.
5. Map the source and target databases for migration and then select the tables you want to migrate.
6. Review the migration summary and select Run migration . You can then monitor the migration activity and
check the progress of your database migration.
For a detailed tutorial, see Migrate SQL Server to an Azure SQL Database using DMS.

Data sync and cutover


When using migration options that continuously replicate / sync data changes from source to the target, the
source data and schema can change and drift from the target. During data sync, ensure that all changes on the
source are captured and applied to the target during the migration process.
After you verify that the data is the same on both the source and the target, you can cut over from the source to the
target environment. It is important to plan the cutover process with business and application teams to ensure
that the brief interruption during cutover does not affect business continuity.

IMPORTANT
For details on the specific steps associated with performing a cutover as part of migrations using DMS, see Performing
migration cutover.

Migration recommendations
To speed up migration to Azure SQL Database, you should consider the following recommendations:

RESOURCE | CONTENTION | RECOMMENDATION
Source (typically on premises) | The primary bottleneck during migration at the source is DATA I/O and latency on the DATA file, which needs to be monitored carefully. | Based on DATA I/O and DATA file latency, and depending on whether it's a virtual machine or a physical server, you will have to engage your storage admin and explore options to mitigate the bottleneck.
Target (Azure SQL Database) | The biggest limiting factor is the log generation rate and latency on the log file. With Azure SQL Database, you can get a maximum of 96-MB/s log generation rate. | To speed up migration, scale up the target SQL database to Business Critical Gen5 8 vCore to get the maximum log generation rate of 96 MB/s and also achieve low latency for the log file. The Hyperscale service tier provides a 100-MB/s log rate regardless of the chosen service level.
Network | Network bandwidth needed is equal to the maximum log ingestion rate of 96 MB/s (768 Mb/s). | Depending on network connectivity from your on-premises data center to Azure, check your network bandwidth (typically Azure ExpressRoute) to accommodate the maximum log ingestion rate.
Virtual machine used for Data Migration Assistant (DMA) | CPU is the primary bottleneck for the virtual machine running DMA. | To speed up data migration: use Azure compute-intensive VMs, use at least an F8s_v2 (8 vCore) VM for running DMA, and ensure the VM is running in the same Azure region as the target.
Azure Database Migration Service (DMS) | Compute resource contention and database objects consideration for DMS. | Use the Premium 4 vCore tier. DMS automatically takes care of database objects like foreign keys, triggers, constraints, and non-clustered indexes, and doesn't need manual intervention.

Post-migration
After you have successfully completed the migration stage, go through a series of post-migration tasks to ensure
that everything is functioning smoothly and efficiently.
The post-migration phase is crucial for reconciling any data accuracy issues and verifying completeness, as well
as addressing performance issues with the workload.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will, in some cases, require changes to the applications.
Perform tests
The test approach for database migration consists of the following activities:
1. Develop validation tests: To test database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you have defined (a simple example follows this list).
2. Set up test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests: Run performance tests against the source and the target, and then analyze and
compare the results.
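For example, a minimal validation query compares row counts and a simple checksum for a table on both sides; the table name dbo.SalesOrder is a placeholder for whatever is in your defined scope.

-- Run against both the source SQL Server database and the target Azure SQL database,
-- then compare the two result sets.
SELECT COUNT_BIG(*)              AS TotalRows,
       CHECKSUM_AGG(CHECKSUM(*)) AS TableChecksum
FROM dbo.SalesOrder;

Checksums are sensitive to data type changes, so if the schema was remediated during migration, compare the business-critical columns explicitly instead.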

Leverage advanced features


Be sure to take advantage of the advanced cloud-based features offered by SQL Database, such as built-in high
availability, threat detection, and monitoring and tuning your workload.
Some SQL Server features are only available once the database compatibility level is changed to the latest
compatibility level (150).
To learn more, see Managing Azure SQL Database after migration.
Next steps
For a matrix of the Microsoft and third-party services and tools that are available to assist you with
various database and data migration scenarios as well as specialty tasks, see Service and tools for data
migration.
To learn more about Azure Migrate see
Azure Migrate
To learn more about SQL Database see:
An Overview of Azure SQL Database
Azure total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for Cloud migrations, see
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
Cloud Migration Resources
To assess the Application access layer, see Data Access Migration Toolkit (Preview)
For details on how to perform Data Access Layer A/B testing see Database Experimentation Assistant.
Assessment rules for SQL Server to Azure SQL
Database migration
7/12/2022

APPLIES TO: Azure SQL Database


Migration tools validate your source SQL Server instance by running a number of assessment rules to identify
issues that must be addressed before migrating your SQL Server database to Azure SQL Database.
This article provides a list of the rules used to assess the feasibility of migrating your SQL Server database to
Azure SQL Database.

Rules Summary
RULE TITLE | LEVEL | CATEGORY | DETAILS
AgentJobs | Instance | Warning | SQL Server Agent jobs aren't available in Azure SQL Database.
BulkInsert | Database | Issue | BULK INSERT with non-Azure blob data source isn't supported in Azure SQL Database.
ClrAssemblies | Database | Issue | SQL CLR assemblies aren't supported in Azure SQL Database.
ComputeClause | Database | Warning | COMPUTE clause is no longer supported and has been removed.
CrossDatabaseReferences | Database | Issue | Cross-database queries aren't supported in Azure SQL Database.
CryptographicProvider | Database | Issue | A use of CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER was found, which isn't supported in Azure SQL Database.
DatabaseMail | Instance | Warning | Database Mail isn't supported in Azure SQL Database.
DatabasePrincipalAlias | Database | Issue | SYS.DATABASE_PRINCIPAL_ALIASES is no longer supported and has been removed.
DbCompatLevelLowerThan100 | Database | Warning | Azure SQL Database doesn't support compatibility levels below 100.
DisableDefCNSTCHK | Database | Issue | SET option DISABLE_DEF_CNST_CHK is no longer supported and has been removed.
FastFirstRowHint | Database | Warning | FASTFIRSTROW query hint is no longer supported and has been removed.
FileStream | Database | Issue | Filestream isn't supported in Azure SQL Database.
LinkedServer | Database | Issue | Linked server functionality isn't supported in Azure SQL Database.
MSDTCTransactSQL | Database | Issue | BEGIN DISTRIBUTED TRANSACTION isn't supported in Azure SQL Database.
NextColumn | Database | Issue | Tables and columns named NEXT will lead to an error in Azure SQL Database.
NonANSILeftOuterJoinSyntax | Database | Warning | Non-ANSI style left outer join is no longer supported and has been removed.
NonANSIRightOuterJoinSyntax | Database | Warning | Non-ANSI style right outer join is no longer supported and has been removed.
OpenRowsetWithNonBlobDataSourceBulk | Database | Issue | OpenRowSet used in bulk operation with non-Azure blob storage data source isn't supported in Azure SQL Database.
OpenRowsetWithSQLAndNonSQLProvider | Database | Issue | OpenRowSet with SQL or non-SQL provider isn't supported in Azure SQL Database.
RAISERROR | Database | Warning | Legacy style RAISERROR calls should be replaced with modern equivalents.
ServerAudits | Instance | Warning | Server Audits isn't supported in Azure SQL Database.
ServerCredentials | Instance | Warning | Server scoped credential isn't supported in Azure SQL Database.
ServerScopedTriggers | Instance | Warning | Server-scoped trigger isn't supported in Azure SQL Database.
ServiceBroker | Database | Issue | Service Broker feature isn't supported in Azure SQL Database.
SQLDBDatabaseSize | Database | Issue | Azure SQL Database does not support database size greater than 100 TB.
SqlMail | Database | Warning | SQL Mail has been discontinued.
SystemProcedures110 | Database | Warning | Detected statements that reference removed system stored procedures that aren't available in Azure SQL Database.
TraceFlags | Instance | Warning | Azure SQL Database does not support trace flags.
WindowsAuthentication | Instance | Warning | Database users mapped with Windows authentication (integrated security) aren't supported in Azure SQL Database.
XpCmdshell | Database | Issue | xp_cmdshell isn't supported in Azure SQL Database.

Bulk insert
Title: BULK INSERT with non-Azure blob data source isn't supported in Azure SQL Database.
Category: Issue
Description
Azure SQL Database cannot access file shares or Windows folders. See the "Impacted Objects" section for the
specific uses of BULK INSERT statements that do not reference an Azure blob. Objects with 'BULK INSERT' where
the source isn't Azure blob storage will not work after migrating to Azure SQL Database.
Recommendation
You will need to convert BULK INSERT statements that use local files or file shares to use files from Azure blob
storage instead, when migrating to Azure SQL Database. Alternatively, migrate to SQL Server on Azure Virtual
Machine.
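A minimal sketch of that conversion, assuming the source file has been uploaded to a blob container that is reachable with a shared access signature; the credential, data source, file, and table names are placeholders.

-- Requires an existing database master key.
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token without the leading question mark>';

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://<storageaccount>.blob.core.windows.net/<container>',
      CREDENTIAL = BlobCredential);

-- BULK INSERT now reads from the blob container instead of a local path or file share.
BULK INSERT dbo.StagingTable
FROM 'data/orders.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);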

Compute clause
Title: COMPUTE clause is no longer supported and has been removed.
Category: Warning
Description
The COMPUTE clause generates totals that appear as additional summary columns at the end of the result set.
However, this clause is no longer supported in Azure SQL Database.
Recommendation
The T-SQL module needs to be rewritten using the ROLLUP operator instead. The code below demonstrates
how COMPUTE can be replaced with ROLLUP:

USE AdventureWorks;
GO

-- Original query that uses the discontinued COMPUTE clause:
SELECT SalesOrderID, UnitPrice, UnitPriceDiscount
FROM Sales.SalesOrderDetail
ORDER BY SalesOrderID
COMPUTE SUM(UnitPrice), SUM(UnitPriceDiscount) BY SalesOrderID;
GO

-- Equivalent query rewritten with ROLLUP:
SELECT SalesOrderID, UnitPrice, UnitPriceDiscount,
       SUM(UnitPrice) AS UnitPrice,
       SUM(UnitPriceDiscount) AS UnitPriceDiscount
FROM Sales.SalesOrderDetail
GROUP BY SalesOrderID, UnitPrice, UnitPriceDiscount WITH ROLLUP;

More information: Discontinued Database Engine functionality in SQL Server

CLR assemblies
Title: SQL CLR assemblies aren't supported in Azure SQL Database
Category: Issue
Description
Azure SQL Database does not support SQL CLR assemblies.
Recommendation
Currently, there is no way to achieve this in Azure SQL Database. The recommended alternative solutions will
require application code and database changes to use only assemblies supported by Azure SQL Database.
Alternatively migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machine
More information: Unsupported Transact-SQL differences in SQL Database

Cryptographic provider
Title: A use of CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER was found, which isn't supported in Azure SQL Database
Category: Issue
Description
Azure SQL Database does not support CRYPTOGRAPHIC PROVIDER statements because it cannot access files.
See the Impacted Objects section for the specific uses of CRYPTOGRAPHIC PROVIDER statements. Objects with
CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER will not work correctly after migrating to Azure
SQL Database.
Recommendation
Review objects with CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER . In any such objects that
are required, remove the uses of these features. Alternatively, migrate to SQL Server on Azure Virtual Machine

Cross database references


Title: Cross-database queries aren't supported in Azure SQL Database
Category: Issue
Description
Databases on this server use cross-database queries, which aren't supported in Azure SQL Database.
Recommendation
Azure SQL Database does not support cross-database queries. The following actions are recommended:
Migrate the dependent database(s) to Azure SQL Database and use Elastic Database Query (currently in
preview) functionality to query across Azure SQL databases (see the sketch below).
Move the dependent datasets from other databases into the database that is being migrated.
Migrate to Azure SQL Managed Instance.
Migrate to SQL Server on Azure Virtual Machine.
More information: Check Azure SQL Database elastic database query (Preview)
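A minimal sketch of the elastic query approach, assuming a remote Azure SQL database named RemoteDb on the same logical server that contains a table dbo.Customers; all names, credentials, and columns are placeholders.

-- Run in the database that issues the cross-database query.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteDbCredential
WITH IDENTITY = '<sql login>', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE RemoteDbSource
WITH (TYPE = RDBMS,
      LOCATION = '<servername>.database.windows.net',
      DATABASE_NAME = 'RemoteDb',
      CREDENTIAL = RemoteDbCredential);

-- The external table mirrors the schema of the remote table.
CREATE EXTERNAL TABLE dbo.Customers
(
    CustomerId INT NOT NULL,
    CustomerName NVARCHAR(100) NOT NULL
)
WITH (DATA_SOURCE = RemoteDbSource);

-- Queries against dbo.Customers are now forwarded to RemoteDb.
SELECT TOP (10) CustomerId, CustomerName FROM dbo.Customers;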

Database compatibility
Title: Azure SQL Database doesn't support compatibility levels below 100.
Category: Warning
Description
Database compatibility level is a valuable tool to assist in database modernization, by allowing the SQL Server
Database Engine to be upgraded, while keeping connecting applications functional status by maintaining the
same pre-upgrade database compatibility level. Azure SQL Database doesn't support compatibility levels below
100.
Recommendation
Evaluate if the application functionality is intact when the database compatibility level is upgraded to 100 on
Azure SQL Managed Instance. Alternatively, migrate to SQL Server on Azure Virtual Machine
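Compatibility level is a per-database setting, so after migration you can check and raise it with T-SQL and then retest the application; the database name is a placeholder.

-- Check the current compatibility level.
SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDb';

-- Raise it to a level supported by Azure SQL Database (100 or higher).
ALTER DATABASE [MyDb] SET COMPATIBILITY_LEVEL = 150;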

Database mail
Title: Database Mail isn't supported in Azure SQL Database.
Category: Warning
Description
This server uses the Database Mail feature, which isn't supported in Azure SQL Database.
Recommendation
Consider migrating to Azure SQL Managed Instance that supports Database Mail. Alternatively, consider using
Azure functions and Sendgrid to accomplish mail functionality on Azure SQL Database.

Database principal alias


Title: SYS.DATABASE_PRINCIPAL_ALIASES is no longer supported and has been removed.
Category: Issue
Description
SYS.DATABASE_PRINCIPAL_ALIASES is no longer supported and has been removed in Azure SQL Database.
Recommendation
Use roles instead of aliases.
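For example, a database role plus a role membership covers the typical uses of an alias; the role, schema, and user names are placeholders.

-- Create a role and grant it the permissions the alias provided.
CREATE ROLE ReportingReaders;
GRANT SELECT ON SCHEMA::dbo TO ReportingReaders;

-- Add the database user to the role.
ALTER ROLE ReportingReaders ADD MEMBER [ReportingUser];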
More information: Discontinued Database Engine functionality in SQL Server
DISABLE_DEF_CNST_CHK option
Title: SET option DISABLE_DEF_CNST_CHK is discontinued and has been removed.
Category: Issue
Description
SET option DISABLE_DEF_CNST_CHK is discontinued and has been removed in Azure SQL Database.
More information: Discontinued Database Engine functionality in SQL Server

FASTFIRSTROW hint
Title: FASTFIRSTROW query hint is no longer supported and has been removed.
Category: Warning
Description
FASTFIRSTROW query hint is no longer supported and has been removed in Azure SQL Database.
Recommendation
Instead of FASTFIRSTROW query hint use OPTION (FAST n).
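For example (table, column, and predicate are illustrative):

-- Old table hint, no longer supported:
-- SELECT OrderId, OrderDate FROM dbo.Orders (FASTFIRSTROW) WHERE CustomerId = 42;

-- Supported query hint with the same intent:
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = 42
OPTION (FAST 1);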
More information: Discontinued Database Engine functionality in SQL Server

FileStream
Title: Filestream isn't supported in Azure SQL Database
Category: Issue
Description
The Filestream feature, which allows you to store unstructured data such as text documents, images, and videos
in NTFS file system, isn't supported in Azure SQL Database.
Recommendation
Upload the unstructured files to Azure Blob storage and store metadata related to these files (name, type, URL
location, storage key etc.) in Azure SQL Database. You may have to re-engineer your application to enable
streaming blobs to and from Azure SQL Database. Alternatively, migrate to SQL Server on Azure Virtual
Machine.
More information: Streaming blobs to and from Azure SQL blog

Linked server
Title: Linked server functionality isn't supported in Azure SQL Database
Category: Issue
Description
Linked servers enable the SQL Server Database Engine to execute commands against OLE DB data sources
outside of the instance of SQL Server.
Recommendation
Azure SQL Database does not support linked server functionality. The following actions are recommended to
eliminate the need for linked servers:
Identify the dependent datasets from remote SQL servers and consider moving these into the database
being migrated.
Migrate the dependent database(s) to Azure and use Elastic Database Query (preview) functionality to query
across databases in Azure SQL Database.
More information: Check Azure SQL Database elastic query (Preview)

MS DTC
Title: BEGIN DISTRIBUTED TRANSACTION isn't supported in Azure SQL Database.
Category: Issue
Description
Distributed transactions started by the Transact-SQL BEGIN DISTRIBUTED TRANSACTION statement and managed by the
Microsoft Distributed Transaction Coordinator (MS DTC) aren't supported in Azure SQL Database.
Recommendation
Review the impacted objects section in Azure Migrate to see all objects that use BEGIN DISTRIBUTED TRANSACTION.
Consider migrating the participant databases to Azure SQL Managed Instance where distributed transactions
across multiple instances are supported (Currently in preview). Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Transactions across multiple servers for Azure SQL Managed Instance

OPENROWSET (bulk)
Title: OpenRowSet used in bulk operation with non-Azure blob storage data source isn't supported in Azure SQL Database.
Category: Issue
Description
OPENROWSET supports bulk operations through a built-in BULK provider that enables data from
a file to be read and returned as a rowset. OPENROWSET with non-Azure blob storage data source isn't
supported in Azure SQL Database.
Recommendation
Azure SQL Database cannot access file shares and Windows folders, so the files must be imported from Azure
blob storage. Therefore, only blob type DATASOURCE is supported in OPENROWSET function. Alternatively,
migrate to SQL Server on Azure Virtual Machine
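For example, the bulk rowset can read a file from Azure Blob storage through an external data source of TYPE = BLOB_STORAGE, such as the one shown for the BULK INSERT rule earlier; the file path and data source name are placeholders.

SELECT BulkColumn
FROM OPENROWSET(BULK 'data/invoice.txt',
                DATA_SOURCE = 'MyAzureBlobStorage',
                SINGLE_CLOB) AS document;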
More information: Resolving Transact-SQL differences during migration to SQL Database

OPENROWSET (provider)
Title: OpenRowSet with SQL or non-SQL provider isn't supported in Azure SQL Database.
Category: Issue
Description
OpenRowSet with SQL or non-SQL provider is an alternative to accessing tables in a linked server and is a one-
time, ad hoc method of connecting and accessing remote data by using OLE DB. OpenRowSet with SQL or non-
SQL provider isn't supported in Azure SQL Database.
Recommendation
Azure SQL Database supports OPENROWSET only to import from Azure blob storage. Alternatively, migrate to
SQL Server on Azure Virtual Machine
More information: Resolving Transact-SQL differences during migration to SQL Database

Non-ANSI left outer join


Title: Non-ANSI style left outer join is no longer supported and has been removed.
Category: Warning
Description
Non-ANSI style left outer join is no longer supported and has been removed in Azure SQL Database.
Recommendation
Use ANSI join syntax.
More information: Discontinued Database Engine functionality in SQL Server

Non-ANSI right outer join


Title: Non-ANSI style right outer join is no longer supported and has been removed.
Category: Warning
Description
Non-ANSI style right outer join is no longer supported and has been removed in Azure SQL Database.
Recommendation
Use ANSI join syntax.
More information: Discontinued Database Engine functionality in SQL Server

Next column
Title: Tables and columns named NEXT will lead to an error in Azure SQL Database.
Category: Issue
Description
Tables or columns named NEXT were detected. Sequences, introduced in Microsoft SQL Server, use the ANSI
standard NEXT VALUE FOR function. If a table or a column is named NEXT and the column is aliased as VALUE,
and if the ANSI standard AS is omitted, the resulting statement can cause an error.
Recommendation
Rewrite statements to include the ANSI standard AS keyword when aliasing a table or column. For example,
when a column is named NEXT and that column is aliased as VALUE, the query SELECT NEXT VALUE FROM TABLE
will cause an error and should be rewritten as SELECT NEXT AS VALUE FROM TABLE. Similarly, when a table is
named NEXT and that table is aliased as VALUE, the query SELECT Col1 FROM NEXT VALUE will cause an error and
should be rewritten as SELECT Col1 FROM NEXT AS VALUE .

RAISERROR
Title: Legacy style RAISERROR calls should be replaced with modern equivalents.
Category: Warning
Description
RAISERROR calls like the following example are termed legacy-style because they do not include the commas
and the parentheses: RAISERROR 50001 'this is a test'. This method of calling RAISERROR is no longer
supported and has been removed in Azure SQL Database.
Recommendation
Rewrite the statement using the current RAISERROR syntax, or evaluate if the modern approach of
BEGIN TRY { } END TRY BEGIN CATCH { THROW; } END CATCH is feasible.
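For example:

-- Legacy syntax (removed):
-- RAISERROR 50001 'this is a test'

-- Current RAISERROR syntax:
RAISERROR ('this is a test', 16, 1);

-- Or use THROW inside a TRY...CATCH block:
BEGIN TRY
    SELECT 1 / 0;   -- statement that fails
END TRY
BEGIN CATCH
    THROW;
END CATCH;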

More information: Discontinued Database Engine functionality in SQL Server

Server audits
Title: Use Azure SQL Database audit features to replace Server Audits
Category: Warning
Description
Server Audits isn't supported in Azure SQL Database.
Recommendation
Consider Azure SQL Database audit features to replace Server Audits. Azure SQL Database supports auditing, and the
features are richer than in SQL Server. Azure SQL Database can audit various database actions and events,
including access to data, schema changes (DDL), data changes (DML), accounts, roles, and permissions (DCL),
and security exceptions. Azure SQL Database auditing increases an organization's ability to gain deep insight into
events and changes that occur within their database, including updates and queries against the data.
Alternatively, migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
More information: Auditing for Azure SQL Database

Server credentials
Title: Server scoped credential isn't supported in Azure SQL Database
Category: Warning
Description
A credential is a record that contains the authentication information (credentials) required to connect to a
resource outside SQL Server. Azure SQL Database supports database credentials, but not the ones created at the
SQL Server scope.
Recommendation
Azure SQL Database supports database scoped credentials. Convert server scoped credentials to database
scoped credentials. Alternatively migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual
Machine
More information: Creating database scoped credential

Service Broker
Title: Service Broker feature isn't supported in Azure SQL Database
Category: Issue
Description
SQL Server Service Broker provides native support for messaging and queuing applications in the SQL Server
Database Engine. Service Broker feature isn't supported in Azure SQL Database.
Recommendation
Service Broker feature isn't supported in Azure SQL Database. Consider migrating to Azure SQL Managed
Instance that supports service broker within the same instance. Alternatively, migrate to SQL Server on Azure
Virtual Machine.

Server-scoped triggers
Title: Server-scoped trigger isn't supported in Azure SQL Database
Category: Warning
Description
A trigger is a special kind of stored procedure that executes in response to certain actions on a table, such as
insertion, deletion, or updating of data. Server-scoped triggers aren't supported in Azure SQL Database. Azure
SQL Database does not support the following options for triggers: FOR LOGON, ENCRYPTION, WITH APPEND,
NOT FOR REPLICATION, the EXTERNAL NAME option (there is no external method support), the ALL SERVER option
(DDL trigger), and triggers on a LOGON event (logon trigger). Azure SQL Database also does not support CLR triggers.
Recommendation
Use database level trigger instead. Alternatively migrate to Azure SQL Managed Instance or SQL Server on
Azure Virtual Machine
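For example, a database-scoped DDL trigger can replace a server-scoped DDL trigger for events that occur inside the migrated database; the trigger name and message are illustrative.

-- Database-scoped DDL trigger that blocks DROP TABLE in this database.
CREATE TRIGGER trg_block_drop_table
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    RAISERROR ('Dropping tables is not allowed in this database.', 16, 1);
    ROLLBACK;
END;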
More information: Resolving Transact-SQL differences during migration to SQL Database

SQL Agent jobs


Title: SQL Server Agent jobs aren't available in Azure SQL Database
Category: Warning
Description
SQL Server Agent is a Microsoft Windows service that executes scheduled administrative tasks, which are called
jobs in SQL Server. SQL Server Agent jobs aren't available in Azure SQL Database.
Recommendation
Use elastic jobs (preview), which are the replacement for SQL Server Agent jobs in Azure
SQL Database. Elastic Database jobs for Azure SQL Database allow you to reliably execute T-SQL scripts that
span multiple databases while automatically retrying and providing eventual completion guarantees.
Alternatively consider migrating to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines.
More information: Getting started with Elastic Database jobs (Preview)

SQL Database size


Title: Azure SQL Database does not support database size greater than 100 TB.
Category: Issue
Description
The size of the database is greater than the maximum supported size of 100 TB.
Recommendation
Evaluate if the data can be archived or compressed or sharded into multiple databases. Alternatively, migrate to
SQL Server on Azure Virtual Machine.
More information: vCore resource limits

SQL Mail
Title: SQL Mail has been discontinued.
Category: Warning
Description
SQL Mail has been discontinued and removed in Azure SQL Database.
Recommendation
Consider migrating to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines and use Database
Mail.
More information: Discontinued Database Engine functionality in SQL Server

SystemProcedures110
Title: Detected statements that reference removed system stored procedures that aren't available
in Azure SQL Database.
Category: Warning
Description
The following unsupported system and extended stored procedures cannot be used in Azure SQL Database:
sp_dboption, sp_addserver, sp_dropalias, sp_activedirectory_obj, sp_activedirectory_scp, sp_activedirectory_start.

Recommendation
Remove references to unsupported system procedures that have been removed in Azure SQL Database.
More information: Discontinued Database Engine functionality in SQL Server

Trace flags
Title: Azure SQL Database does not support trace flags
Category: Warning
Description
Trace flags are used to temporarily set specific server characteristics or to switch off a particular behavior. Trace
flags are frequently used to diagnose performance issues or to debug stored procedures or complex computer
systems. Azure SQL Database does not support trace flags.
Recommendation
Review impacted objects section in Azure Migrate to see all trace flags that aren't supported in Azure SQL
Database and evaluate if they can be removed. Alternatively, migrate to Azure SQL Managed Instance which
supports limited number of global trace flags or SQL Server on Azure Virtual Machine.
More information: Resolving Transact-SQL differences during migration to SQL Database

Windows authentication
Title: Database users mapped with Windows authentication (integrated security) aren't supported in Azure SQL Database.
Category: Warning
Description
Azure SQL Database supports two types of authentication
SQL Authentication: uses a username and password
Azure Active Directory Authentication: uses identities managed by Azure Active Directory and is supported
for managed and integrated domains.
Database users mapped with Windows authentication (integrated security) aren't supported in Azure SQL
Database.
Recommendation
Federate the local Active Directory with Azure Active Directory. The Windows identity can then be replaced with
the equivalent Azure Active Directory identities. Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: SQL Database security capabilities
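Once the directory is federated, each Windows-authenticated database user can be replaced by a contained user created from the external provider; a minimal sketch, where user@contoso.com is a placeholder Azure AD identity:

-- Run in the target Azure SQL database while connected as an Azure AD administrator
CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [user@contoso.com];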

XP_cmdshell
Title: xp_cmdshell isn't supported in Azure SQL Database
Category: Issue
Description
xp_cmdshell, which spawns a Windows command shell and passes in a string for execution, isn't supported in
Azure SQL Database.
Recommendation
Review the impacted objects section in Azure Migrate to see all objects using xp_cmdshell and evaluate whether the
reference to xp_cmdshell or the impacted object can be removed. Also consider Azure Automation, which
delivers a cloud-based automation and configuration service. Alternatively, migrate to SQL Server on Azure
Virtual Machines.

Next steps
To start migrating your SQL Server to Azure SQL Database, see the SQL Server to SQL Database migration
guide.
For a matrix of available Microsoft and third-party services and tools to assist you with various database
and data migration scenarios as well as specialty tasks, see Service and tools for data migration.
To learn more about SQL Database, see:
Overview of Azure SQL Database
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for Cloud migrations, see
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the Application access layer, see Data Access Migration Toolkit (Preview)
For details on how to perform Data Access Layer A/B testing, see Database Experimentation Assistant.
Configure and manage content reference - Azure
SQL Database
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database


In this article, you can find a content reference of various guides, scripts, and explanations that can help you
manage and configure your Azure SQL Database.

Load data
Migrate to SQL Database
Learn how to manage SQL Database after migration.
Copy a database
Import a DB from a BACPAC
Export a DB to BACPAC
Load data with BCP
Load data with ADF

Configure features
Configure Azure Active Directory (Azure AD) auth
Configure Conditional Access
Multi-factor Azure AD auth
Configure Multi-Factor Authentication
Configure backup retention for a database to keep your backups on Azure Blob Storage.
Configure geo-replication to keep a replica of your database in another region.
Configure auto-failover group to automatically failover a group of single or pooled databases to a secondary
server in another region in the event of a disaster.
Configure temporal retention policy
Configure TDE with BYOK
Rotate TDE BYOK keys
Remove TDE protector
Configure In-Memory OLTP
Configure Azure Automation
Configure transactional replication to replicate your data between databases.
Configure threat detection to let Azure SQL Database identify suspicious activities such as SQL Injection or
access from suspicious locations.
Configure dynamic data masking to protect your sensitive data.
Configure security for geo-replicas.

Monitor and tune your database


Manual tuning
Use DMVs to monitor performance
Use Query store to monitor performance
Enable automatic tuning to let Azure SQL Database optimize performance of your workload.
Enable e-mail notifications for automatic tuning to get information about tuning recommendations.
Apply performance recommendations and optimize your database.
Create alerts to get notifications from Azure SQL Database.
Troubleshoot connectivity if you notice some connectivity issues between the applications and the database.
You can also use Resource Health for connectivity issues.
Troubleshoot performance with Intelligent Insights
Manage file space to monitor storage usage in your database.
Use Intelligent Insights diagnostics log
Monitor In-memory OLTP space
Extended events
Extended events
Store Extended events into event file
Store Extended events into ring buffer

Query distributed data


Query vertically partitioned data across multiple databases.
Report across scaled-out data tier.
Query across tables with different schemas.
Data sync
SQL Data Sync
Data Sync Agent
Replicate schema changes
Monitor with OMS
Best practices for Data Sync
Troubleshoot Data Sync

Elastic Database jobs


Create and manage Elastic Database Jobs using PowerShell.
Create and manage Elastic Database Jobs using Transact-SQL.
Migrate from old Elastic job.

Database sharding
Upgrade elastic database client library.
Create sharded app.
Query horizontally sharded data.
Run Multi-shard queries.
Move sharded data.
Configure security in database shards.
Add a shard to the current set of database shards.
Fix shard map problems.
Migrate sharded DB.
Create counters.
Use entity framework to query sharded data.
Use Dapper framework to query sharded data.

Develop applications
Connectivity
Use Spark Connector
Authenticate app
Use batching for better performance
Connectivity guidance
DNS aliases
Setup DNS alias PowerShell
Ports - ADO.NET
C and C++
Excel

Design applications
Design for disaster recovery
Design for elastic pools
Design for app upgrades
Design multi-tenant software as a service (SaaS) applications
SaaS design patterns
SaaS video indexer
SaaS app security

Next steps
Learn more about How-to guides for Azure SQL Managed Instance
T-SQL differences between SQL Server and Azure
SQL Database
7/12/2022 • 5 minutes to read

When migrating your database from SQL Server to Azure SQL Database, you may discover that your SQL
Server databases require some re-engineering before they can be migrated. This article provides guidance to
assist you in both performing this re-engineering and understanding the underlying reasons why the re-
engineering is necessary. To detect incompatibilities and migrate databases to Azure SQL Database, use Data
Migration Assistant (DMA).

Overview
Most T-SQL features that applications use are fully supported in both Microsoft SQL Server and Azure SQL
Database. For example, the core SQL components such as data types, operators, string, arithmetic, logical, and
cursor functions work identically in SQL Server and SQL Database. There are, however, a few T-SQL differences
in DDL (data definition language) and DML (data manipulation language) elements resulting in T-SQL
statements and queries that are only partially supported (which we discuss later in this article).
In addition, there are some features and syntax that aren't supported at all because Azure SQL Database is
designed to isolate features from dependencies on the system databases and the operating system. As such,
most instance-level features are not supported in SQL Database. T-SQL statements and options aren't available
if they configure instance-level options, operating system components, or specify file system configuration.
When such capabilities are required, an appropriate alternative is often available in some other way from SQL
Database or from another Azure feature or service.
For example, high availability is built into Azure SQL Database. T-SQL statements related to availability groups
are not supported by SQL Database, and the dynamic management views related to Always On Availability
Groups are also not supported.
For a list of the features that are supported and unsupported by SQL Database, see Azure SQL Database feature
comparison. This page supplements that article, and focuses on T-SQL statements.

T-SQL syntax statements with partial differences


The core DDL statements are available, but DDL statement extensions related to unsupported features, such as
file placement on disk, are not supported.
In SQL Server, CREATE DATABASE and ALTER DATABASE statements have over three dozen options. The
statements include file placement, FILESTREAM, and service broker options that only apply to SQL Server.
This may not matter if you create databases in SQL Database before you migrate, but if you're migrating T-
SQL code that creates databases you should compare CREATE DATABASE (Azure SQL Database) with the SQL
Server syntax at CREATE DATABASE (SQL Server T-SQL) to make sure all the options you use are supported.
CREATE DATABASE for Azure SQL Database also has service objective and elastic pool options that apply only
to SQL Database.
The CREATE TABLE and ALTER TABLE statements have FILETABLE and FILESTREAM options that can't be used
on SQL Database because these features aren't supported.
CREATE LOGIN and ALTER LOGIN statements are supported, but do not offer all options available in SQL
Server. To make your database more portable, SQL Database encourages using contained database users
instead of logins whenever possible (see the sketch after this list). For more information, see CREATE LOGIN
and ALTER LOGIN and Manage logins and users.
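As a minimal sketch of these SQL Database-specific options (the database name, service objective, user name, and password are placeholders), a database can be created with a service objective, and a contained database user can replace a server login:

-- Run in the master database of the logical server
CREATE DATABASE MyAppDb ( EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2' );

-- Run in MyAppDb: a contained database user instead of a server login
CREATE USER app_user WITH PASSWORD = 'ReplaceWithA$trongPassword1';
ALTER ROLE db_datareader ADD MEMBER app_user;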

T-SQL syntax not supported in Azure SQL Database


In addition to T-SQL statements related to the unsupported features described in Azure SQL Database feature
comparison, the following statements and groups of statements aren't supported. As such, if your database to
be migrated is using any of the following features, re-engineer your application to eliminate these T-SQL
features and statements.
Collation of system objects.
Connection related: Endpoint statements. SQL Database doesn't support Windows authentication, but does
support Azure Active Directory authentication. This includes authentication of Active Directory principals
federated with Azure Active Directory. For more information, see Connecting to SQL Database or Azure
Synapse Analytics By Using Azure Active Directory Authentication.
Cross-database and cross-instance queries using three or four part names. Three part names referencing the
tempdb database and the current database are supported. Elastic query supports read-only references to
tables in other MSSQL databases.
Cross database ownership chaining and the TRUSTWORTHY database property.
EXECUTE AS LOGIN. Use EXECUTE AS USER instead.
Extensible key management (EKM) for encryption keys. Transparent Data Encryption (TDE) customer-
managed keys and Always Encrypted column master keys may be stored in Azure Key Vault.
Eventing: event notifications, query notifications.
File properties: Syntax related to database file name, placement, size, and other file properties automatically
managed by SQL Database.
High availability: Syntax related to high availability and database recovery, which are managed by SQL
Database. This includes syntax for backup, restore, Always On, database mirroring, log shipping, recovery
models.
Syntax related to snapshot, transactional, and merge replication, which is not available in SQL Database.
Replication subscriptions are supported.
Functions: fn_get_sql, fn_virtualfilestats, fn_virtualservernodes.
Instance configuration: Syntax related to server memory, worker threads, CPU affinity, trace flags. Use service
tiers and compute sizes instead.
KILL STATS JOB.
OPENQUERY, OPENDATASOURCE, and four-part names.
.NET Framework: CLR integration
Semantic search
Server credentials: Use database scoped credentials instead.
Server-level permissions: GRANT, REVOKE, and DENY of server-level permissions are not supported. Some
server-level permissions are replaced by database-level permissions, or granted implicitly by built-in server
roles. Some server-level DMVs and catalog views have similar database-level views.
SET REMOTE_PROC_TRANSACTIONS
SHUTDOWN
sp_addmessage
sp_configure and RECONFIGURE. ALTER DATABASE SCOPED CONFIGURATION is supported (see the sketch after this list).
sp_helpuser
sp_migrate_user_to_contained
SQL Server Agent: Syntax that relies upon the SQL Server Agent or the MSDB database: alerts, operators,
central management servers. Use scripting, such as PowerShell, instead.
SQL Server audit: Use SQL Database auditing instead.
SQL Server trace.
Trace flags.
T-SQL debugging.
Server-scoped or logon triggers.
USE statement: To change database context to a different database, you must create a new connection to that
database.
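For two of the items above, the database-scoped alternatives look like the following minimal T-SQL sketch (the user name and MAXDOP value are placeholders chosen for illustration):

-- Instead of EXECUTE AS LOGIN, impersonate a contained database user
EXECUTE AS USER = 'app_user';
SELECT USER_NAME();  -- runs under the impersonated user
REVERT;

-- Instead of sp_configure and RECONFIGURE, use a database scoped configuration
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;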

Full T-SQL reference


For more information about T-SQL grammar, usage, and examples, see T-SQL Reference (Database Engine).
About the "Applies to" tags
The T-SQL reference includes articles related to all recent SQL Server versions. Below the article title there's an
icon bar, listing MSSQL platforms and indicating applicability. For example, availability groups were introduced
in SQL Server 2012. The CREATE AVAILABILITY GROUP article indicates that the statement applies to SQL
Server (starting with 2012). The statement doesn't apply to SQL Server 2008, SQL Server 2008 R2, Azure
SQL Database, Azure Synapse Analytics, or Parallel Data Warehouse.
In some cases, the general subject of an article can be used in a product, but there are minor differences
between products. The differences are indicated at midpoints in the article as appropriate. For example, the
CREATE TRIGGER article is available in SQL Database. But the ALL SERVER option for server-level triggers
indicates that server-level triggers can't be used in SQL Database. Use database-level triggers instead.

Next steps
For a list of the features that are supported and unsupported by SQL Database, see Azure SQL Database feature
comparison.
To detect compatibility issues in your SQL Server databases before migrating to Azure SQL Database, and to
migrate your databases, use Data Migration Assistant (DMA).
Plan and manage costs for Azure SQL Database
7/12/2022 • 7 minutes to read

This article describes how you plan for and manage costs for Azure SQL Database.
First, you use the Azure pricing calculator to add Azure resources, and review the estimated costs. After you've
started using Azure SQL Database resources, use Cost Management features to set budgets and monitor costs.
You can also review forecasted costs and spending trends to identify areas where you might want to act.
Costs for Azure SQL Database are only a portion of the monthly costs in your Azure bill. Although this article
explains how to plan for and manage costs for Azure SQL Database, you're billed for all Azure services and
resources used in your Azure subscription, including any third-party services.

Prerequisites
Cost analysis supports most Azure account types, but not all of them. To view the full list of supported account
types, see Understand Cost Management data. To view cost data, you need at least read access for an Azure
account.
For information about assigning access to Azure Cost Management data, see Assign access to data.

SQL Database initial cost considerations


When working with Azure SQL Database, there are several cost-saving features to consider:
vCore or DTU purchasing models
Azure SQL Database supports two purchasing models: vCore and DTU. The way you get charged varies between
the purchasing models so it's important to understand the model that works best for your workload when
planning and considering costs. For information about vCore and DTU purchasing models, see Choose between
the vCore and DTU purchasing models.
Provisioned or serverless
In the vCore purchasing model, Azure SQL Database also supports two types of compute tiers: provisioned
throughput and serverless. The way you get charged for each compute tier varies so it's important to
understand what works best for your workload when planning and considering costs. For details, see vCore
model overview - compute tiers.
In the provisioned compute tier of the vCore-based purchasing model, you can exchange your existing licenses
for discounted rates. For details, see Azure Hybrid Benefit (AHB).
Elastic pools
For environments with multiple databases that have varying and unpredictable usage demands, elastic pools
can provide cost savings compared to provisioning the same number of single databases. For details, see Elastic
pools.

Estimate Azure SQL Database costs


Use the Azure pricing calculator to estimate costs for different Azure SQL Database configurations. For more
information, see Azure SQL Database pricing.
The information and pricing in the following image are for example purposes only:
You can also estimate how different Retention Policy options affect cost. The information and pricing in the
following image are for example purposes only:

Understand the full billing model for Azure SQL Database


Azure SQL Database runs on Azure infrastructure that accrues costs along with Azure SQL Database when you
deploy the new resource. It's important to understand that additional infrastructure might accrue cost.
Azure SQL Database (except for serverless) is billed on a predictable, hourly rate. If the SQL database is active
for less than one hour, you are billed for the highest service tier selected, provisioned storage, and IO that
applied during that hour, regardless of usage or whether the database was active for less than an hour.
Billing depends on the SKU of your product, the hardware generation of your SKU, and the meter category.
Azure SQL Database has the following possible SKUs:
Basic (B)
Standard (S)
Premium (P)
General purpose (GP)
Business Critical (BC)
And for storage: geo-redundant storage (GRS), locally redundant storage (LRS), and zone-redundant storage
(ZRS)
It's also possible to have a deprecated SKU from deprecated resource offerings
For more information, see vCore-based purchasing model, DTU-based purchasing model, or compare
purchasing models.
The following table shows the most common billing meters and their possible SKUs for single databases:

MEASUREMENT | POSSIBLE SKU(S) | DESCRIPTION
Backup* | GP/BC/HS | Measures the consumption of storage used by backups, billed by the amount of storage utilized in GB per month.
Backup (LTR) | GRS/LRS/ZRS/GF | Measures the consumption of storage used by long-term backups configured via long-term retention, billed by the amount of storage utilized.
Compute | B/S/P/GP/BC | Measures the consumption of your compute resources per hour.
Compute (primary/named replica) | HS | Measures the consumption of your compute resources per hour of your primary HS replica.
Compute (HA replica) | HS | Measures the consumption of your compute resources per hour of your secondary HS replica.
Compute (ZR add-on) | GP | Measures the consumption of your compute resources per minute of your zone-redundant add-on replica.
Compute (serverless) | GP | Measures the consumption of your serverless compute resources per minute.
License | GP/BC/HS | The billing for your SQL Server license, accrued per month.
Storage | B/S*/P*/GP/BC/HS | Billed monthly, by the amount of data stored per hour.

* In the DTU purchasing model, an initial set of storage for data and backups is provided at no additional cost.
The size of the storage depends on the service tier selected. Extra data storage can be purchased in the standard
and premium tiers. For more information, see Azure SQL Database pricing.
The following table shows the most common billing meters and their possible SKUs for elastic pools:

MEASUREMENT | POSSIBLE SKU(S) | DESCRIPTION
Backup* | GP/BC | Measures the consumption of storage used by backups, billed per GB per hour on a monthly basis.
Compute | B/S/P/GP/BC | Measures the consumption of your compute resources per hour, such as vCores and memory or DTUs.
License | GP/BC | The billing for your SQL Server license, accrued per month.
Storage | B/S*/P*/GP/HS | Billed monthly, both by the amount of data stored on the drive using storage space per hour, and the throughput in megabytes per second (MBps).

* In the DTU purchasing model, an initial set of storage for data and backups is provided at no additional cost.
The size of the storage depends on the service tier selected. Extra data storage can be purchased in the standard
and premium tiers. For more information, see Azure SQL Database pricing.
Using Monetary Credit with Azure SQL Database
You can pay for Azure SQL Database charges with your Azure Prepayment (previously called monetary
commitment) credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products
and services including those from the Azure Marketplace.

Review estimated costs in the Azure portal


As you go through the process of creating an Azure SQL Database, you can see the estimated costs during
configuration of the compute tier.
To access this screen, select Configure database on the Basics tab of the Create SQL Database page. The
information and pricing in the following image are for example purposes only:
If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As
you create and use Azure resources, your credits are used. When you reach your credit limit, the resources that
you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can
remove it. For more information about spending limits, see Azure spending limit.

Monitor costs
As you start using Azure SQL Database, you can see the estimated costs in the portal. Use the following steps to
review the cost estimate:
1. Sign in to the Azure portal and navigate to the resource group for your Azure SQL database. You can
locate the resource group by navigating to your database and selecting Resource group in the Overview
section.
2. In the menu, select Cost analysis.
3. View Accumulated costs and set the chart at the bottom to Service name. This chart shows an
estimate of your current SQL Database costs. To narrow costs for the entire page to Azure SQL Database,
select Add filter and then select Azure SQL Database. The information and pricing in the following
image are for example purposes only:

From here, you can explore costs on your own. For more and information about the different cost analysis
settings, see Start analyzing costs.
Create budgets
You can create budgets to manage costs and create alerts that automatically notify stakeholders of spending
anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds.
Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an
overall cost monitoring strategy.
Budgets can be created with filters for specific resources or services in Azure if you want more granularity
present in your monitoring. Filters help ensure that you don't accidentally create new resources. For more about
the filter options when you create a budget, see Group and filter options.

Export cost data


You can also export your cost data to a storage account. This is helpful when you need to do further data
analysis on cost. For example, a finance team can analyze the data using Excel or Power BI. You can export your
costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the
recommended way to retrieve cost datasets.

Other ways to manage and reduce costs for Azure SQL Database
Azure SQL Database also enables you to scale resources up or down to control costs based on your application
needs. For details, see Dynamically scale database resources.
Save money by committing to a reservation for compute resources for one to three years. For details, see Save
costs for resources with reserved capacity.

Next steps
Learn how to optimize your cloud investment with Azure Cost Management.
Learn more about managing costs with cost analysis.
Learn about how to prevent unexpected costs.
Take the Cost Management guided learning course.
Azure SQL Database and Azure SQL Managed
Instance connect and query articles
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


The following document includes links to Azure examples showing how to connect and query Azure SQL
Database and Azure SQL Managed Instance. For some related recommendations for Transport Level Security,
see TLS considerations for database connectivity.

Quickstarts
QUICKSTART | DESCRIPTION
SQL Server Management Studio | This quickstart demonstrates how to use SSMS to connect to a database, and then use Transact-SQL statements to query, insert, update, and delete data in the database.
Azure Data Studio | This quickstart demonstrates how to use Azure Data Studio to connect to a database, and then use Transact-SQL (T-SQL) statements to create the TutorialDB used in Azure Data Studio tutorials.
Azure portal | This quickstart demonstrates how to use the Query editor to connect to a database (Azure SQL Database only), and then use Transact-SQL statements to query, insert, update, and delete data in the database.
Visual Studio Code | This quickstart demonstrates how to use Visual Studio Code to connect to a database, and then use Transact-SQL statements to query, insert, update, and delete data in the database.
.NET with Visual Studio | This quickstart demonstrates how to use the .NET Framework to create a C# program with Visual Studio to connect to a database and use Transact-SQL statements to query data.
.NET Core | This quickstart demonstrates how to use .NET Core on Windows/Linux/macOS to create a C# program to connect to a database and use Transact-SQL statements to query data.
Go | This quickstart demonstrates how to use Go to connect to a database. Transact-SQL statements to query and modify data are also demonstrated.
Java | This quickstart demonstrates how to use Java to connect to a database and then use Transact-SQL statements to query data.
Node.js | This quickstart demonstrates how to use Node.js to create a program to connect to a database and use Transact-SQL statements to query data.
PHP | This quickstart demonstrates how to use PHP to create a program to connect to a database and use Transact-SQL statements to query data.
Python | This quickstart demonstrates how to use Python to connect to a database and use Transact-SQL statements to query data.
Ruby | This quickstart demonstrates how to use Ruby to create a program to connect to a database and use Transact-SQL statements to query data.

Get server connection information


Get the connection information you need to connect to the database in Azure SQL Database. You'll need the fully
qualified server name or host name, database name, and login information for the upcoming procedures.
1. Sign in to the Azure portal.
2. Navigate to the SQL Databases or SQL Managed Instances page.
3. On the Overview page, review the fully qualified server name next to Server name for the database in
Azure SQL Database, or the fully qualified server name (or IP address) next to Host for an Azure SQL
Managed Instance or SQL Server on Azure VM. To copy the server name or host name, hover over it and
select the Copy icon.

NOTE
For connection information for SQL Server on Azure VM, see Connect to a SQL Server instance.

Get ADO.NET connection information (optional - SQL Database only)


1. Navigate to the database blade in the Azure portal and, under Settings, select Connection strings.
2. Review the complete ADO.NET connection string.
3. Copy the ADO.NET connection string if you intend to use it.

TLS considerations for database connectivity


Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to
databases in Azure SQL Database or Azure SQL Managed Instance. No special configuration is necessary. For all
connections to a SQL Server instance, a database in Azure SQL Database, or an instance of Azure SQL Managed
Instance, we recommend that all applications set the following configurations, or their equivalents:
Encrypt = On
TrustServerCertificate = Off
Some systems use different yet equivalent keywords for those configuration keywords. These configurations
ensure that the client driver verifies the identity of the TLS certificate received from the server.
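For example, an ADO.NET-style connection string that follows these recommendations might look like the following sketch, where the server, database, and credentials are placeholders:

Server=tcp:yourserver.database.windows.net,1433;Database=yourdatabase;User ID=youruser;Password=yourpassword;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;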
We also recommend that you disable TLS 1.1 and 1.0 on the client if you need to comply with Payment Card
Industry - Data Security Standard (PCI-DSS).
Non-Microsoft drivers might not use TLS by default. This can be a factor when connecting to Azure SQL
Database or Azure SQL Managed Instance. Applications with embedded drivers might not allow you to control
these connection settings. We recommend that you examine the security of such drivers and applications before
using them on systems that interact with sensitive data.

Drivers
The following minimal versions of the tools and drivers are recommended if you want to connect to Azure SQL
database:

DRIVER/TOOL | VERSION
.NET Framework | 4.6.1 (or .NET Core)
ODBC driver | v17
PHP driver | 5.2.0
JDBC driver | 6.4.0
Node.js driver | 2.1.1
OLEDB driver | 18.0.2.0
SMO | 150 or higher

Libraries
You can use various libraries and frameworks to connect to Azure SQL Database or Azure SQL Managed
Instance. Check out our Get started tutorials to quickly get started with programming languages such as C#,
Java, Node.js, PHP, and Python. Then build an app by using SQL Server on Linux or Windows or Docker on
macOS.
The following table lists connectivity libraries or drivers that client applications can use from a variety of
languages to connect to and use SQL Server running on-premises or in the cloud. You can use them on Linux,
Windows, or Docker and use them to connect to Azure SQL Database, Azure SQL Managed Instance, and Azure
Synapse Analytics.

LANGUAGE | PLATFORM | ADDITIONAL RESOURCES | DOWNLOAD | GET STARTED
C# | Windows, Linux, macOS | Microsoft ADO.NET for SQL Server | Download | Get started
Java | Windows, Linux, macOS | Microsoft JDBC driver for SQL Server | Download | Get started
PHP | Windows, Linux, macOS | PHP SQL driver for SQL Server | Download | Get started
Node.js | Windows, Linux, macOS | Node.js driver for SQL Server | Install | Get started
Python | Windows, Linux, macOS | Python SQL driver | Install choices: pymssql, pyodbc | Get started
Ruby | Windows, Linux, macOS | Ruby driver for SQL Server | Install | Get started
C++ | Windows, Linux, macOS | Microsoft ODBC driver for SQL Server | Download |

Data-access frameworks
The following table lists examples of object-relational mapping (ORM) frameworks and web frameworks that
client applications can use with SQL Server, Azure SQL Database, Azure SQL Managed Instance, or Azure
Synapse Analytics. You can use the frameworks on Linux, Windows, or Docker.

LANGUAGE | PLATFORM | ORM(S)
C# | Windows, Linux, macOS | Entity Framework, Entity Framework Core
Java | Windows, Linux, macOS | Hibernate ORM
PHP | Windows, Linux, macOS | Laravel (Eloquent), Doctrine
Node.js | Windows, Linux, macOS | Sequelize ORM
Python | Windows, Linux, macOS | Django
Ruby | Windows, Linux, macOS | Ruby on Rails

Next steps
For connectivity architecture information, see Azure SQL Database Connectivity Architecture.
Find SQL Server drivers that are used to connect from client applications.
Connect to Azure SQL Database or Azure SQL Managed Instance:
Connect and query using .NET (C#)
Connect and query using PHP
Connect and query using Node.js
Connect and query using Java
Connect and query using Python
Connect and query using Ruby
Install sqlcmd and bcp the SQL Server command-line tools on Linux - For Linux users, try connecting
to Azure SQL Database or Azure SQL Managed Instance using sqlcmd.
Retry logic code examples:
Connect resiliently with ADO.NET
Connect resiliently with PHP
Quickstart: Use SSMS to connect to and query
Azure SQL Database or Azure SQL Managed
Instance
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


In this quickstart, you'll learn how to use SQL Server Management Studio (SSMS) to connect to Azure SQL
Database or Azure SQL Managed Instance and run some queries.

Prerequisites
Completing this quickstart requires the following items:
SQL Server Management Studio (SSMS).
A database in Azure SQL Database. You can use one of these quickstarts to create and then configure a
database in Azure SQL Database:

ACTION | SQL DATABASE | SQL MANAGED INSTANCE | SQL SERVER ON AZURE VM
Create | Portal, CLI, PowerShell | Portal, CLI, PowerShell | Portal, PowerShell
Configure | Server-level IP firewall rule | Connectivity from a VM, Connectivity from on-site | Connect to SQL Server
Load data | Adventure Works loaded per quickstart | Restore Wide World Importers, or restore or import Adventure Works from a BACPAC file from GitHub | Restore Wide World Importers, or restore or import Adventure Works from a BACPAC file from GitHub

IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a managed instance, you must
either import the Adventure Works database into an instance database or modify the scripts in this article to use
the Wide World Importers database.

If you simply want to run some ad-hoc queries without installing SSMS, see Quickstart: Use the Azure portal's
query editor to query a database in Azure SQL Database.

Get server connection information


Get the connection information you need to connect to your database. You'll need the fully qualified server
name or host name, database name, and login information to complete this quickstart.
1. Sign in to the Azure portal.
2. Navigate to the database or managed instance you want to query.
3. On the Overview page, review the fully qualified server name next to Server name for your database
in SQL Database, or the fully qualified server name (or IP address) next to Host for your managed
instance in SQL Managed Instance or your SQL Server instance on your VM. To copy the server name or
host name, hover over it and select the Copy icon.

NOTE
For connection information for SQL Server on Azure VM, see Connect to SQL Server

Connect to your database


NOTE
Beginning in December 2021, SSMS releases prior to 18.6 no longer authenticate through Azure Active Directory with MFA.
To continue using Azure Active Directory authentication with MFA, you need SSMS 18.6 or later.

In SSMS, connect to your server.

IMPORTANT
A server listens on port 1433. To connect to a server from behind a corporate firewall, the firewall must have this port
open.

1. Open SSMS.
2. The Connect to Server dialog box appears. Enter the following information:

SETTING | SUGGESTED VALUE | DESCRIPTION
Server type | Database engine | Required value.
Server name | The fully qualified server name | Something like: servername.database.windows.net.
Authentication | SQL Server Authentication | This tutorial uses SQL Authentication.
Login | Server admin account user ID | The user ID from the server admin account used to create the server.
Password | Server admin account password | The password from the server admin account used to create the server.
NOTE
This tutorial utilizes SQL Server Authentication.

3. Select Options in the Connect to Server dialog box. In the Connect to database drop-down menu,
select mySampleDatabase. Completing the quickstart in the Prerequisites section creates an
AdventureWorksLT database named mySampleDatabase. If your working copy of the AdventureWorks
database has a different name than mySampleDatabase, then select it instead.

4. Select Connect. The Object Explorer window opens.


5. To view the database's objects, expand Databases and then expand your database node.
Query data
Run this SELECT Transact-SQL code to query for the top 20 products by category.
1. In Object Explorer, right-click mySampleDatabase and select New Query. A new query window
connected to your database opens.
2. In the query window, paste the following SQL query:

SELECT pc.Name as CategoryName, p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p
ON pc.productcategoryid = p.productcategoryid;

3. On the toolbar, select Execute to run the query and retrieve data from the Product and ProductCategory
tables.
Insert data
Run this INSERT Transact-SQL code to create a new product in the SalesLT.Product table.
1. Replace the previous query with this one.

INSERT INTO [SalesLT].[Product]
( [Name]
, [ProductNumber]
, [Color]
, [ProductCategoryID]
, [StandardCost]
, [ListPrice]
, [SellStartDate] )
VALUES
('myNewProduct'
,123456789
,'NewColor'
,1
,100
,100
,GETDATE() );

2. Select Execute to insert a new row in the Product table. The Messages pane displays (1 row
affected) .
View the result
1. Replace the previous query with this one.

SELECT * FROM [SalesLT].[Product]
WHERE Name='myNewProduct'

2. Select Execute . The following result appears.


Update data
Run this UPDATE Transact-SQL code to modify your new product.
1. Replace the previous query with this one that returns the new record created previously:

UPDATE [SalesLT].[Product]
SET [ListPrice] = 125
WHERE Name = 'myNewProduct';

2. Select Execute to update the specified row in the Product table. The Messages pane displays (1 row
affected) .
Delete data
Run this DELETE Transact-SQL code to remove your new product.
1. Replace the previous query with this one.

DELETE FROM [SalesLT].[Product]
WHERE Name = 'myNewProduct';

2. Select Execute to delete the specified row in the Product table. The Messages pane displays (1 row
affected) .

Next steps
For information about SSMS, see SQL Server Management Studio.
To connect and query using the Azure portal, see Connect and query with the Azure portal SQL Query editor.
To connect and query using Visual Studio Code, see Connect and query with Visual Studio Code.
To connect and query using .NET, see Connect and query with .NET.
To connect and query using PHP, see Connect and query with PHP.
To connect and query using Node.js, see Connect and query with Node.js.
To connect and query using Java, see Connect and query with Java.
To connect and query using Python, see Connect and query with Python.
To connect and query using Ruby, see Connect and query with Ruby.
Quickstart: Use the Azure portal query editor
(preview) to query Azure SQL Database
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Database


Query editor (preview) is a tool to run SQL queries against Azure SQL Database in the Azure portal. In this
quickstart, you connect to an Azure SQL database in the portal and use query editor to run Transact-SQL (T-
SQL) queries.

Prerequisites
The AdventureWorksLT sample Azure SQL database. If you don't have it, you can create a database in
Azure SQL Database that has the AdventureWorks sample data.
A user account with permissions to connect to the database and query editor. You can either:
Have or set up a user that can connect to the database with SQL authentication.
Set up an Azure Active Directory (Azure AD) administrator for the database's SQL server.
An Azure AD server administrator can use a single identity to sign in to the Azure portal and the
SQL server and databases. To set up an Azure AD server admin:
1. In the Azure portal, on your Azure SQL database Overview page, select Server name
under Essentials to navigate to the server for your database.
2. On the server page, select Azure Active Directory in the Settings section of the left
menu.
3. On the Azure Active Directory page toolbar, select Set admin.
4. On the Azure Active Directory form, search for and select the user or group you want to
be the admin, and then select Select.
5. On the Azure Active Directory main page, select Save.
NOTE
Email addresses like outlook.com or gmail.com aren't supported as Azure AD admins. The user must
either be created natively in the Azure AD or federated into the Azure AD.
Azure AD admin sign-in works with accounts that have two-factor authentication enabled, but the
query editor doesn't support two-factor authentication.

Connect to the query editor


1. On your SQL database Overview page in the Azure portal, select Query editor (preview) from the left
menu.

2. On the sign-in screen, provide credentials to connect to the database. You can connect using SQL
authentication or Azure AD.
To connect with SQL authentication, under SQL server authentication, enter a Login and
Password for a user that has access to the database, and then select OK. You can always use the
login and password for the server admin.
To connect using Azure AD, if you're the Azure AD server admin, select Continue as <your user
or group ID>. If sign-in is unsuccessful, try refreshing the page.

Query the database


On the Query editor (preview) page, run the following example queries against your AdventureWorksLT
sample database.
Run a SELECT query
1. To query for the top 20 products in the database, paste the following SELECT query into the query editor:

SELECT TOP 20 pc.Name as CategoryName, p.name as ProductName
FROM SalesLT.ProductCategory pc
JOIN SalesLT.Product p
ON pc.productcategoryid = p.productcategoryid;

2. Select Run, and then review the output in the Results pane.
3. Optionally, you can select Save query to save the query as a .sql file, or select Export data as to
export the results as a .json, .csv, or .xml file.
Run an INSERT query
To add a new product to the SalesLT.Product table, run the following INSERT T-SQL statement.
1. In the query editor, replace the previous query with the following query:

INSERT INTO [SalesLT].[Product]
( [Name]
, [ProductNumber]
, [Color]
, [ProductCategoryID]
, [StandardCost]
, [ListPrice]
, [SellStartDate]
)
VALUES
('myNewProduct'
,123456789
,'NewColor'
,1
,100
,100
,GETDATE() );

2. Select Run to add the new product. After the query runs, the Messages pane displays Query
succeeded: Affected rows: 1.
Run an UPDATE query
Run the following UPDATE T-SQL statement to update the price of your new product.
1. In the query editor, replace the previous query with the following query:

UPDATE [SalesLT].[Product]
SET [ListPrice] = 125
WHERE Name = 'myNewProduct';

2. Select Run to update the specified row in the Product table. The Messages pane displays Query
succeeded: Affected rows: 1.
Run a DELETE query
Run the following DELETE T-SQL statement to remove your new product.
1. In the query editor, replace the previous query with the following query:

DELETE FROM [SalesLT].[Product]
WHERE Name = 'myNewProduct';

2. Select Run to delete the specified row in the Product table. The Messages pane displays Query
succeeded: Affected rows: 1.

Considerations and limitations


The following considerations and limitations apply when connecting to and querying Azure SQL Database with
the query editor.
Query editor limitations
The query editor doesn't support connecting to the master database. To connect to the master database,
use SQL Server Management Studio (SSMS), Visual Studio Code, or Azure Data Studio.
The query editor can't connect to a replica database with ApplicationIntent=ReadOnly . To connect in this way
from a rich client, use SSMS and specify ApplicationIntent=ReadOnly on the Additional Connection
Parameters tab in connection options. For more information, see Connect to a read-only replica.
The query editor has a 5-minute timeout for query execution. To run longer queries, use SSMS, Visual Studio
Code, or Azure Data Studio.
The query editor only supports cylindrical projection for geography data types.
The query editor doesn't support IntelliSense for database tables and views, but supports autocomplete for
names that have already been typed. For IntelliSense support, use SSMS, Visual Studio Code, or Azure Data
Studio.
Pressing F5 refreshes the query editor page, and any query currently in the editor isn't saved.
Connection considerations
For public connections to the query editor, you need to add your outbound IP address to the server's
allowed firewall rules to access your databases.
You don't need to add your IP address to the SQL server firewall rules if you have a Private Link
connection set up on the server, and you connect to the server from within the private virtual network.
Users need at least the role-based access control (RBAC) permission Read access to the server and
database to use the query editor. Anyone with this level of access can access the query editor. Users who
can't assign themselves as the Azure AD admin or access a SQL administrator account shouldn't access
the query editor.
Connection error troubleshooting
If you see the error message The X-CSRF-Signature header could not be validated , take the
following actions to resolve the issue:
Verify that your computer's clock is set to the right time and time zone. You can try to match your
computer's time zone with Azure by searching for the time zone for your database location, such as
East US.
If you're on a proxy network, make sure that the request header X-CSRF-Signature isn't being
modified or dropped.
You might get one of the following errors in the query editor:
Your local network settings might be preventing the Query Editor from issuing queries.
Please click here for instructions on how to configure your network settings.
A connection to the server could not be established. This might indicate an issue with
your local firewall configuration or your network proxy settings.
These errors occur because the query editor is unable to communicate through ports 443 and 1443. You
need to enable outbound HTTPS traffic on these ports. The following instructions walk you through this
process, depending on your OS. Your corporate IT department might need to grant approval to open this
connection on your local network.
For Windows:
1. Open Windows Defender Firewall .
2. On the left menu, select Advanced settings .
3. In Windows Defender Firewall with Advanced Security , select Outbound rules on the left
menu.
4. Select New Rule on the right menu.
5. In the New outbound rule wizard, follow these steps:
a. Select Port as the type of rule you want to create, and then select Next.
b. Select TCP.
c. Select Specific remote ports, enter 443, 1443, and then select Next.
d. Select Allow the connection if it is secure, select Next, and then select Next again.
e. Keep Domain, Private, and Public selected.
f. Give the rule a name, for example Access Azure SQL query editor, and optionally provide a
description. Then select Finish.
For MacOS:
1. On the Apple menu, open System Preferences .
2. Select Security & Privacy , and then select Firewall .
3. If Firewall is off, select Click the lock to make changes at the bottom, and select Turn on
Firewall .
4. Select Firewall Options .
5. In the Security & Privacy window, select Automatically allow signed software to receive
incoming connections .
For Linux:
Run these commands to update iptables :

sudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
sudo iptables -A OUTPUT -p tcp --dport 1443 -j ACCEPT
Next steps
What is Azure SQL?
Azure SQL glossary of terms
T-SQL differences between SQL Server and Azure SQL Database
Quickstart: Use SSMS to connect to and query Azure SQL Database or Azure SQL Managed Instance
Quickstart: Use Visual Studio Code to connect and query
Quickstart: Use Azure Data Studio to connect and query Azure SQL Database
Quickstart: Use Visual Studio Code to connect and
query
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Visual Studio Code is a graphical code editor for Linux, macOS, and Windows. It supports extensions, including
the mssql extension for querying a SQL Server instance, Azure SQL Database, an Azure SQL Managed Instance,
and a database in Azure Synapse Analytics. In this quickstart, you'll use Visual Studio Code to connect to Azure
SQL Database or Azure SQL Managed Instance and then run Transact-SQL statements to query, insert, update,
and delete data.

Prerequisites
A database in Azure SQL Database or Azure SQL Managed Instance. You can use one of these quickstarts
to create and then configure a database in Azure SQL Database:

ACTION | AZURE SQL DATABASE | AZURE SQL MANAGED INSTANCE
Create | Portal, CLI, PowerShell | Portal, CLI, PowerShell
Configure | Server-level IP firewall rule | Connectivity from a virtual machine (VM), Connectivity from on-premises
Load data | Adventure Works loaded per quickstart | Restore Wide World Importers, or restore or import Adventure Works from a BACPAC file from GitHub

IMPORTANT
The scripts in this article are written to use the Adventure Works database. With a SQL Managed Instance, you
must either import the Adventure Works database into an instance database or modify the scripts in this article
to use the Wide World Importers database.

Install Visual Studio Code


Make sure you have installed the latest Visual Studio Code and loaded the mssql extension. For guidance on
installing the mssql extension, see Install Visual Studio Code and mssql for Visual Studio Code .

Configure Visual Studio Code


macOS
For macOS, you need to install OpenSSL, which is a prerequisite for .NET Core, which the mssql extension uses. Open
your terminal and enter the following commands to install brew and OpenSSL.

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew update
brew install openssl
mkdir -p /usr/local/lib
ln -s /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib /usr/local/lib/
ln -s /usr/local/opt/openssl/lib/libssl.1.0.0.dylib /usr/local/lib/

Linux (Ubuntu)
No special configuration needed.
Windows
No special configuration needed.

Get server connection information


Get the connection information you need to connect to Azure SQL Database. You'll need the fully qualified
server name or host name, database name, and login information for the upcoming procedures.
1. Sign in to the Azure portal.
2. Navigate to the SQL databases or SQL Managed Instances page.
3. On the Overview page, review the fully qualified server name next to Server name for SQL Database,
or the fully qualified server name next to Host for a SQL Managed Instance. To copy the server name or
host name, hover over it and select the Copy icon.

Set language mode to SQL


In Visual Studio Code, set the language mode to SQL to enable mssql commands and T-SQL IntelliSense.
1. Open a new Visual Studio Code window.
2. Press Ctrl+N. A new plain text file opens.
3. Select Plain Text in the status bar's lower right-hand corner.
4. In the Select language mode drop-down menu that opens, select SQL.

Connect to your database


Use Visual Studio Code to establish a connection to your server.

IMPORTANT
Before continuing, make sure that you have your server and sign in information ready. Once you begin entering the
connection profile information, if you change your focus from Visual Studio Code, you have to restart creating the profile.

1. In Visual Studio Code, press Ctrl+Shift+P (or F1) to open the Command Palette.
2. Select MS SQL: Connect and choose Enter.
3. Select Create Connection Profile.
4. Follow the prompts to specify the new profile's connection properties. After specifying each value, choose
Enter to continue.

PROPERTY | SUGGESTED VALUE | DESCRIPTION
Server name | The fully qualified server name | Something like: mynewserver20170313.database.windows.net.
Database name | mySampleDatabase | The database to connect to.
Authentication | SQL Login | This tutorial uses SQL Authentication.
User name | User name | The user name of the server admin account used to create the server.
Password (SQL Login) | Password | The password of the server admin account used to create the server.
Save Password? | Yes or No | Select Yes if you do not want to enter the password each time.
Enter a name for this profile | A profile name, such as mySampleProfile | A saved profile speeds your connection on subsequent logins.

If successful, a notification appears saying your profile is created and connected.

Query data
Run the following SELECT Transact-SQL statement to query for the top 20 products by category.
1. In the editor window, paste the following SQL query.

SELECT pc.Name as CategoryName, p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p
ON pc.productcategoryid = p.productcategoryid;

2. Press Ctrl+Shift+E to run the query and display results from the Product and ProductCategory tables.
Insert data
Run the following INSERT Transact-SQL statement to add a new product into the SalesLT.Product table.
1. Replace the previous query with this one.

INSERT INTO [SalesLT].[Product]
( [Name]
, [ProductNumber]
, [Color]
, [ProductCategoryID]
, [StandardCost]
, [ListPrice]
, [SellStartDate]
)
VALUES
('myNewProduct'
,123456789
,'NewColor'
,1
,100
,100
,GETDATE() );

2. Press Ctrl+Shift+E to insert a new row in the Product table.

Update data
Run the following UPDATE Transact-SQL statement to update the added product.
1. Replace the previous query with this one:

UPDATE [SalesLT].[Product]
SET [ListPrice] = 125
WHERE Name = 'myNewProduct';

2. Press Ctrl+Shift+E to update the specified row in the Product table.

Delete data
Run the following DELETE Transact-SQL statement to remove the new product.
1. Replace the previous query with this one:

DELETE FROM [SalesLT].[Product]
WHERE Name = 'myNewProduct';

2. Press Ctrl+Shift+E to delete the specified row in the Product table.

Next steps
To connect and query using SQL Server Management Studio, see Quickstart: Use SQL Server Management
Studio to connect to a database in Azure SQL Database and query data.
To connect and query using the Azure portal, see Quickstart: Use the SQL Query editor in the Azure portal to
connect and query data.
For an MSDN magazine article on using Visual Studio Code, see Create a database IDE with MSSQL
extension blog post.
Connect to Azure SQL Database with Azure AD
Multi-Factor Authentication
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database


This article provides a C# program that connects to Azure SQL Database. The program uses interactive mode
authentication, which supports Azure AD Multi-Factor Authentication.
For more information about Multi-Factor Authentication support for SQL tools, see Using multi-factor Azure
Active Directory authentication.

Multi-Factor Authentication for Azure SQL Database


Active Directory Interactive authentication supports multi-factor authentication using
Microsoft.Data.SqlClient to connect to Azure SQL data sources. In a client C# program, the
SqlAuthenticationMethod.ActiveDirectoryInteractive enum value directs the system to use the Azure Active
Directory (Azure AD) interactive mode that supports Multi-Factor Authentication to connect to Azure SQL
Database. The user who runs the program sees the following dialog boxes:
A dialog box that displays an Azure AD user name and asks for the user's password.
If the user's domain is federated with Azure AD, the dialog box doesn't appear, because no password is
needed.
If the Azure AD policy imposes Multi-Factor Authentication on the user, a dialog box to sign in to your
account will display.
The first time a user goes through Multi-Factor Authentication, the system displays a dialog box that asks
for a mobile phone number to send text messages to. Each message provides the verification code that
the user must enter in the next dialog box.
A dialog box that asks for a Multi-Factor Authentication verification code, which the system has sent to a
mobile phone.
For information about how to configure Azure AD to require Multi-Factor Authentication, see Getting started
with Azure AD Multi-Factor Authentication in the cloud.
For screenshots of these dialog boxes, see Configure multi-factor authentication for SQL Server Management
Studio and Azure AD.

TIP
You can search .NET Framework APIs with the .NET API Browser tool page.
You can also search directly with the optional ?term=<search value> parameter.

Prerequisite
Before you begin, you should have a logical SQL server created and available.
Set an Azure AD admin for your server
For the C# example to run, a logical SQL server admin needs to assign an Azure AD admin for your server.
On the SQL server page, select Active Directory admin > Set admin.
For more information about Azure AD admins and users for Azure SQL Database, see the screenshots in
Configure and manage Azure Active Directory authentication with SQL Database.

Microsoft.Data.SqlClient
The C# example relies on the Microsoft.Data.SqlClient namespace. For more information, see Using Azure Active
Directory authentication with SqlClient.

NOTE
System.Data.SqlClient uses the Azure Active Directory Authentication Library (ADAL), which will be deprecated. If you're
using the System.Data.SqlClient namespace for Azure Active Directory authentication, migrate applications to
Microsoft.Data.SqlClient and the Microsoft Authentication Library (MSAL). For more information about using Azure AD
authentication with SqlClient, see Using Azure Active Directory authentication with SqlClient.

Verify with SQL Server Management Studio


Before you run the C# example, it's a good idea to check that your setup and configurations are correct in SQL
Server Management Studio (SSMS). Any C# program failure can then be narrowed to source code.
Verify server-level firewall IP addresses
Run SSMS from the same computer, in the same building, where you plan to run the C# example. For this test,
any Authentication mode is OK. If there's any indication that the server isn't accepting your IP address, see
server-level and database-level firewall rules for help.
Verify Azure Active Directory Multi-Factor Authentication
Run SSMS again, this time with Authentication set to Azure Active Directory - Universal with MFA. This
option requires SSMS version 17.5 or later.
For more information, see Configure Multi-Factor Authentication for SSMS and Azure AD.

NOTE
If you are a guest user in the database, you also need to provide the Azure AD domain name for the database: Select
Options > AD domain name or tenant ID. If you are running SSMS 18.x or later, the AD domain name or tenant ID
is no longer needed for guest users because 18.x or later automatically recognizes it.
To find the domain name in the Azure portal, select Azure Active Directory > Custom domain names. In the C#
example program, providing a domain name is not necessary.

C# code example
NOTE
If you are using .NET Core, you will want to use the Microsoft.Data.SqlClient namespace. For more information, see the
following blog.

This is an example of C# source code.


using System;
using Microsoft.Data.SqlClient;

public class Program
{
    public static void Main(string[] args)
    {
        // Use your own server, database, and user ID.
        // Connection string - user ID is not provided and is asked for interactively.
        string ConnectionString = @"Server=<your server>.database.windows.net; Authentication=Active Directory Interactive; Database=<your database>";

        using (SqlConnection conn = new SqlConnection(ConnectionString))
        {
            conn.Open();
            Console.WriteLine("ConnectionString2 succeeded.");
            using (var cmd = new SqlCommand("SELECT @@Version", conn))
            {
                Console.WriteLine("select @@version");
                var result = cmd.ExecuteScalar();
                Console.WriteLine(result.ToString());
            }
        }
        Console.ReadKey();
    }
}

This is an example of the C# test output.

ConnectionString2 succeeded.
select @@version
Microsoft SQL Azure (RTM) - 12.0.2000.8
...

Next steps
Azure Active Directory server principals
Azure AD-only authentication with Azure SQL
Using multi-factor Azure Active Directory authentication
Use Java and JDBC with Azure SQL Database

This topic demonstrates creating a sample application that uses Java and JDBC to store and retrieve information
in Azure SQL Database.
JDBC is the standard Java API to connect to traditional relational databases.

Prerequisites
An Azure account. If you don't have one, get a free trial.
Azure Cloud Shell or Azure CLI. We recommend Azure Cloud Shell so you'll be logged in automatically and
have access to all the tools you'll need.
A supported Java Development Kit, version 8 (included in Azure Cloud Shell).
The Apache Maven build tool.

Prepare the working environment


We are going to use environment variables to limit typing mistakes, and to make it easier for you to customize
the following configuration for your specific needs.
Set up those environment variables by using the following commands:

AZ_RESOURCE_GROUP=database-workshop
AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
AZ_LOCATION=<YOUR_AZURE_REGION>
AZ_SQL_SERVER_USERNAME=demo
AZ_SQL_SERVER_PASSWORD=<YOUR_AZURE_SQL_PASSWORD>
AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>

Replace the placeholders with the following values, which are used throughout this article:
<YOUR_DATABASE_NAME> : The name of your Azure SQL Database server. It should be unique across Azure.
<YOUR_AZURE_REGION> : The Azure region you'll use. You can use eastus by default, but we recommend that
you configure a region closer to where you live. You can get the full list of available regions by entering
az account list-locations .
<YOUR_AZURE_SQL_PASSWORD> : The password of your Azure SQL Database server. That password should have a
minimum of eight characters. The characters should be from three of the following categories: English
uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and
so on).
<YOUR_LOCAL_IP_ADDRESS> : The IP address of your local computer, from which you'll run your Java application.
One convenient way to find it is to point your browser to whatismyip.akamai.com.
Next, create a resource group using the following command:

az group create \
--name $AZ_RESOURCE_GROUP \
--location $AZ_LOCATION \
| jq
NOTE
We use the jq utility to display JSON data and make it more readable. This utility is installed by default on Azure Cloud
Shell. If you don't like that utility, you can safely remove the | jq part of all the commands we'll use.

Create an Azure SQL Database instance


The first thing we'll create is a managed Azure SQL Database server.

NOTE
You can read more detailed information about creating Azure SQL Database servers in Quickstart: Create an Azure SQL
Database single database.

In Azure Cloud Shell, run the following command:

az sql server create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME \
    --location $AZ_LOCATION \
    --admin-user $AZ_SQL_SERVER_USERNAME \
    --admin-password $AZ_SQL_SERVER_PASSWORD \
    | jq

This command creates an Azure SQL Database server.


Configure a firewall rule for your Azure SQL Database server
Azure SQL Database instances are secured by default. They have a firewall that doesn't allow any incoming
connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to
access the database server.
Because you configured your local IP address at the beginning of this article, you can open the server's firewall by
running the following command:

az sql server firewall-rule create \
    --resource-group $AZ_RESOURCE_GROUP \
    --name $AZ_DATABASE_NAME-database-allow-local-ip \
    --server $AZ_DATABASE_NAME \
    --start-ip-address $AZ_LOCAL_IP_ADDRESS \
    --end-ip-address $AZ_LOCAL_IP_ADDRESS \
    | jq

Configure an Azure SQL database


The Azure SQL Database server that you created earlier is empty. It doesn't have any database that you can use
with the Java application. Create a new database called demo by running the following command:

az sql db create \
--resource-group $AZ_RESOURCE_GROUP \
--name demo \
--server $AZ_DATABASE_NAME \
| jq

Create a new Java project


Using your favorite IDE, create a new Java project, and add a pom.xml file in its root directory:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>demo</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>demo</name>

<properties>
<java.version>1.8</java.version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>

<dependencies>
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>7.4.1.jre8</version>
</dependency>
</dependencies>
</project>

This file is an Apache Maven pom.xml file that configures our project to use:


Java 8
A recent SQL Server driver for Java
Prepare a configuration file to connect to Azure SQL database
Create a src/main/resources/application.properties file, and add:

url=jdbc:sqlserver://$AZ_DATABASE_NAME.database.windows.net:1433;database=demo;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;
user=demo@$AZ_DATABASE_NAME
password=$AZ_SQL_SERVER_PASSWORD

Replace the two $AZ_DATABASE_NAME variables with the value that you configured at the beginning of this
article.
Replace the $AZ_SQL_SERVER_PASSWORD variable with the value that you configured at the beginning of this
article.
Create an SQL file to generate the database schema
We will use a src/main/resources/schema.sql file to create a database schema. Create that file with the
following content:

DROP TABLE IF EXISTS todo;


CREATE TABLE todo (id INT PRIMARY KEY, description VARCHAR(255), details VARCHAR(4096), done BIT);

Code the application


Connect to the database
Next, add the Java code that will use JDBC to store and retrieve data from your Azure SQL database.
Create a src/main/java/DemoApplication.java file that contains:
package com.example.demo;

import java.sql.*;
import java.util.*;
import java.util.logging.Logger;

public class DemoApplication {

    private static final Logger log;

    static {
        System.setProperty("java.util.logging.SimpleFormatter.format", "[%4$-7s] %5$s %n");
        log = Logger.getLogger(DemoApplication.class.getName());
    }

    public static void main(String[] args) throws Exception {

        log.info("Loading application properties");
        Properties properties = new Properties();
        properties.load(DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));

        log.info("Connecting to the database");
        Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
        log.info("Database connection test: " + connection.getCatalog());

        log.info("Create database schema");
        Scanner scanner = new Scanner(DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
        Statement statement = connection.createStatement();
        while (scanner.hasNextLine()) {
            statement.execute(scanner.nextLine());
        }

        /*
        Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
        insertData(todo, connection);
        todo = readData(connection);
        todo.setDetails("congratulations, you have updated data!");
        updateData(todo, connection);
        deleteData(todo, connection);
        */

        log.info("Closing database connection");
        connection.close();
    }
}

This Java code will use the application.properties and the schema.sql files that we created earlier to
connect to the SQL Server database and create a schema that will store our data.
In this file, you can see that we commented out calls to methods that insert, read, update, and delete data: we will
code those methods in the rest of this article, and you will be able to uncomment them one after the other.

NOTE
The database credentials are stored in the user and password properties of the application.properties file. Those
credentials are used when executing DriverManager.getConnection(properties.getProperty("url"), properties); ,
as the properties file is passed as an argument.

You can now execute this main class with your favorite tool:
Using your IDE, you should be able to right-click on the DemoApplication class and execute it.
Using Maven, you can run the application by executing:
mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication" .

The application should connect to the Azure SQL Database, create a database schema, and then close the
connection, as you should see in the console logs:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Closing database connection

Create a domain class


Create a new Todo Java class, next to the DemoApplication class, and add the following code:
package com.example.demo;

public class Todo {

    private Long id;
    private String description;
    private String details;
    private boolean done;

    public Todo() {
    }

    public Todo(Long id, String description, String details, boolean done) {
        this.id = id;
        this.description = description;
        this.details = details;
        this.done = done;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    public String getDetails() {
        return details;
    }

    public void setDetails(String details) {
        this.details = details;
    }

    public boolean isDone() {
        return done;
    }

    public void setDone(boolean done) {
        this.done = done;
    }

    @Override
    public String toString() {
        return "Todo{" +
                "id=" + id +
                ", description='" + description + '\'' +
                ", details='" + details + '\'' +
                ", done=" + done +
                '}';
    }
}

This class is a domain model mapped to the todo table that you created when executing the schema.sql script.
Insert data into Azure SQL database
In the src/main/java/DemoApplication.java file, after the main method, add the following method to insert data
into the database:

private static void insertData(Todo todo, Connection connection) throws SQLException {
    log.info("Insert data");
    PreparedStatement insertStatement = connection
        .prepareStatement("INSERT INTO todo (id, description, details, done) VALUES (?, ?, ?, ?);");

    insertStatement.setLong(1, todo.getId());
    insertStatement.setString(2, todo.getDescription());
    insertStatement.setString(3, todo.getDetails());
    insertStatement.setBoolean(4, todo.isDone());
    insertStatement.executeUpdate();
}

You can now uncomment the two following lines in the main method:

Todo todo = new Todo(1L, "configuration", "congratulations, you have set up JDBC correctly!", true);
insertData(todo, connection);

Executing the main class should now produce the following output:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Closing database connection

Reading data from Azure SQL database


Let's read the data previously inserted, to validate that our code works correctly.
In the src/main/java/DemoApplication.java file, after the insertData method, add the following method to read
data from the database:

private static Todo readData(Connection connection) throws SQLException {
    log.info("Read data");
    PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
    ResultSet resultSet = readStatement.executeQuery();
    if (!resultSet.next()) {
        log.info("There is no data in the database!");
        return null;
    }
    Todo todo = new Todo();
    todo.setId(resultSet.getLong("id"));
    todo.setDescription(resultSet.getString("description"));
    todo.setDetails(resultSet.getString("details"));
    todo.setDone(resultSet.getBoolean("done"));
    log.info("Data read from the database: " + todo.toString());
    return todo;
}

You can now uncomment the following line in the main method:

todo = readData(connection);

Executing the main class should now produce the following output:
[INFO ] Loading application properties
[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Closing database connection

Updating data in Azure SQL Database


Let's update the data we previously inserted.
Still in the src/main/java/DemoApplication.java file, after the readData method, add the following method to
update data inside the database:

private static void updateData(Todo todo, Connection connection) throws SQLException {
    log.info("Update data");
    PreparedStatement updateStatement = connection
        .prepareStatement("UPDATE todo SET description = ?, details = ?, done = ? WHERE id = ?;");

    updateStatement.setString(1, todo.getDescription());
    updateStatement.setString(2, todo.getDetails());
    updateStatement.setBoolean(3, todo.isDone());
    updateStatement.setLong(4, todo.getId());
    updateStatement.executeUpdate();
    readData(connection);
}

You can now uncomment the two following lines in the main method:

todo.setDetails("congratulations, you have updated data!");
updateData(todo, connection);

Executing the main class should now produce the following output:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Update data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have updated data!', done=true}
[INFO ] Closing database connection

Deleting data in Azure SQL database


Finally, let's delete the data we previously inserted.
Still in the src/main/java/DemoApplication.java file, after the updateData method, add the following method to
delete data inside the database:
private static void deleteData(Todo todo, Connection connection) throws SQLException {
log.info("Delete data");
PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
deleteStatement.setLong(1, todo.getId());
deleteStatement.executeUpdate();
readData(connection);
}

You can now uncomment the following line in the main method:

deleteData(todo, connection);

Executing the main class should now produce the following output:

[INFO ] Loading application properties


[INFO ] Connecting to the database
[INFO ] Database connection test: demo
[INFO ] Create database schema
[INFO ] Insert data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have set up JDBC correctly!', done=true}
[INFO ] Update data
[INFO ] Read data
[INFO ] Data read from the database: Todo{id=1, description='configuration', details='congratulations, you
have updated data!', done=true}
[INFO ] Delete data
[INFO ] Read data
[INFO ] There is no data in the database!
[INFO ] Closing database connection

Conclusion and resources clean up


Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure SQL
database.
To clean up all resources used during this quickstart, delete the resource group using the following command:

az group delete \
--name $AZ_RESOURCE_GROUP \
--yes

Next steps
Design your first database in Azure SQL Database
Microsoft JDBC Driver for SQL Server
Report issues/ask questions
Set up a local development environment for Azure
SQL Database

APPLIES TO: Azure SQL Database


This article teaches you to set up the local development experience for Azure SQL Database. The local
development experience for Azure SQL Database enables developers and database professionals to design, edit,
build/validate, publish, and run database schemas for databases in Azure SQL Database using a containerized
environment.

Prerequisites
Before you configure the local development environment for Azure SQL Database, make sure you have met the
following hardware and software requirements:
Software requirements:
Currently supported on Windows 10 or later release, macOS Mojave or later release, and Linux
(preferably Ubuntu 18.04 or later release)
Azure Data Studio or VSCode
Minimum hardware requirements:
8 GB RAM
10 GB available disk space

Install Docker Desktop


The local development environment for Azure SQL Database uses the Azure SQL Database emulator, a
containerized database with close fidelity to the Azure SQL Database public service. The Azure SQL Database
emulator is implemented as a Docker container.
Install Docker Desktop. If you are using Windows, set up Docker Desktop for Windows with WSL 2.
Ensure that Docker Desktop is running before using your local development environment for Azure SQL
Database.

Install extension
There are different extensions to install depending on your preferred development tool.

EXTENSION: The mssql extension for Visual Studio Code
Visual Studio Code: Install the mssql extension.
Azure Data Studio: Installation is not necessary, as the functionality is natively available.

EXTENSION: SQL Database Projects extension (Preview)
Visual Studio Code: Installation is not necessary, as the SQL Database Projects extension is bundled with the
mssql extension and is automatically installed and updated when the mssql extension is installed or updated.
Azure Data Studio: Install the SQL Database Projects extension.


If you are using VSCode, install the mssql extension for Visual Studio Code.
The mssql extension enables you to connect and run queries and test scripts against a database. The database
may be running in the Azure SQL Database emulator locally, or it may be a database in the global Azure SQL
Database service.
To install the extension:
1. In VSCode, select View > Command Palette, press Ctrl+Shift+P, or press F1 to open the Command Palette.
2. In the Command Palette, select Extensions: Install Extensions from the dropdown.
3. In the Extensions pane, type mssql.
4. Select the SQL Server (mssql) extension, and then select Install.
5. After the installation completes, select Reload to enable the extension.

Begin using your local development environment


You have now set up your local development environment for Azure SQL Database. Next, Create a database
project for a local Azure SQL Database development environment.

Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Create a database project for a local Azure SQL Database development environment
Publish a database project for Azure SQL Database to the local emulator
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator
Create a project for a local Azure SQL Database
development environment

APPLIES TO: Azure SQL Database


The Azure SQL Database local development experience empowers application developers and database
professionals to design, edit, build/validate, publish, and run database schemas for databases directly on their
workstation using an Azure SQL Database containerized environment. As part of this workflow, you will create a
SQL Database Project. The SQL Database Project extension allows you to create a new blank project, create a
new project from a database, and open previously created projects.

Prerequisites
Before creating or opening a SQL Database project, follow the steps in Set up a local development environment
for Azure SQL Database to configure your environment.

Create a new project


In the Projects view select the New Project button and enter a project name in the text input that appears. In
the Select a Folder dialog that appears, choose a directory for the project's folder, .sqlproj file, and other
contents to reside in.
The empty project is opened and visible in the Projects view for editing.

Create a project from Azure SQL Database


In the Project view, select the Import Project from Database button and connect to a database in Azure SQL
Database. Once connected, select a database from the list of available databases and set the name of the project.
Finally, select a target structure of the extraction. The new project opens and contains SQL scripts for the
contents of the selected database.

Open an existing project


In the Projects view, select Open Project and open an existing .sqlproj file from the file picker that appears.
Existing projects can originate from Azure Data Studio, Visual Studio Code or Visual Studio SQL Server Data
Tools.
The existing project opens and its contents are visible in the Projects view for editing.

Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Set up a local development environment for Azure SQL Database
Quickstart: Create a local development environment for Azure SQL Database
Publish a database project for Azure SQL Database to the local emulator
Introducing the Azure SQL Database emulator
Publish a Database Project for Azure SQL Database
to the local emulator

APPLIES TO: Azure SQL Database


This article provides steps to build and publish a Database Project to the Azure SQL Database emulator
(preview).

Overview
The Azure SQL Database local development experience allows users to source control Database Projects and
work offline when needed. The local development experience uses the Azure SQL Database emulator, a
containerized database with close fidelity to the Azure SQL Database public service, as the runtime host for
Database Projects that can be published and tested locally as part of the developer's inner loop. This article
describes how to publish a Database Project to the local emulator.

Prerequisites
Before you can publish a Database Project to the local emulator, you must:
Follow the steps in Set up a local development environment for Azure SQL Database to configure your
environment.
Create a Database Project by following the steps in Create a SQL Database Project for a local Azure SQL
Database development environment.

Build and publish a Database Project


You must first build your Database Project before publishing. To complete this process:
1. First, follow the steps in Build a Database Project.
2. Then follow the steps in Publish the SQL project and deploy to a local Container.

Next steps
Learn more about the local development experience for Azure SQL Database:
What is the local development experience for Azure SQL Database?
Set up a local development environment for Azure SQL Database
Create a Database Project for a local Azure SQL Database development environment
Quickstart: Create a local development environment for Azure SQL Database
Introducing the Azure SQL Database emulator
Create and manage servers and single databases in
Azure SQL Database

You can create and manage servers and single databases in Azure SQL Database using the Azure portal,
PowerShell, the Azure CLI, REST API, and Transact-SQL.

The Azure portal


You can create the resource group for Azure SQL Database ahead of time or while creating the server itself.
Create a server
To create a server using the Azure portal, create a new server resource from Azure Marketplace. Alternatively,
you can create the server when you deploy an Azure SQL Database.

Create a blank or sample database


To create a single Azure SQL Database using the Azure portal, choose the Azure SQL Database resource in Azure
Marketplace. You can create the resource group and server ahead of time or while creating the single database
itself. You can create a blank database or create a sample database based on Adventure Works LT.
IMPORTANT
For information on selecting the pricing tier for your database, see DTU-based purchasing model and vCore-based
purchasing model.

Manage an existing server


To manage an existing server, navigate to the server using a number of methods - such as from a specific
database page, the SQL servers page, or the All resources page.
To manage an existing database, navigate to the SQL databases page and select the database you wish to
manage. The following screenshot shows how to begin setting a server-level firewall for a database from the
Overview page for a database.
IMPORTANT
To configure performance properties for a database, see DTU-based purchasing model and vCore-based purchasing
model.

TIP
For an Azure portal quickstart, see Create a database in SQL Database in the Azure portal.

PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.

To create and manage servers, single and pooled databases, and server-level firewalls with Azure PowerShell,
use the following PowerShell cmdlets. If you need to install or upgrade PowerShell, see Install Azure PowerShell
module.

TIP
For PowerShell example scripts, see Use PowerShell to create a database in SQL Database and configure a server-level
firewall rule and Monitor and scale a database in SQL Database using PowerShell.
CMDLET  DESCRIPTION

New-AzSqlDatabase  Creates a database
Get-AzSqlDatabase  Gets one or more databases
Set-AzSqlDatabase  Sets properties for a database, or moves an existing database into an elastic pool
Remove-AzSqlDatabase  Removes a database
New-AzResourceGroup  Creates a resource group
New-AzSqlServer  Creates a server
Get-AzSqlServer  Returns information about servers
Set-AzSqlServer  Modifies properties of a server
Remove-AzSqlServer  Removes a server
New-AzSqlServerFirewallRule  Creates a server-level firewall rule
Get-AzSqlServerFirewallRule  Gets firewall rules for a server
Set-AzSqlServerFirewallRule  Modifies a firewall rule in a server
Remove-AzSqlServerFirewallRule  Deletes a firewall rule from a server.
New-AzSqlServerVirtualNetworkRule  Creates a virtual network rule, based on a subnet that is a Virtual Network service endpoint.
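
As a quick illustration of how these cmdlets fit together, the following sketch creates a resource group, a server, a server-level firewall rule, and a single database. The resource names, location, and IP address are placeholders for illustration only, not values from this article.

# A minimal sketch (placeholder names and values) that strings the cmdlets above together.
New-AzResourceGroup -Name "myResourceGroup" -Location "eastus"

# Prompt for the server admin login and password.
$cred = Get-Credential

New-AzSqlServer -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -Location "eastus" -SqlAdministratorCredentials $cred

# Allow a single client IP address through the server-level firewall.
New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -FirewallRuleName "AllowMyClientIp" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"

# Create a single database with default settings.
New-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" -DatabaseName "mySampleDatabase"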

Azure CLI
To create and manage the servers, databases, and firewalls with Azure CLI, use the following Azure CLI
commands. Use the Cloud Shell to run Azure CLI in your browser, or install it on macOS, Linux, or Windows. For
creating and managing elastic pools, see Elastic pools.

TIP
For an Azure CLI quickstart, see Create a single Azure SQL Database using Azure CLI. For Azure CLI example scripts, see
Use CLI to create a database in Azure SQL Database and configure a SQL Database firewall rule and Use CLI to monitor
and scale a database in Azure SQL Database.

CMDLET  DESCRIPTION

az sql db create  Creates a database
az sql db list  Lists all databases and data warehouses in a server, or all databases in an elastic pool
az sql db list-editions  Lists available service objectives and storage limits
az sql db list-usages  Returns database usages
az sql db show  Gets a database or data warehouse
az sql db update  Updates a database
az sql db delete  Removes a database
az group create  Creates a resource group
az sql server create  Creates a server
az sql server list  Lists servers
az sql server list-usages  Returns server usages
az sql server show  Gets a server
az sql server update  Updates a server
az sql server delete  Deletes a server
az sql server firewall-rule create  Creates a server firewall rule
az sql server firewall-rule list  Lists the firewall rules on a server
az sql server firewall-rule show  Shows the detail of a firewall rule
az sql server firewall-rule update  Updates a firewall rule
az sql server firewall-rule delete  Deletes a firewall rule

Transact-SQL (T-SQL)
To create and manage the servers, databases, and firewalls with Transact-SQL, use the following T-SQL
commands. You can issue these commands using the Azure portal, SQL Server Management Studio, Visual
Studio Code, or any other program that can connect to a server in SQL Database and pass Transact-SQL
commands. For managing elastic pools, see Elastic pools.

TIP
For a quickstart using SQL Server Management Studio on Microsoft Windows, see Azure SQL Database: Use SQL Server
Management Studio to connect and query data. For a quickstart using Visual Studio Code on the macOS, Linux, or
Windows, see Azure SQL Database: Use Visual Studio Code to connect and query data.

IMPORTANT
You cannot create or delete a server using Transact-SQL.
COMMAND  DESCRIPTION

CREATE DATABASE  Creates a new single database. You must be connected to the master database to create a new database.
ALTER DATABASE  Modifies a database or elastic pool.
DROP DATABASE  Deletes a database.
sys.database_service_objectives  Returns the edition (service tier), service objective (pricing tier), and elastic pool name, if any, for Azure SQL Database or a dedicated SQL pool in Azure Synapse Analytics. If logged on to the master database in a server in SQL Database, returns information on all databases. For Azure Synapse Analytics, you must be connected to the master database.
sys.dm_db_resource_stats  Returns CPU, IO, and memory consumption for a database in Azure SQL Database. One row exists for every 15 seconds, even if there's no activity in the database.
sys.resource_stats  Returns CPU usage and storage data for a database in Azure SQL Database. The data is collected and aggregated within five-minute intervals.
sys.database_connection_stats  Contains statistics for SQL Database connectivity events, providing an overview of database connection successes and failures.
sys.event_log  Returns successful Azure SQL Database connections and connection failures. You can use this information to track or troubleshoot your database activity with SQL Database.
sp_set_firewall_rule  Creates or updates the server-level firewall settings for your server. This stored procedure is only available in the master database to the server-level principal login. A server-level firewall rule can only be created using Transact-SQL after the first server-level firewall rule has been created by a user with Azure-level permissions.
sys.firewall_rules  Returns information about the server-level firewall settings associated with your database in Azure SQL Database.
sp_delete_firewall_rule  Removes server-level firewall settings from your server. This stored procedure is only available in the master database to the server-level principal login.
sp_set_database_firewall_rule  Creates or updates the database-level firewall rules for your database in Azure SQL Database. Database firewall rules can be configured for the master database, and for user databases on SQL Database. Database firewall rules are useful when using contained database users.
sys.database_firewall_rules  Returns information about the database-level firewall settings associated with your database in Azure SQL Database.
sp_delete_database_firewall_rule  Removes database-level firewall setting from a database.
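
As an illustration, the following sketch runs two of the commands above from PowerShell with Invoke-Sqlcmd (from the SqlServer module), connected to the master database. The server name, credentials, database name, and IP addresses are placeholders, not values from this article.

# A minimal sketch (placeholder values): run T-SQL management commands against the master database.
# Requires the SqlServer PowerShell module (Install-Module SqlServer).
$server = "myserver.database.windows.net"
$cred   = Get-Credential   # server admin login

# Create a new single database.
Invoke-Sqlcmd -ServerInstance $server -Database "master" -Credential $cred `
    -Query "CREATE DATABASE mySampleDatabase;"

# Create or update a server-level firewall rule.
Invoke-Sqlcmd -ServerInstance $server -Database "master" -Credential $cred `
    -Query "EXEC sp_set_firewall_rule @name = N'AllowMyClientIp', @start_ip_address = '203.0.113.10', @end_ip_address = '203.0.113.10';"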

REST API
To create and manage the servers, databases, and firewalls, use these REST API requests.

COMMAND  DESCRIPTION

Servers - Create or update Creates or updates a new server.

Servers - Delete Deletes a SQL server.

Servers - Get Gets a server.

Servers - List Returns a list of servers in a subscription.

Servers - List by resource group Returns a list of servers in a resource group.

Servers - Update Updates an existing server.

Databases - Create or update Creates a new database or updates an existing database.

Databases - Delete Deletes a database.

Databases - Get Gets a database.

Databases - List by elastic pool Returns a list of databases in an elastic pool.

Databases - List by server Returns a list of databases in a server.

Databases - Update Updates an existing database.

Firewall rules - Create or update Creates or updates a firewall rule.

Firewall rules - Delete Deletes a firewall rule.

Firewall rules - Get Gets a firewall rule.

Firewall rules - List by server Returns a list of firewall rules.

Next steps
To learn about migrating a SQL Server database to Azure, see Migrate to Azure SQL Database.
For information about supported features, see Features.
PowerShell for DNS Alias to Azure SQL Database

APPLIES TO: Azure SQL Database Azure Synapse Analytics


This article provides a PowerShell script that demonstrates how you can manage a DNS alias for the SQL server
hosting your Azure SQL Database.

NOTE
This article has been updated to use either the Azure PowerShell Az module or Azure CLI. You can still use the AzureRM
module, which will continue to receive bug fixes until at least December 2020.
To learn more about the Az module and AzureRM compatibility, see Introducing the Azure PowerShell Az module. For
installation instructions, see Install Azure PowerShell or Install Azure CLI.

DNS alias in connection string


To connect to a logical SQL server, a client such as SQL Server Management Studio (SSMS) can provide the DNS
alias name instead of the true server name. In the following example server string, the alias any-unique-alias-
name replaces the first dot-delimited node in the four node server string:
<yourServer>.database.windows.net
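
For illustration, the sketch below connects through such an alias with Invoke-Sqlcmd from the SqlServer PowerShell module; the alias, database name, and credentials are placeholders, not values from this article.

# A minimal sketch (placeholder values): connect through the DNS alias rather than the true server name.
# Requires the SqlServer PowerShell module for Invoke-Sqlcmd.
$cred = Get-Credential   # a login valid on the server the alias currently points to

Invoke-Sqlcmd -ServerInstance "any-unique-alias-name.database.windows.net" `
    -Database "mySampleDatabase" -Credential $cred `
    -Query "SELECT @@SERVERNAME AS ServerName;"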

Prerequisites
If you want to run the demo PowerShell script given in this article, the following prerequisites apply:
An Azure subscription and account. For a free trial, see Azure trials.
Two servers.

Example
The following code example starts by assigning literal values to several variables.
To run the code, edit the placeholder values to match real values in your system.
PowerShell

The cmdlets used are the following:


New-AzSqlServerDNSAlias: Creates a DNS alias in the Azure SQL Database service system. The alias refers to
server 1.
Get-AzSqlServerDNSAlias: Get and list all the aliases assigned to server 1.
Set-AzSqlServerDNSAlias: Modifies the server name that the alias is configured to refer to, from server 1 to
server 2.
Remove-AzSqlServerDNSAlias: Remove the alias from server 2, by using the name of the alias.
To install or upgrade, see Install Azure PowerShell module.
Use Get-Module -ListAvailable Az in powershell_ise.exe, to find the version.
$subscriptionName = '<subscriptionName>';
$sqlServerDnsAliasName = '<aliasName>';
$resourceGroupName = '<resourceGroupName>';
$sqlServerName = '<sqlServerName>';
$resourceGroupName2 = '<resourceGroupNameTwo>'; # can be same or different than $resourceGroupName
$sqlServerName2 = '<sqlServerNameTwo>'; # must be different from $sqlServerName.

# login to Azure
Connect-AzAccount -SubscriptionName $subscriptionName;
$subscriptionId = Get-AzSubscription -SubscriptionName $subscriptionName;

Write-Host 'Assign an alias to server 1...';
New-AzSqlServerDnsAlias -ResourceGroupName $resourceGroupName -ServerName $sqlServerName `
    -Name $sqlServerDnsAliasName;

Write-Host 'Get the aliases assigned to server 1...';
Get-AzSqlServerDnsAlias -ResourceGroupName $resourceGroupName -ServerName $sqlServerName;

Write-Host 'Move the alias from server 1 to server 2...';
Set-AzSqlServerDnsAlias -ResourceGroupName $resourceGroupName2 -TargetServerName $sqlServerName2 `
    -Name $sqlServerDnsAliasName `
    -SourceServerResourceGroup $resourceGroupName -SourceServerName $sqlServerName `
    -SourceServerSubscriptionId $subscriptionId.Id;

Write-Host 'Get the aliases assigned to server 2...';
Get-AzSqlServerDnsAlias -ResourceGroupName $resourceGroupName2 -ServerName $sqlServerName2;

Write-Host 'Remove the alias from server 2...';
Remove-AzSqlServerDnsAlias -ResourceGroupName $resourceGroupName2 -ServerName $sqlServerName2 `
    -Name $sqlServerDnsAliasName;

Next steps
For a full explanation of the DNS alias feature for SQL Database, see DNS alias for Azure SQL Database.
Manage file space for databases in Azure SQL
Database

APPLIES TO: Azure SQL Database


This article describes different types of storage space for databases in Azure SQL Database, and steps that can
be taken when the file space allocated needs to be explicitly managed.

NOTE
This article does not apply to Azure SQL Managed Instance.

Overview
With Azure SQL Database, there are workload patterns where the allocation of underlying data files for
databases can become larger than the amount of used data pages. This condition can occur when space used
increases and data is subsequently deleted. The reason is that allocated file space is not automatically
reclaimed when data is deleted.
Monitoring file space usage and shrinking data files may be necessary in the following scenarios:
Allow data growth in an elastic pool when the file space allocated for its databases reaches the pool max size.
Allow decreasing the max size of a single database or elastic pool.
Allow changing a single database or elastic pool to a different service tier or performance tier with a lower
max size.

NOTE
Shrink operations should not be considered a regular maintenance operation. Data and log files that grow due to regular,
recurring business operations do not require shrink operations.

Monitoring file space usage


Most storage space metrics displayed in the following APIs only measure the size of used data pages:
Azure Resource Manager based metrics APIs, including the PowerShell Get-AzMetric cmdlet (see the example after this list)
However, the following APIs also measure the size of space allocated for databases and elastic pools:
T-SQL: sys.resource_stats
T-SQL: sys.elastic_pool_resource_stats
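
As an example of the Azure Resource Manager metrics route mentioned above, the following sketch retrieves storage metrics with the Az PowerShell modules. The resource ID is a placeholder, and the metric names shown ("storage" for data space used and "allocated_data_storage" for data space allocated) should be confirmed for your database with Get-AzMetricDefinition.

# A minimal sketch (placeholder resource ID; confirm metric names with Get-AzMetricDefinition).
$dbResourceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/myserver/databases/mySampleDatabase"

# List the metrics available for the database.
Get-AzMetricDefinition -ResourceId $dbResourceId

# Data space used ("storage") and data space allocated ("allocated_data_storage"), reported in bytes.
Get-AzMetric -ResourceId $dbResourceId -MetricName "storage" -AggregationType Maximum
Get-AzMetric -ResourceId $dbResourceId -MetricName "allocated_data_storage" -AggregationType Average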

Understanding types of storage space for a database


Understanding the following storage space quantities is important for managing the file space of a database.

Data space used
Definition: The amount of space used to store database data.
Comments: Generally, space used increases (decreases) on inserts (deletes). In some cases, the space used does not change on inserts or deletes depending on the amount and pattern of data involved in the operation and any fragmentation. For example, deleting one row from every data page does not necessarily decrease the space used.

Data space allocated
Definition: The amount of formatted file space made available for storing database data.
Comments: The amount of space allocated grows automatically, but never decreases after deletes. This behavior ensures that future inserts are faster since space does not need to be reformatted.

Data space allocated but unused
Definition: The difference between the amount of data space allocated and data space used.
Comments: This quantity represents the maximum amount of free space that can be reclaimed by shrinking database data files.

Data max size
Definition: The maximum amount of space that can be used for storing database data.
Comments: The amount of data space allocated cannot grow beyond the data max size.

The following diagram illustrates the relationship between the different types of storage space for a database.

Query a single database for storage space information


The following queries can be used to determine storage space quantities for a single database.
Database data space used
Modify the following query to return the amount of database data space used. Units of the query result are in
MB.
-- Connect to master
-- Database data space used in MB
SELECT TOP 1 storage_in_megabytes AS DatabaseDataSpaceUsedInMB
FROM sys.resource_stats
WHERE database_name = 'db1'
ORDER BY end_time DESC;

Database data space allocated and unused allocated space


Use the following query to return the amount of database data space allocated and the amount of unused space
allocated. Units of the query result are in MB.

-- Connect to database
-- Database data space allocated in MB and database data space allocated unused in MB
SELECT SUM(size/128.0) AS DatabaseDataSpaceAllocatedInMB,
SUM(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) AS DatabaseDataSpaceAllocatedUnusedInMB
FROM sys.database_files
GROUP BY type_desc
HAVING type_desc = 'ROWS';

Database data max size


Modify the following query to return the database data max size. Units of the query result are in bytes.

-- Connect to database
-- Database data max size in bytes
SELECT DATABASEPROPERTYEX('db1', 'MaxSizeInBytes') AS DatabaseDataMaxSizeInBytes;

Understanding types of storage space for an elastic pool


Understanding the following storage space quantities is important for managing the file space of an elastic
pool.

Data space used
Definition: The summation of data space used by all databases in the elastic pool.

Data space allocated
Definition: The summation of data space allocated by all databases in the elastic pool.

Data space allocated but unused
Definition: The difference between the amount of data space allocated and data space used by all databases in the elastic pool.
Comments: This quantity represents the maximum amount of space allocated for the elastic pool that can be reclaimed by shrinking database data files.

Data max size
Definition: The maximum amount of data space that can be used by the elastic pool for all of its databases.
Comments: The space allocated for the elastic pool should not exceed the elastic pool max size. If this condition occurs, then space allocated that is unused can be reclaimed by shrinking database data files.
NOTE
The error message "The elastic pool has reached its storage limit" indicates that the database objects have been allocated
enough space to meet the elastic pool storage limit, but there may be unused space in the data space allocation. Consider
increasing the elastic pool's storage limit, or as a short-term solution, freeing up data space using the Reclaim unused
allocated space section below. You should also be aware of the potential negative performance impact of shrinking
database files, see Index maintenance after shrink section below.

Query an elastic pool for storage space information


The following queries can be used to determine storage space quantities for an elastic pool.
Elastic pool data space used
Modify the following query to return the amount of elastic pool data space used. Units of the query result are in
MB.

-- Connect to master
-- Elastic pool data space used in MB
SELECT TOP 1 avg_storage_percent / 100.0 * elastic_pool_storage_limit_mb AS ElasticPoolDataSpaceUsedInMB
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'ep1'
ORDER BY end_time DESC;

Elastic pool data space allocated and unused allocated space


Modify the following examples to return a table listing the space allocated and unused allocated space for each
database in an elastic pool. The table orders databases from those databases with the greatest amount of
unused allocated space to the least amount of unused allocated space. Units of the query result are in MB.
The query results for determining the space allocated for each database in the pool can be added together to
determine the total space allocated for the elastic pool. The elastic pool space allocated should not exceed the
elastic pool max size.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December 2020. The
arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For more about
their compatibility, see Introducing the new Azure PowerShell Az module.

The PowerShell script requires SQL Server PowerShell module – see Download PowerShell module to install.
$resourceGroupName = "<resourceGroupName>"
$serverName = "<serverName>"
$poolName = "<poolName>"
$userName = "<userName>"
$password = "<password>"

# get list of databases in elastic pool


$databasesInPool = Get-AzSqlElasticPoolDatabase -ResourceGroupName $resourceGroupName `
-ServerName $serverName -ElasticPoolName $poolName
$databaseStorageMetrics = @()

# for each database in the elastic pool, get space allocated in MB and space allocated unused in MB
foreach ($database in $databasesInPool) {
$sqlCommand = "SELECT DB_NAME() as DatabaseName, `
SUM(size/128.0) AS DatabaseDataSpaceAllocatedInMB, `
SUM(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0) AS
DatabaseDataSpaceAllocatedUnusedInMB `
FROM sys.database_files `
GROUP BY type_desc `
HAVING type_desc = 'ROWS'"
$serverFqdn = "tcp:" + $serverName + ".database.windows.net,1433"
$databaseStorageMetrics = $databaseStorageMetrics +
(Invoke-Sqlcmd -ServerInstance $serverFqdn -Database $database.DatabaseName `
-Username $userName -Password $password -Query $sqlCommand)
}

# display databases in descending order of space allocated unused


Write-Output "`n" "ElasticPoolName: $poolName"
Write-Output $databaseStorageMetrics | Sort -Property DatabaseDataSpaceAllocatedUnusedInMB -Descending |
Format-Table

The following screenshot is an example of the output of the script:

Elastic pool data max size


Modify the following T-SQL query to return the last recorded elastic pool data max size. Units of the query result
are in MB.

-- Connect to master
-- Elastic pools max size in MB
SELECT TOP 1 elastic_pool_storage_limit_mb AS ElasticPoolMaxSizeInMB
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'ep1'
ORDER BY end_time DESC;

Reclaim unused allocated space


IMPORTANT
Shrink commands impact database performance while running, and if possible should be run during periods of low usage.

Shrink data files


Because of a potential impact to database performance, Azure SQL Database does not automatically shrink data
files. However, customers may shrink data files via self-service at a time of their choosing. This should not be a
regularly scheduled operation, but rather, a one-time event in response to a major reduction in data file used
space consumption.

TIP
It is not recommended to shrink data files if regular application workload will cause the files to grow to the same allocated
size again.

In Azure SQL Database, to shrink files you can use either DBCC SHRINKDATABASE or DBCC SHRINKFILE commands:
DBCC SHRINKDATABASE shrinks all data and log files in a database using a single command. The command
shrinks one data file at a time, which can take a long time for larger databases. It also shrinks the log file,
which is usually unnecessary because Azure SQL Database shrinks log files automatically as needed.
DBCC SHRINKFILE command supports more advanced scenarios:
It can target individual files as needed, rather than shrinking all files in the database.
Each DBCC SHRINKFILE command can run in parallel with other DBCC SHRINKFILE commands to shrink
multiple files at the same time and reduce the total time of shrink, at the expense of higher resource
usage and a higher chance of blocking user queries, if they are executing during shrink.
If the tail of the file does not contain data, it can reduce allocated file size much faster by specifying the
TRUNCATEONLY argument. This does not require data movement within the file.
For more information about these shrink commands, see DBCC SHRINKDATABASE and DBCC SHRINKFILE.
The following examples must be executed while connected to the target user database, not the master
database.
To use DBCC SHRINKDATABASE to shrink all data and log files in a given database:

-- Shrink database data space allocated.


DBCC SHRINKDATABASE (N'database_name');

In Azure SQL Database, a database may have one or more data files, created automatically as data grows. To
determine file layout of your database, including the used and allocated size of each file, query the
sys.database_files catalog view using the following sample script:

-- Review file properties, including file_id and name values to reference in shrink commands
SELECT file_id,
name,
CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
CAST(max_size AS bigint) * 8 / 1024. AS max_file_size_mb
FROM sys.database_files
WHERE type_desc IN ('ROWS','LOG');

You can execute a shrink against one file only via the DBCC SHRINKFILE command, for example:

-- Shrink database data file named 'data_0' by removing all unused space at the end of the file, if any.
DBCC SHRINKFILE ('data_0', TRUNCATEONLY);
GO

Be aware of the potential negative performance impact of shrinking database files, see the Index maintenance
after shrink section below.
Shrinking transaction log file
Unlike data files, Azure SQL Database automatically shrinks the transaction log file to avoid excessive space usage
that can lead to out-of-space errors. It is usually not necessary for customers to shrink the transaction log file.
In Premium and Business Critical service tiers, if the transaction log becomes large, it may significantly
contribute to local storage consumption toward the maximum local storage limit. If local storage consumption is
close to the limit, customers may choose to shrink transaction log using the DBCC SHRINKFILE command as
shown in the following example. This releases local storage as soon as the command completes, without waiting
for the periodic automatic shrink operation.
The following example should be executed while connected to the target user database, not the master database.

-- Shrink the database log file (always file_id 2), by removing all unused space at the end of the file, if
any.
DBCC SHRINKFILE (2, TRUNCATEONLY);

Auto-shrink
As an alternative to shrinking data files manually, auto-shrink can be enabled for a database. However, auto-shrink
can be less effective in reclaiming file space than DBCC SHRINKDATABASE and DBCC SHRINKFILE.
By default, auto-shrink is disabled, which is recommended for most databases. If it becomes necessary to enable
auto-shrink, it is recommended to disable it once space management goals have been achieved, instead of
keeping it enabled permanently. For more information, see Considerations for AUTO_SHRINK.
For example, auto-shrink can be helpful in the specific scenario where an elastic pool contains many databases
that experience significant growth and reduction in data file space used, causing the pool to approach its
maximum size limit. This is not a common scenario.
To enable auto-shrink, execute the following command while connected to your database (not the master
database).

-- Enable auto-shrink for the current database.


ALTER DATABASE CURRENT SET AUTO_SHRINK ON;

For more information about this command, see DATABASE SET options.
Index maintenance after shrink
After a shrink operation is completed against data files, indexes may become fragmented. This reduces their
performance optimization effectiveness for certain workloads, such as queries using large scans. If performance
degradation occurs after the shrink operation is complete, consider index maintenance to rebuild indexes. Keep
in mind that index rebuilds require free space in the database, and hence may cause the allocated space to
increase, counteracting the effect of shrink.
For more information about index maintenance, see Optimize index maintenance to improve query
performance and reduce resource consumption.

Shrink large databases


When database allocated space is in hundreds of gigabytes or higher, shrink may require a significant time to
complete, often measured in hours, or days for multi-terabyte databases. There are process optimizations and
best practices you can use to make this process more efficient and less impactful to application workloads.
Capture space usage baseline
Before starting shrink, capture the current used and allocated space in each database file by executing the
following space usage query:
SELECT file_id,
CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint) * 8 / 1024. AS space_used_mb,
CAST(size AS bigint) * 8 / 1024. AS space_allocated_mb,
CAST(max_size AS bigint) * 8 / 1024. AS max_size_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';

Once shrink has completed, you can execute this query again and compare the result to the initial baseline.
Truncate data files
It is recommended to first execute shrink for each data file with the TRUNCATEONLY parameter. This way, if there is
any allocated but unused space at the end of the file, it will be removed quickly and without any data movement.
The following sample command truncates data file with file_id 4:

DBCC SHRINKFILE (4, TRUNCATEONLY);

Once this command is executed for every data file, you can rerun the space usage query to see the reduction in
allocated space, if any. You can also view allocated space for the database in Azure portal.
Evaluate index page density
If truncating data files did not result in a sufficient reduction in allocated space, you will need to shrink data files.
However, as an optional but recommended step, you should first determine average page density for indexes in
the database. For the same amount of data, shrink will complete faster if page density is high, because it will
have to move fewer pages. If page density is low for some indexes, consider performing maintenance on these
indexes to increase page density before shrinking data files. This will also let shrink achieve a deeper reduction
in allocated storage space.
To determine page density for all indexes in the database, use the following query. Page density is reported in
the avg_page_space_used_in_percent column.

SELECT OBJECT_SCHEMA_NAME(ips.object_id) AS schema_name,


OBJECT_NAME(ips.object_id) AS object_name,
i.name AS index_name,
i.type_desc AS index_type,
ips.avg_page_space_used_in_percent,
ips.avg_fragmentation_in_percent,
ips.page_count,
ips.alloc_unit_type_desc,
ips.ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), default, default, default, 'SAMPLED') AS ips
INNER JOIN sys.indexes AS i
ON ips.object_id = i.object_id
AND
ips.index_id = i.index_id
ORDER BY page_count DESC;

If there are indexes with high page count that have page density lower than 60-70%, consider rebuilding or
reorganizing these indexes before shrinking data files.

NOTE
For larger databases, the query to determine page density may take a long time (hours) to complete. Additionally,
rebuilding or reorganizing large indexes also requires substantial time and resource usage. There is a tradeoff between
spending extra time on increasing page density on one hand, and reducing shrink duration and achieving higher space
savings on the other.
Following is a sample command to rebuild an index and increase its page density:

ALTER INDEX [index_name] ON [schema_name].[table_name] REBUILD
WITH (FILLFACTOR = 100, MAXDOP = 8,
      ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = NONE)),
      RESUMABLE = ON);

This command initiates an online and resumable index rebuild. This lets concurrent workloads continue using
the table while the rebuild is in progress, and lets you resume the rebuild if it gets interrupted for any reason.
However, this type of rebuild is slower than an offline rebuild, which blocks access to the table. If no other
workloads need to access the table during rebuild, set the ONLINE and RESUMABLE options to OFF and remove
the WAIT_AT_LOW_PRIORITY clause.
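For example, the following variant, shown only as a sketch with the same placeholder index and table names, performs an offline, non-resumable rebuild:

-- Offline, non-resumable rebuild; blocks access to the table while it runs.
-- Index, schema, and table names are placeholders.
ALTER INDEX [index_name] ON [schema_name].[table_name] REBUILD
WITH (FILLFACTOR = 100, MAXDOP = 8, ONLINE = OFF, RESUMABLE = OFF);
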
If there are multiple indexes with low page density, you may be able to rebuild them in parallel on multiple
database sessions to speed up the process. However, make sure that you are not approaching database resource
limits by doing so, and leave sufficient resource headroom for application workloads that may be running.
Monitor resource consumption (CPU, Data IO, Log IO) in Azure portal or using the sys.dm_db_resource_stats
view, and start additional parallel rebuilds only if resource utilization on each of these dimensions remains
substantially lower than 100%. If CPU, Data IO, or Log IO utilization is at 100%, you can scale up the database to
have more CPU cores and increase IO throughput. This may enable additional parallel rebuilds to complete the
process faster.
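As an illustration, the following query (a minimal sketch) returns recent resource utilization from sys.dm_db_resource_stats, which reports one row per 15-second interval covering roughly the last hour:

-- Recent CPU, data IO, and log write utilization, most recent intervals first.
SELECT TOP (20) end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
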
To learn more about index maintenance, see Optimize index maintenance to improve query performance and
reduce resource consumption.
Shrink multiple data files
As noted earlier, shrink with data movement is a long-running process. If the database has multiple data files,
you can speed up the process by shrinking multiple data files in parallel. You do this by opening multiple
database sessions, and using DBCC SHRINKFILE on each session with a different file_id value. Similar to
rebuilding indexes earlier, make sure you have sufficient resource headroom (CPU, Data IO, Log IO) before
starting each new parallel shrink command.
The following sample command shrinks data file with file_id 4, attempting to reduce its allocated size to 52000
MB by moving pages within the file:

DBCC SHRINKFILE (4, 52000);

If you want to reduce allocated space for the file to the minimum possible, execute the statement without
specifying the target size:

DBCC SHRINKFILE (4);

If a workload is running concurrently with shrink, it may start using the storage space freed by shrink before
shrink completes and truncates the file. In this case, shrink will not be able to reduce allocated space to the
specified target.
You can mitigate this by shrinking each file in smaller steps. This means that in the DBCC SHRINKFILE command,
you set the target that is slightly smaller than the current allocated space for the file, as seen in the results of
baseline space usage query. For example, if allocated space for file with file_id 4 is 200,000 MB, and you want to
shrink it to 100,000 MB, you can first set the target to 170,000 MB:

DBCC SHRINKFILE (4, 170000);

Once this command completes, it will have truncated the file and reduced its allocated size to 170,000 MB. You
can then repeat this command, setting target first to 140,000 MB, then to 110,000 MB, etc., until the file is
shrunk to the desired size. If the command completes but the file is not truncated, use smaller steps, for example
15,000 MB rather than 30,000 MB.
To monitor shrink progress for all concurrently running shrink sessions, you can use the following query:

SELECT command,
percent_complete,
status,
wait_resource,
session_id,
wait_type,
blocking_session_id,
cpu_time,
reads,
CAST(((DATEDIFF(s,start_time, GETDATE()))/3600) AS varchar) + ' hour(s), '
+ CAST((DATEDIFF(s,start_time, GETDATE())%3600)/60 AS varchar) + 'min, '
+ CAST((DATEDIFF(s,start_time, GETDATE())%60) AS varchar) + ' sec' AS running_time
FROM sys.dm_exec_requests AS r
LEFT JOIN sys.databases AS d
ON r.database_id = d.database_id
WHERE r.command IN ('DbccSpaceReclaim','DbccFilesCompact','DbccLOBCompact','DBCC');

NOTE
Shrink progress may be non-linear, and the value in the percent_complete column may remain virtually unchanged for
long periods of time, even though shrink is still in progress.

Once shrink has completed for all data files, rerun the space usage query (or check in the Azure portal) to determine
the resulting reduction in allocated storage size. If it is insufficient and there is still a large difference between
used space and allocated space, you can rebuild indexes as described earlier. This may temporarily increase
allocated space further; however, shrinking data files again after rebuilding indexes should result in a deeper
reduction in allocated space.

Transient errors during shrink


Occasionally, a shrink command may fail with various errors such as timeouts and deadlocks. In general, these
errors are transient, and do not occur again if the same command is repeated. If shrink fails with an error, the
progress it has made so far in moving data pages is retained, and the same shrink command can be executed
again to continue shrinking the file.
The following sample script shows how you can run shrink in a retry loop to automatically retry up to a
configurable number of times when a timeout error or a deadlock error occurs. This retry approach is applicable
to many other errors that may occur during shrink.
DECLARE @RetryCount int = 3; -- adjust to configure desired number of retries
DECLARE @Delay char(12);

-- Retry loop
WHILE @RetryCount >= 0
BEGIN
    BEGIN TRY
        DBCC SHRINKFILE (1); -- adjust file_id and other shrink parameters

        -- Exit retry loop on successful execution
        SELECT @RetryCount = -1;
    END TRY
    BEGIN CATCH
        -- Retry for the declared number of times without raising an error
        -- if deadlocked or timed out waiting for a lock
        IF ERROR_NUMBER() IN (1205, 49516) AND @RetryCount > 0
        BEGIN
            SELECT @RetryCount -= 1;

            PRINT CONCAT('Retry at ', SYSUTCDATETIME());

            -- Wait for a random period of time between 1 and 10 seconds before retrying
            SELECT @Delay = '00:00:0' + CAST(CAST(1 + RAND() * 8.999 AS decimal(5,3)) AS varchar(5));
            WAITFOR DELAY @Delay;
        END
        ELSE -- Raise error and exit loop
        BEGIN
            SELECT @RetryCount = -1;
            THROW;
        END
    END CATCH
END;

In addition to timeouts and deadlocks, shrink may encounter errors due to certain known issues.
The errors returned and mitigation steps are as follows:
Error number : 49503 , error message: %.*ls: Page %d:%d could not be moved because it is an off-row
persistent version store page. Page holdup reason: %ls. Page holdup timestamp: %I64d.
This error occurs when there are long running active transactions that have generated row versions in persistent
version store (PVS). The pages containing these row versions cannot be moved by shrink, hence it cannot make
progress and fails with this error.
To mitigate, you have to wait until these long running transactions have completed. Alternatively, you can
identify and terminate these long running transactions, but this can impact your application if it does not handle
transaction failures gracefully. One way to find long running transactions is by running the following query in
the database where you ran the shrink command:
-- Transactions sorted by duration
SELECT st.session_id,
dt.database_transaction_begin_time,
DATEDIFF(second, dt.database_transaction_begin_time, CURRENT_TIMESTAMP) AS
transaction_duration_seconds,
dt.database_transaction_log_bytes_used,
dt.database_transaction_log_bytes_reserved,
st.is_user_transaction,
st.open_transaction_count,
ib.event_type,
ib.parameters,
ib.event_info
FROM sys.dm_tran_database_transactions AS dt
INNER JOIN sys.dm_tran_session_transactions AS st
ON dt.transaction_id = st.transaction_id
OUTER APPLY sys.dm_exec_input_buffer(st.session_id, default) AS ib
WHERE dt.database_id = DB_ID()
ORDER BY transaction_duration_seconds DESC;

You can terminate a transaction by using the KILL command and specifying the associated session_id value
from query result:

KILL 4242; -- replace 4242 with the session_id value from query results

Caution

Terminating a transaction may negatively impact workloads.


Once long running transactions have been terminated or have completed, an internal background task will clean
up no longer needed row versions after some time. You can monitor PVS size to gauge cleanup progress, using
the following query. Run the query in the database where you ran the shrink command:

SELECT pvss.persistent_version_store_size_kb / 1024. / 1024 AS persistent_version_store_size_gb,
       pvss.online_index_version_store_size_kb / 1024. / 1024 AS online_index_version_store_size_gb,
pvss.current_aborted_transaction_count,
pvss.aborted_version_cleaner_start_time,
pvss.aborted_version_cleaner_end_time,
dt.database_transaction_begin_time AS oldest_transaction_begin_time,
asdt.session_id AS active_transaction_session_id,
asdt.elapsed_time_seconds AS active_transaction_elapsed_time_seconds
FROM sys.dm_tran_persistent_version_store_stats AS pvss
LEFT JOIN sys.dm_tran_database_transactions AS dt
ON pvss.oldest_active_transaction_id = dt.transaction_id
AND
pvss.database_id = dt.database_id
LEFT JOIN sys.dm_tran_active_snapshot_database_transactions AS asdt
ON pvss.min_transaction_timestamp = asdt.transaction_sequence_num
OR
pvss.online_index_min_transaction_timestamp = asdt.transaction_sequence_num
WHERE pvss.database_id = DB_ID();

Once PVS size reported in the persistent_version_store_size_gb column is substantially reduced compared to
its original size, rerunning shrink should succeed.
Error number : 5223 , error message: %.*ls: Empty page %d:%d could not be deallocated.
This error may occur if there are ongoing index maintenance operations such as ALTER INDEX . Retry the shrink
command after these operations are complete.
If this error persists, the associated index might have to be rebuilt. To find the index to rebuild, execute the
following query in the same database where you ran the shrink command:
SELECT OBJECT_SCHEMA_NAME(pg.object_id) AS schema_name,
OBJECT_NAME(pg.object_id) AS object_name,
i.name AS index_name,
p.partition_number
FROM sys.dm_db_page_info(DB_ID(), <file_id>, <page_id>, default) AS pg
INNER JOIN sys.indexes AS i
ON pg.object_id = i.object_id
AND
pg.index_id = i.index_id
INNER JOIN sys.partitions AS p
ON pg.partition_id = p.partition_id;

Before executing this query, replace the <file_id> and <page_id> placeholders with the actual values from the
error message you received. For example, if the message is Empty page 1:62669 could not be deallocated, then
<file_id> is 1 and <page_id> is 62669 .

Rebuild the index identified by the query, and retry the shrink command.
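For example, if the query identified a hypothetical index IX_MyIndex on table dbo.MyTable, the rebuild could look like the following sketch:

-- Placeholder names; substitute the schema, table, and index returned by the query above.
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable] REBUILD WITH (ONLINE = ON);
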
Error number : 5201 , error message: DBCC SHRINKDATABASE: File ID %d of database ID %d was skipped
because the file does not have enough free space to reclaim.
This error means that the data file cannot be shrunk further. You can move on to the next data file.

Next steps
For information about database max sizes, see:
Azure SQL Database vCore-based purchasing model limits for a single database
Resource limits for single databases using the DTU-based purchasing model
Azure SQL Database vCore-based purchasing model limits for elastic pools
Resource limits for elastic pools using the DTU-based purchasing model
Use Resource Health to troubleshoot connectivity
for Azure SQL Database and Azure SQL Managed
Instance
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Resource Health for Azure SQL Database and Azure SQL Managed Instance helps you diagnose and get support
when an Azure issue impacts your SQL resources. It informs you about the current and past health of your
resources and helps you mitigate issues. Resource Health provides technical support when you need help with
Azure service issues.

Health checks
Resource Health determines the health of your SQL resource by examining the success and failure of logins to
the resource. Currently, Resource Health for your SQL Database resource only examines login failures due to
system error and not user error. The Resource Health status is updated every 1 to 2 minutes.

Health states
Available
A status of Available means that Resource Health has not detected login failures due to system errors on your
SQL resource.
Degraded
A status of Degraded means that Resource Health has detected a majority of successful logins, but some
failures as well. These are most likely transient login errors. To reduce the impact of connection issues caused by
transient login errors, implement retry logic in your code.

Unavailable
A status of Unavailable means that Resource Health has detected consistent login failures to your SQL
resource. If your resource remains in this state for an extended period of time, contact support.

Unknown
The health status of Unknown indicates that Resource Health hasn't received information about this resource
for more than 10 minutes. Although this status isn't a definitive indication of the state of the resource, it is an
important data point in the troubleshooting process. If the resource is running as expected, the status of the
resource will change to Available after a few minutes. If you're experiencing problems with the resource, the
Unknown health status might suggest that an event in the platform is affecting the resource.
Historical information
You can access up to 14 days of health history in the Health History section of Resource Health. The section will
also contain the downtime reason (when available) for the downtimes reported by Resource Health. Currently,
Azure shows the downtime for your database resource at a two-minute granularity. The actual downtime is
likely less than a minute. The average is 8 seconds.
Downtime reasons
When your database experiences downtime, analysis is performed to determine a reason. When available, the
downtime reason is reported in the Health History section of Resource Health. Downtime reasons are typically
published within 45 minutes after an event.
Planned maintenance
The Azure infrastructure periodically performs planned maintenance – the upgrade of hardware or software
components in the datacenter. While the database undergoes maintenance, Azure SQL may terminate some
existing connections and refuse new ones. The login failures experienced during planned maintenance are
typically transient, and retry logic helps reduce the impact. If you continue to experience login errors, contact
support.
Reconfiguration
Reconfigurations are considered transient conditions and are expected from time to time. These events can be
triggered by load balancing or software/hardware failures. Any client production application that connects to a
cloud database should implement a robust connection retry logic, as it would help mitigate these situations and
should generally make the errors transparent to the end user.

Next steps
Learn more about retry logic for transient errors.
Troubleshoot, diagnose, and prevent SQL connection errors.
Learn more about configuring Resource Health alerts.
Get an overview of Resource Health.
Review Resource Health FAQ.
Migrate Azure SQL Database from the DTU-based
model to the vCore-based model
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Database


This article describes how to migrate your database in Azure SQL Database from the DTU-based purchasing
model to the vCore-based purchasing model.

Migrate a database
Migrating a database from the DTU-based purchasing model to the vCore-based purchasing model is similar to
scaling between service objectives in the Basic, Standard, and Premium service tiers, with similar duration and a
minimal downtime at the end of the migration process. A database migrated to the vCore-based purchasing
model can be migrated back to the DTU-based purchasing model at any time in the same fashion, with the
exception of databases migrated to the Hyperscale service tier.

Choose the vCore service tier and service objective


For most DTU to vCore migration scenarios, databases and elastic pools in the Basic and Standard service tiers
will map to the General Purpose service tier. Databases and elastic pools in the Premium service tier will map to
the Business Critical service tier. Depending on application scenario and requirements, the Hyperscale service
tier can often be used as the migration target for single databases in all DTU service tiers.
To choose the service objective, or compute size, for the migrated database in the vCore model, you can use a
simple but approximate rule of thumb: every 100 DTUs in the Basic or Standard tiers require at least 1 vCore,
and every 125 DTUs in the Premium tier require at least 1 vCore.
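For example, under this rule of thumb, a Standard S4 database (200 DTU) would need at least 200 / 100 = 2 vCores, and a Premium P11 database (1750 DTU) would need at least 1750 / 125 = 14 vCores.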

TIP
This rule is approximate because it does not consider the specific type of hardware used for the DTU database or elastic
pool.

In the DTU model, the system may select any available hardware configuration for your database or elastic pool.
Further, in the DTU model you have only indirect control over the number of vCores (logical CPUs) by choosing
higher or lower DTU or eDTU values.
In the vCore model, customers must make an explicit choice of both the hardware configuration and the number
of vCores (logical CPUs). While the DTU model does not offer these choices, the hardware type and the number of
logical CPUs used for every database and elastic pool are exposed via dynamic management views. This makes
it possible to determine the matching vCore service objective more precisely.
The following approach uses this information to determine a vCore service objective with a similar allocation of
resources, to obtain a similar level of performance after migration to the vCore model.
DTU to vCore mapping
A T-SQL query below, when executed in the context of a DTU database to be migrated, returns a matching
(possibly fractional) number of vCores in each hardware configuration in the vCore model. By rounding this
number to the closest number of vCores available for databases and elastic pools in each hardware
configuration in the vCore model, customers can choose the vCore service objective that is the closest match for
their DTU database or elastic pool.
Sample migration scenarios using this approach are described in the Examples section.
Execute this query in the context of the database to be migrated, rather than in the master database. When
migrating an elastic pool, execute the query in the context of any database in the pool.

WITH dtu_vcore_map AS
(
SELECT rg.slo_name,
CAST(DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS nvarchar(40)) COLLATE DATABASE_DEFAULT AS
dtu_service_tier,
CASE WHEN slo.slo_name LIKE '%SQLG4%' THEN 'Gen4'
WHEN slo.slo_name LIKE '%SQLGZ%' THEN 'Gen4'
WHEN slo.slo_name LIKE '%SQLG5%' THEN 'Gen5'
WHEN slo.slo_name LIKE '%SQLG6%' THEN 'Gen5'
WHEN slo.slo_name LIKE '%SQLG7%' THEN 'Gen5'
WHEN slo.slo_name LIKE '%GPGEN8%' THEN 'Gen5'
END COLLATE DATABASE_DEFAULT AS dtu_hardware_gen,
s.scheduler_count * CAST(rg.instance_cap_cpu/100. AS decimal(3,2)) AS dtu_logical_cpus,
CAST((jo.process_memory_limit_mb / s.scheduler_count) / 1024. AS decimal(4,2)) AS
dtu_memory_per_core_gb
FROM sys.dm_user_db_resource_governance AS rg
CROSS JOIN (SELECT COUNT(1) AS scheduler_count FROM sys.dm_os_schedulers WHERE status COLLATE
DATABASE_DEFAULT = 'VISIBLE ONLINE') AS s
CROSS JOIN sys.dm_os_job_object AS jo
CROSS APPLY (
SELECT UPPER(rg.slo_name) COLLATE DATABASE_DEFAULT AS slo_name
) slo
WHERE rg.dtu_limit > 0
AND
DB_NAME() COLLATE DATABASE_DEFAULT <> 'master'
AND
rg.database_id = DB_ID()
)
SELECT dtu_logical_cpus,
dtu_hardware_gen,
dtu_memory_per_core_gb,
dtu_service_tier,
CASE WHEN dtu_service_tier = 'Basic' THEN 'General Purpose'
WHEN dtu_service_tier = 'Standard' THEN 'General Purpose or Hyperscale'
WHEN dtu_service_tier = 'Premium' THEN 'Business Critical or Hyperscale'
END AS vcore_service_tier,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus
WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus * 0.7
END AS Gen4_vcores,
7 AS Gen4_memory_per_core_gb,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus * 1.7
WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus
END AS Gen5_vcores,
5.05 AS Gen5_memory_per_core_gb,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus
WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus * 0.8
END AS Fsv2_vcores,
1.89 AS Fsv2_memory_per_core_gb,
CASE WHEN dtu_hardware_gen = 'Gen4' THEN dtu_logical_cpus * 1.4
WHEN dtu_hardware_gen = 'Gen5' THEN dtu_logical_cpus * 0.9
END AS M_vcores,
29.4 AS M_memory_per_core_gb
FROM dtu_vcore_map;

Additional factors
Besides the number of vCores (logical CPUs) and the type of hardware, several other factors may influence the
choice of vCore service objective:
The mapping Transact-SQL query matches DTU and vCore service objectives in terms of their CPU capacity,
therefore the results will be more accurate for CPU-bound workloads.
For the same hardware type and the same number of vCores, IOPS and transaction log throughput resource
limits for vCore databases are often higher than for DTU databases. For IO-bound workloads, it may be
possible to lower the number of vCores in the vCore model to achieve the same level of performance. Actual
resource limits for DTU and vCore databases are exposed in the sys.dm_user_db_resource_governance view.
Comparing these values between the DTU database or pool to be migrated and a vCore database or pool
with an approximately matching service objective will help you select the vCore service objective more
precisely (a sample query follows this list).
The mapping query also returns the amount of memory per core for the DTU database or elastic pool to be
migrated, and for each hardware configuration in the vCore model. Ensuring similar or higher total memory
after migration to vCore is important for workloads that require a large memory data cache to achieve
sufficient performance, or workloads that require large memory grants for query processing. For such
workloads, depending on actual performance, it may be necessary to increase the number of vCores to get
sufficient total memory.
The historical resource utilization of the DTU database should be considered when choosing the vCore
service objective. DTU databases with consistently under-utilized CPU resources may need fewer vCores than
the number returned by the mapping query. Conversely, DTU databases where consistently high CPU
utilization causes inadequate workload performance may require more vCores than returned by the query.
If migrating databases with intermittent or unpredictable usage patterns, consider the use of Serverless
compute tier. Note that the max number of concurrent workers in serverless is 75% of the limit in
provisioned compute for the same number of max vCores configured. Also, the max memory available in
serverless is 3 GB times the maximum number of vCores configured, which is less than the per-core memory
for provisioned compute. For example, on Gen5 max memory is 120 GB when 40 max vCores are configured
in serverless, vs. 204 GB for a 40 vCore provisioned compute.
In the vCore model, the supported maximum database size may differ depending on hardware. For large
databases, check supported maximum sizes in the vCore model for single databases and elastic pools.
For elastic pools, the DTU and vCore models have differences in the maximum supported number of
databases per pool. This should be considered when migrating elastic pools with many databases.
Some hardware configurations may not be available in every region. Check availability under Hardware
configuration for SQL Database.
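The following query is a minimal sketch for that comparison: it returns the resource governance limits for the current database, and you can run it both in the DTU database to be migrated and in a vCore database with a candidate service objective, then compare the values.

-- Resource governance limits for the current database; compare the output
-- with the same query run against a candidate vCore database or pool.
SELECT *
FROM sys.dm_user_db_resource_governance
WHERE database_id = DB_ID();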

IMPORTANT
The DTU to vCore sizing guidelines above are provided to help in the initial estimation of the target database service
objective.
The optimal configuration of the target database is workload-dependent. Thus, to achieve the optimal price/performance
ratio after migration, you may need to leverage the flexibility of the vCore model to adjust the number of vCores,
hardware configuration, and service and compute tiers. You may also need to adjust database configuration parameters,
such as maximum degree of parallelism, and/or change the database compatibility level to enable recent improvements in
the database engine.

DTU to vCore migration examples

NOTE
The values in the examples below are for illustration purposes only. Actual values returned in described scenarios may be
different.

Migrating a Standard S9 database


The mapping query returns the following result (some columns not shown for brevity):
DTU_LOGICAL_CPUS | DTU_HARDWARE_GEN | DTU_MEMORY_PER_CORE_GB | GEN4_VCORES | GEN4_MEMORY_PER_CORE_GB | GEN5_VCORES | GEN5_MEMORY_PER_CORE_GB
24.00 | Gen5 | 5.40 | 16.800 | 7 | 24.000 | 5.05

We see that the DTU database has 24 logical CPUs (vCores), with 5.4 GB of memory per vCore, and is using
Gen5 hardware. The direct match to that is a General Purpose 24 vCore database on Gen5 hardware, i.e. the
GP_Gen5_24 vCore service objective.
Migrating a Standard S0 database
The mapping query returns the following result (some columns not shown for brevity):

DTU_LOGICAL_CPUS | DTU_HARDWARE_GEN | DTU_MEMORY_PER_CORE_GB | GEN4_VCORES | GEN4_MEMORY_PER_CORE_GB | GEN5_VCORES | GEN5_MEMORY_PER_CORE_GB
0.25 | Gen4 | 0.42 | 0.250 | 7 | 0.425 | 5.05

We see that the DTU database has the equivalent of 0.25 logical CPUs (vCores), with 0.42 GB of memory per
vCore, and is using Gen4 hardware. The smallest vCore service objectives in the Gen4 and Gen5 hardware
configurations, GP_Gen4_1 and GP_Gen5_2 , provide more compute resources than the Standard S0 database,
so a direct match is not possible. Since Gen4 hardware is being decommissioned, the GP_Gen5_2 option is
preferred. Additionally, if the workload is well-suited for the Serverless compute tier, then GP_S_Gen5_1 would
be a closer match.
Migrating a Premium P15 database
The mapping query returns the following result (some columns not shown for brevity):

DTU_LOGICAL_CPUS | DTU_HARDWARE_GEN | DTU_MEMORY_PER_CORE_GB | GEN4_VCORES | GEN4_MEMORY_PER_CORE_GB | GEN5_VCORES | GEN5_MEMORY_PER_CORE_GB
42.00 | Gen5 | 4.86 | 29.400 | 7 | 42.000 | 5.05

We see that the DTU database has 42 logical CPUs (vCores), with 4.86 GB of memory per vCore, and is using
Gen5 hardware. While there is not a vCore service objective with 42 cores, the BC_Gen5_40 service objective is
very close both in terms of CPU and memory capacity, and is a good match.
Migrating a Basic 200 eDTU elastic pool
The mapping query returns the following result (some columns not shown for brevity):

DTU_LOGICAL_CPUS | DTU_HARDWARE_GEN | DTU_MEMORY_PER_CORE_GB | GEN4_VCORES | GEN4_MEMORY_PER_CORE_GB | GEN5_VCORES | GEN5_MEMORY_PER_CORE_GB
4.00 | Gen5 | 5.40 | 2.800 | 7 | 4.000 | 5.05

We see that the DTU elastic pool has 4 logical CPUs (vCores), with 5.4 GB of memory per vCore, and is using
Gen5 hardware. The direct match in the vCore model is a GP_Gen5_4 elastic pool. However, this service
objective supports a maximum of 200 databases per pool, while the Basic 200 eDTU elastic pool supports up to
500 databases. If the elastic pool to be migrated has more than 200 databases, the matching vCore service
objective would have to be GP_Gen5_6 , which supports up to 500 databases.
Migrate geo-replicated databases
Migrating from the DTU-based model to the vCore-based purchasing model is similar to upgrading or
downgrading the geo-replication relationships between databases in the standard and premium service tiers.
During migration, you don't have to stop geo-replication, but you must follow these sequencing rules:
When upgrading, you must upgrade the secondary database first, and then upgrade the primary.
When downgrading, reverse the order: you must downgrade the primary database first, and then
downgrade the secondary.
When you're using geo-replication between two elastic pools, we recommend that you designate one pool as
the primary and the other as the secondary. In that case, when you're migrating elastic pools you should use the
same sequencing guidance. However, if you have elastic pools that contain both primary and secondary
databases, treat the pool with the higher utilization as the primary and follow the sequencing rules accordingly.
The following table provides guidance for specific migration scenarios:

CURRENT SERVICE TIER | TARGET SERVICE TIER | MIGRATION TYPE | USER ACTIONS
Standard | General Purpose | Lateral | Can migrate in any order, but need to ensure appropriate vCore sizing as described above
Premium | Business Critical | Lateral | Can migrate in any order, but need to ensure appropriate vCore sizing as described above
Standard | Business Critical | Upgrade | Must migrate secondary first
Business Critical | Standard | Downgrade | Must migrate primary first
Premium | General Purpose | Downgrade | Must migrate primary first
General Purpose | Premium | Upgrade | Must migrate secondary first
Business Critical | General Purpose | Downgrade | Must migrate primary first
General Purpose | Business Critical | Upgrade | Must migrate secondary first

Migrate failover groups


Migration of failover groups with multiple databases requires individual migration of the primary and
secondary databases. During that process, the same considerations and sequencing rules apply. After the
databases are converted to the vCore-based purchasing model, the failover group will remain in effect with the
same policy settings.
Create a geo -replication secondary database
You can create a geo-replication secondary database (a geo-secondary) only by using the same service tier as
you used for the primary database. For databases with a high log-generation rate, we recommend creating the
geo-secondary with the same compute size as the primary.
If you're creating a geo-secondary in the elastic pool for a single primary database, make sure the maxVCore
setting for the pool matches the primary database's compute size. If you're creating a geo-secondary for a
primary in another elastic pool, we recommend that the pools have the same maxVCore settings.

Use database copy to migrate from DTU to vCore


You can copy any database with a DTU-based compute size to a database with a vCore-based compute size
without restrictions or special sequencing as long as the target compute size supports the maximum database
size of the source database. Database copy creates a transactionally consistent snapshot of the data as of a point
in time after the copy operation starts. It doesn't synchronize data between the source and the target after that
point in time.
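For example, the following statement (a minimal sketch with placeholder database names, executed in the master database of the target server) copies a DTU-based database directly into a vCore-based service objective:

-- Copy a DTU-based database into a General Purpose 4 vCore target.
-- Database names are placeholders.
CREATE DATABASE [VCoreDatabaseCopy]
    AS COPY OF [DtuSourceDatabase]
    (SERVICE_OBJECTIVE = 'GP_Gen5_4');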

Next steps
For the specific compute sizes and storage size choices available for single databases, see SQL Database
vCore-based resource limits for single databases.
For the specific compute sizes and storage size choices available for elastic pools, see SQL Database vCore-
based resource limits for elastic pools.
Scale single database resources in Azure SQL
Database
7/12/2022 • 11 minutes to read

This article describes how to scale the compute and storage resources available for an Azure SQL Database in
the provisioned compute tier. Alternatively, the serverless compute tier provides compute autoscaling and bills
per second for compute used.
After initially picking the number of vCores or DTUs, you can scale a single database up or down dynamically
based on actual experience using:
Transact-SQL
Azure portal
PowerShell
Azure CLI
REST API
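For example, the following Transact-SQL statement (a minimal sketch with a placeholder database name) scales a database to the General Purpose tier with 4 vCores on standard-series (Gen5) hardware:

-- Scale the database to General Purpose, 4 vCores, standard-series (Gen5) hardware.
-- The database name is a placeholder.
ALTER DATABASE [MyDatabase]
    MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_4');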

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

Impact
Changing the service tier or compute size of a database mainly involves the service performing the following steps:
1. Create a new compute instance for the database.
A new compute instance is created with the requested service tier and compute size. For some
combinations of service tier and compute size changes, a replica of the database must be created in the
new compute instance, which involves copying data and can strongly influence the overall latency.
Regardless, the database remains online during this step, and connections continue to be directed to the
database in the original compute instance.
2. Switch routing of connections to a new compute instance.
Existing connections to the database in the original compute instance are dropped. Any new connections
are established to the database in the new compute instance. For some combinations of service tier and
compute size changes, database files are detached and reattached during the switch. Regardless, the
switch can result in a brief service interruption when the database is unavailable generally for less than
30 seconds and often for only a few seconds. If there are long-running transactions running when
connections are dropped, the duration of this step may take longer in order to recover aborted
transactions. Accelerated Database Recovery can reduce the impact from aborting long running
transactions.

IMPORTANT
No data is lost during any step in the workflow. Make sure that you have implemented some retry logic in the
applications and components that are using Azure SQL Database while the service tier is changed.
Latency
The estimated latency to change the service tier, scale the compute size of a single database or elastic pool,
move a database in/out of an elastic pool, or move a database between elastic pools is parameterized as follows:

SERVICE TIER | BASIC SINGLE DATABASE, STANDARD (S0-S1) | BASIC ELASTIC POOL, STANDARD (S2-S12), GENERAL PURPOSE SINGLE DATABASE OR ELASTIC POOL | PREMIUM OR BUSINESS CRITICAL SINGLE DATABASE OR ELASTIC POOL | HYPERSCALE
Basic single database, Standard (S0-S1) | Constant time latency independent of space used. Typically, less than 5 minutes. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used.
Basic elastic pool, Standard (S2-S12), General Purpose single database or elastic pool | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | For single databases, constant time latency independent of space used. Typically, less than 5 minutes for single databases. For elastic pools, proportional to the number of databases. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used.
Premium or Business Critical single database or elastic pool | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used.
Hyperscale | N/A | N/A | N/A | Constant time latency independent of space used. Typically, less than 2 minutes.

NOTE
Additionally, for Standard (S2-S12) and General Purpose databases, latency for moving a database in/out of an elastic pool
or between elastic pools will be proportional to database size if the database is using Premium File Share (PFS) storage.
To determine if a database is using PFS storage, execute the following query in the context of the database. If the value in
the AccountType column is PremiumFileStorage or PremiumFileStorage-ZRS , the database is using PFS storage.
SELECT s.file_id,
s.type_desc,
s.name,
FILEPROPERTYEX(s.name, 'AccountType') AS AccountType
FROM sys.database_files AS s
WHERE s.type_desc IN ('ROWS', 'LOG');

NOTE
The zone redundant property will remain the same by default when scaling from the Business Critical to the General
Purpose tier. Latency for this downgrade when zone redundancy is enabled as well as latency for switching to zone
redundancy for the General Purpose tier will be proportional to database size.

TIP
To monitor in-progress operations, see: Manage operations using the SQL REST API, Manage operations using CLI,
Monitor operations using T-SQL and these two PowerShell commands: Get-AzSqlDatabaseActivity and Stop-
AzSqlDatabaseActivity.

Cancelling changes
A service tier change or compute rescaling operation can be canceled.
The Azure portal
In the database overview blade, navigate to Notifications and click on the tile indicating there's an ongoing
operation:

Next, click on the button labeled Cancel this operation.

PowerShell
From a PowerShell command prompt, set the $resourceGroupName , $serverName , and $databaseName , and then
run the following command:
$operationName = (az sql db op list --resource-group $resourceGroupName --server $serverName --database $databaseName --query "[?state=='InProgress'].name" --out tsv)
if (-not [string]::IsNullOrEmpty($operationName)) {
    (az sql db op cancel --resource-group $resourceGroupName --server $serverName --database $databaseName --name $operationName)
    "Operation " + $operationName + " has been canceled"
}
else {
    "No service tier change or compute rescaling operation found"
}

Additional considerations
If you're upgrading to a higher service tier or compute size, the database max size doesn't increase unless
you explicitly specify a larger size (maxsize).
To downgrade a database, the database used space must be smaller than the maximum allowed size of the
target service tier and compute size.
When downgrading from Premium to the Standard tier, an extra storage cost applies if both (1) the max
size of the database is supported in the target compute size, and (2) the max size exceeds the included
storage amount of the target compute size. For example, if a P1 database with a max size of 500 GB is
downsized to S3, then an extra storage cost applies since S3 supports a max size of 1 TB and its included
storage amount is only 250 GB. So, the extra storage amount is 500 GB – 250 GB = 250 GB. For pricing of
extra storage, see Azure SQL Database pricing. If the actual amount of space used is less than the included
storage amount, then this extra cost can be avoided by reducing the database max size to the included
amount.
When upgrading a database with geo-replication enabled, upgrade its secondary databases to the desired
service tier and compute size before upgrading the primary database (general guidance for best
performance). When upgrading to a different edition, it's a requirement that the secondary database is
upgraded first.
When downgrading a database with geo-replication enabled, downgrade its primary databases to the
desired service tier and compute size before downgrading the secondary database (general guidance for
best performance). When downgrading to a different edition, it's a requirement that the primary database is
downgraded first.
The restore service offerings are different for the various service tiers. If you're downgrading to the Basic tier,
there's a lower backup retention period. See Azure SQL Database Backups.
The new properties for the database aren't applied until the changes are complete.
When data copying is required to scale a database (see Latency) when changing the service tier, high
resource utilization concurrent to the scaling operation may cause longer scaling times. With Accelerated
Database Recovery (ADR), rollback of long running transactions is not a significant source of delay, but high
concurrent resource usage may leave less compute, storage, and network bandwidth resources for scaling,
particularly for smaller compute sizes.

Billing
You're billed for each hour a database exists using the highest service tier + compute size that applied during
that hour, regardless of usage or whether the database was active for less than an hour. For example, if you
create a single database and delete it five minutes later your bill reflects a charge for one database hour.

Change storage size


vCore -based purchasing model
Storage can be provisioned up to the data storage max size limit using 1-GB increments. The minimum
configurable data storage is 1 GB. For data storage max size limits in each service objective, see resource
limit documentation pages for Resource limits for single databases using the vCore purchasing model and
Resource limits for single databases using the DTU purchasing model.
Data storage for a single database can be provisioned by increasing or decreasing its max size using the
Azure portal, Transact-SQL, PowerShell, Azure CLI, or REST API (a sample Transact-SQL statement follows this
list). If the max size value is specified in bytes, it must be a multiple of 1 GB (1073741824 bytes).
The amount of data that can be stored in the data files of a database is limited by the configured data storage
max size. In addition to that storage, Azure SQL Database automatically allocates 30% more storage to be
used for the transaction log.
Azure SQL Database automatically allocates 32 GB per vCore for the tempdb database. tempdb is located on
the local SSD storage in all service tiers.
The price of storage for a single database or an elastic pool is the sum of data storage and transaction log
storage amounts multiplied by the storage unit price of the service tier. The cost of tempdb is included in the
price. For details on storage price, see Azure SQL Database pricing.
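As an illustration, the following Transact-SQL statement (a minimal sketch with a placeholder database name) sets the data storage max size of a database to 250 GB:

-- Set the data storage max size; the value must correspond to a supported
-- max size for the database's service objective.
ALTER DATABASE [MyDatabase] MODIFY (MAXSIZE = 250 GB);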

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

DTU -based purchasing model


The DTU price for a single database includes a certain amount of storage at no additional cost. Extra storage
beyond the included amount can be provisioned for an additional cost up to the max size limit in increments
of 250 GB up to 1 TB, and then in increments of 256 GB beyond 1 TB. For included storage amounts and max
size limits, see Single database: Storage sizes and compute sizes.
Extra storage for a single database can be provisioned by increasing its max size using the Azure portal,
Transact-SQL, PowerShell, the Azure CLI, or the REST API.
The price of extra storage for a single database is the extra storage amount multiplied by the extra storage
unit price of the service tier. For details on the price of extra storage, see Azure SQL Database pricing.

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

Geo -replicated database


To change the database size of a replicated secondary database, change the size of the primary database. This
change will then be replicated and implemented on the secondary database as well.

P11 and P15 constraints when max size greater than 1 TB


More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China
North, Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is
limited to 1 TB. The following considerations and limitations apply to P11 and P15 databases with a maximum
size greater than 1 TB:
If the max size for a P11 or P15 database was ever set to a value greater than 1 TB, then it can only be
restored or copied to a P11 or P15 database. Subsequently, the database can be rescaled to a different
compute size provided the amount of space allocated at the time of the rescaling operation doesn't exceed
max size limits of the new compute size.
For active geo-replication scenarios:
Setting up a geo-replication relationship: If the primary database is P11 or P15, the secondary(ies)
must also be P11 or P15. Lower compute size are rejected as secondaries since they aren't capable of
supporting more than 1 TB.
Upgrading the primary database in a geo-replication relationship: Changing the maximum size to
more than 1 TB on a primary database triggers the same change on the secondary database. Both
upgrades must be successful for the change on the primary to take effect. Region limitations for the
more than 1-TB option apply. If the secondary is in a region that doesn't support more than 1 TB, the
primary isn't upgraded.
Using the Import/Export service for loading P11/P15 databases with more than 1 TB isn't supported. Use
SqlPackage.exe to import and export data.

Next steps
For overall resource limits, see Azure SQL Database vCore-based resource limits - single databases and Azure
SQL Database DTU-based resource limits - single databases.
Scale elastic pool resources in Azure SQL Database
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Database


This article describes how to scale the compute and storage resources available for elastic pools and pooled
databases in Azure SQL Database.

Change compute resources (vCores or DTUs)


After initially picking the number of vCores or eDTUs, you can scale an elastic pool up or down dynamically
based on actual experience using:
Transact-SQL
Azure portal
PowerShell
Azure CLI
REST API
Impact of changing service tier or rescaling compute size
Changing the service tier or compute size of an elastic pool follows a similar pattern as for single databases and
mainly involves the service performing the following steps:
1. Create new compute instance for the elastic pool
A new compute instance for the elastic pool is created with the requested service tier and compute size.
For some combinations of service tier and compute size changes, a replica of each database must be
created in the new compute instance which involves copying data and can strongly influence the overall
latency. Regardless, the databases remain online during this step, and connections continue to be directed
to the databases in the original compute instance.
2. Switch routing of connections to new compute instance
Existing connections to the databases in the original compute instance are dropped. Any new connections
are established to the databases in the new compute instance. For some combinations of service tier and
compute size changes, database files are detached and reattached during the switch. Regardless, the
switch can result in a brief service interruption when databases are unavailable generally for less than 30
seconds and often for only a few seconds. If there are long running transactions running when
connections are dropped, the duration of this step may take longer in order to recover aborted
transactions. Accelerated Database Recovery can reduce the impact from aborting long running
transactions.

IMPORTANT
No data is lost during any step in the workflow.

Latency of changing service tier or rescaling compute size


The estimated latency to change the service tier, scale the compute size of a single database or elastic pool,
move a database in/out of an elastic pool, or move a database between elastic pools is parameterized as follows:
SERVICE TIER | BASIC SINGLE DATABASE, STANDARD (S0-S1) | BASIC ELASTIC POOL, STANDARD (S2-S12), GENERAL PURPOSE SINGLE DATABASE OR ELASTIC POOL | PREMIUM OR BUSINESS CRITICAL SINGLE DATABASE OR ELASTIC POOL | HYPERSCALE
Basic single database, Standard (S0-S1) | Constant time latency independent of space used. Typically, less than 5 minutes. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used.
Basic elastic pool, Standard (S2-S12), General Purpose single database or elastic pool | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | For single databases, constant time latency independent of space used. Typically, less than 5 minutes for single databases. For elastic pools, proportional to the number of databases. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used.
Premium or Business Critical single database or elastic pool | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used. | Latency proportional to database space used due to data copying. Typically, less than 1 minute per GB of space used.
Hyperscale | N/A | N/A | N/A | Constant time latency independent of space used. Typically, less than 2 minutes.

NOTE
In the case of changing the service tier or rescaling compute for an elastic pool, the summation of space used across
all databases in the pool should be used to calculate the estimate.
In the case of moving a database to/from an elastic pool, only the space used by the database impacts the latency, not
the space used by the elastic pool.
For Standard and General Purpose elastic pools, latency of moving a database in/out of an elastic pool or between
elastic pools will be proportional to database size if the elastic pool is using Premium File Share (PFS) storage. To
determine if a pool is using PFS storage, execute the following query in the context of any database in the pool. If the
value in the AccountType column is PremiumFileStorage or PremiumFileStorage-ZRS , the pool is using PFS
storage.
SELECT s.file_id,
s.type_desc,
s.name,
FILEPROPERTYEX(s.name, 'AccountType') AS AccountType
FROM sys.database_files AS s
WHERE s.type_desc IN ('ROWS', 'LOG');

TIP
To monitor in-progress operations, see: Manage operations using the SQL REST API, Manage operations using CLI,
Monitor operations using T-SQL and these two PowerShell commands: Get-AzSqlDatabaseActivity and Stop-
AzSqlDatabaseActivity.

Additional considerations when changing service tier or rescaling compute size


When downsizing vCores or eDTUs for an elastic pool, the pool used space must be smaller than the
maximum allowed size of the target service tier and pool eDTUs.
When rescaling eDTUs for an elastic pool, an extra storage cost applies if (1) the storage max size of the pool
is supported by the target pool, and (2) the storage max size exceeds the included storage amount of the
target pool. For example, if a 100 eDTU Standard pool with a max size of 100 GB is downsized to a 50 eDTU
Standard pool, then an extra storage cost applies since the target pool supports a max size of 100 GB and its
included storage amount is only 50 GB. So, the extra storage amount is 100 GB – 50 GB = 50 GB. For pricing
of extra storage, see SQL Database pricing. If the actual amount of space used is less than the included
storage amount, then this extra cost can be avoided by reducing the database max size to the included
amount.
Billing during rescaling
You are billed for each hour a database exists using the highest service tier + compute size that applied during
that hour, regardless of usage or whether the database was active for less than an hour. For example, if you
create a single database and delete it five minutes later your bill reflects a charge for one database hour.

Change elastic pool storage size


IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

vCore -based purchasing model


Storage can be provisioned up to the max size limit:
For storage in the Standard or General Purpose service tiers, increase or decrease size in 10-GB
increments
For storage in the Premium or Business Critical service tiers, increase or decrease size in 250-GB
increments
Storage for an elastic pool can be provisioned by increasing or decreasing its max size.
The price of storage for an elastic pool is the storage amount multiplied by the storage unit price of the
service tier. For details on the price of extra storage, see SQL Database pricing.
IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

DTU -based purchasing model


The eDTU price for an elastic pool includes a certain amount of storage at no additional cost. Extra storage
beyond the included amount can be provisioned for an additional cost up to the max size limit in increments
of 250 GB up to 1 TB, and then in increments of 256 GB beyond 1 TB. For included storage amounts and max
size limits, see Resource limits for elastic pools using the DTU purchasing model or Resource limits for
elastic pools using the vCore purchasing model.
Extra storage for an elastic pool can be provisioned by increasing its max size using the Azure portal,
PowerShell, the Azure CLI, or the REST API.
The price of extra storage for an elastic pool is the extra storage amount multiplied by the extra storage unit
price of the service tier. For details on the price of extra storage, see SQL Database pricing.

IMPORTANT
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

Next steps
For overall resource limits, see SQL Database vCore-based resource limits - elastic pools and SQL Database
DTU-based resource limits - elastic pools.
Manage elastic pools in Azure SQL Database
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


With an elastic pool, you determine the amount of resources that the elastic pool requires to handle the
workload of its databases, and the amount of resources for each pooled database.

Azure portal
All pool settings can be found in one place: the Configure pool blade. To get here, find an elastic pool in the
Azure portal and click Configure pool either from the top of the blade or from the resource menu on the left.
From here you can make any combination of the following changes and save them all in one batch:
1. Change the service tier of the pool
2. Scale the performance (DTU or vCores) and storage up or down
3. Add or remove databases to/from the pool
4. Set a min (guaranteed) and max performance limit for the databases in the pools
5. Review the cost summary to view any changes to your bill as a result of your new selections

PowerShell
NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.

To create and manage SQL Database elastic pools and pooled databases with Azure PowerShell, use the
following PowerShell cmdlets. If you need to install or upgrade PowerShell, see Install Azure PowerShell module.
To create and manage the servers for an elastic pool, see Create and manage servers. To create and manage
firewall rules, see Create and manage firewall rules using PowerShell.

TIP
For PowerShell example scripts, see Create elastic pools and move databases between pools and out of a pool using
PowerShell and Use PowerShell to monitor and scale a SQL elastic pool in Azure SQL Database.

New-AzSqlElasticPool: Creates an elastic pool.

Get-AzSqlElasticPool: Gets elastic pools and their property values.

Set-AzSqlElasticPool: Modifies properties of an elastic pool. For example, use the StorageMB property to modify the max storage of an elastic pool.

Remove-AzSqlElasticPool: Deletes an elastic pool.

Get-AzSqlElasticPoolActivity: Gets the status of operations on an elastic pool.

New-AzSqlDatabase: Creates a new database in an existing pool or as a single database.

Get-AzSqlDatabase: Gets one or more databases.

Set-AzSqlDatabase: Sets properties for a database, or moves an existing database into, out of, or between elastic pools.

Remove-AzSqlDatabase: Removes a database.

TIP
Creation of many databases in an elastic pool can take time when done using the portal or PowerShell cmdlets that create
only a single database at a time. To automate creation into an elastic pool, see CreateOrUpdateElasticPoolAndPopulate.
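As an illustrative sketch (not the only possible sequence), the following shows how a few of these cmdlets fit together: create a Standard pool, create a new database directly in it, and move an existing single database into it. The resource group, server, pool, and database names are placeholders.

# Minimal sketch: create a pool, create a pooled database, and move an existing database into the pool.
New-AzSqlElasticPool -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -ElasticPoolName "mypool" -Edition "Standard" -Dtu 100 -DatabaseDtuMin 0 -DatabaseDtuMax 100

New-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -DatabaseName "mynewpooleddb" -ElasticPoolName "mypool"

Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -DatabaseName "myexistingdb" -ElasticPoolName "mypool"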

Azure CLI
To create and manage SQL Database elastic pools with Azure CLI, use the following Azure CLI SQL Database
commands. Use the Cloud Shell to run Azure CLI in your browser, or install it on macOS, Linux, or Windows.
TIP
For Azure CLI example scripts, see Use CLI to move a database in SQL Database in a SQL elastic pool and Use Azure CLI
to scale a SQL elastic pool in Azure SQL Database.

az sql elastic-pool create: Creates an elastic pool.

az sql elastic-pool list: Returns a list of elastic pools in a server.

az sql elastic-pool list-dbs: Returns a list of databases in an elastic pool.

az sql elastic-pool list-editions: Also includes available pool DTU settings, storage limits, and per-database settings. In order to reduce verbosity, additional storage limits and per-database settings are hidden by default.

az sql elastic-pool update: Updates an elastic pool.

az sql elastic-pool delete: Deletes the elastic pool.

Transact-SQL (T-SQL)
To create and move databases within existing elastic pools or to return information about an SQL Database
elastic pool with Transact-SQL, use the following T-SQL commands. You can issue these commands using the
Azure portal, SQL Server Management Studio, Visual Studio Code, or any other program that can connect to a
server and pass Transact-SQL commands. To create and manage firewall rules using T-SQL, see Manage firewall
rules using Transact-SQL.

IMPORTANT
You cannot create, update, or delete an Azure SQL Database elastic pool using Transact-SQL. You can add or remove
databases from an elastic pool, and you can use DMVs to return information about existing elastic pools.

CREATE DATABASE (Azure SQL Database): Creates a new database in an existing pool or as a single database. You must be connected to the master database to create a new database.

ALTER DATABASE (Azure SQL Database): Moves a database into, out of, or between elastic pools.

DROP DATABASE (Transact-SQL): Deletes a database.

sys.elastic_pool_resource_stats (Azure SQL Database): Returns resource usage statistics for all the elastic pools on a server. For each elastic pool, there is one row for each 15-second reporting window (four rows per minute). This includes CPU, IO, log, storage consumption, and concurrent request/session utilization by all databases in the pool.

sys.database_service_objectives (Azure SQL Database): Returns the edition (service tier), service objective (pricing tier), and elastic pool name, if any, for a database in SQL Database or Azure Synapse Analytics. If logged on to the master database in a server, returns information on all databases. For Azure Synapse Analytics, you must be connected to the master database.

REST API
To create and manage SQL Database elastic pools and pooled databases, use these REST API requests.

Elastic pools - Create or update: Creates a new elastic pool or updates an existing elastic pool.

Elastic pools - Delete: Deletes the elastic pool.

Elastic pools - Get: Gets an elastic pool.

Elastic pools - List by server: Returns a list of elastic pools in a server.

Elastic pools - Update: Updates an existing elastic pool.

Elastic pool activities: Returns elastic pool activities.

Elastic pool database activities: Returns activity on databases inside of an elastic pool.

Databases - Create or update: Creates a new database or updates an existing database.

Databases - Get: Gets a database.

Databases - List by elastic pool: Returns a list of databases in an elastic pool.

Databases - List by server: Returns a list of databases in a server.

Databases - Update: Updates an existing database.

Next steps
To learn more about design patterns for SaaS applications using elastic pools, see Design Patterns for Multi-
tenant SaaS Applications with Azure SQL Database.
For a SaaS tutorial using elastic pools, see Introduction to the Wingtip SaaS application.
Resource management in dense elastic pools
7/12/2022 • 15 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Azure SQL Database elastic pools are a cost-effective solution for managing many databases with varying
resource usage. All databases in an elastic pool share the same allocation of resources, such as CPU, memory,
worker threads, storage space, and tempdb, on the assumption that only a subset of databases in the pool will
use compute resources at any given time. This assumption allows elastic pools to be cost-effective. Instead
of paying for all resources each individual database could potentially need, customers pay for a much smaller
set of resources, shared among all databases in the pool.

Resource governance
Resource sharing requires the system to carefully control resource usage to minimize the "noisy neighbor"
effect, where a database with high resource consumption affects other databases in the same elastic pool. Azure
SQL Database achieves these goals by implementing resource governance. At the same time, the system must
provide sufficient resources for features such as high availability and disaster recovery (HADR), backup and
restore, monitoring, Query Store, Automatic tuning, etc. to function reliably.
The primary design goal of elastic pools is to be cost-effective. For this reason, the system intentionally allows
customers to create dense pools, that is, pools with a number of databases approaching or at the maximum
allowed, but with a moderate allocation of compute resources. For the same reason, the system doesn't reserve
all potentially needed resources for its internal processes, but allows resource sharing between internal
processes and user workloads.
This approach allows customers to use dense elastic pools to achieve adequate performance and major cost
savings. However, if the workload against many databases in a dense pool is sufficiently intense, resource
contention becomes significant. Resource contention reduces user workload performance, and can negatively
impact internal processes.

IMPORTANT
In dense pools with many active databases, it may not be feasible to increase the number of databases in the pool up to
the maximums documented for DTU and vCore elastic pools.
The number of databases that can be placed in dense pools without causing resource contention and performance
problems depends on the number of concurrently active databases, and on resource consumption by user workloads in
each database. This number can change over time as user workloads change.
Additionally, if the min vCores per database, or min DTUs per database setting is set to a value greater than 0, the
maximum number of databases in the pool will be implicitly limited. For more information, see Database properties for
pooled vCore databases and Database properties for pooled DTU databases.

When resource contention occurs in a densely packed pool, customers can choose one or more of the following
actions to mitigate it:
Tune query workload to reduce resource consumption, or spread resource consumption across multiple
databases over time.
Reduce pool density by moving some databases to another pool, or by making them standalone databases.
Scale up the pool to get more resources.
For suggestions on how to implement the last two actions, see Operational recommendations later in this
article. Reducing resource contention benefits both user workloads and internal processes, and lets the system
reliably maintain the expected level of service.

Monitoring resource consumption


To avoid performance degradation due to resource contention, customers using dense elastic pools should
proactively monitor resource consumption, and take timely action if increasing resource contention starts
affecting workloads. Continuous monitoring is important because resource usage in a pool changes over time,
due to changes in user workload, changes in data volumes and distribution, changes in pool density, and
changes in the Azure SQL Database service.
Azure SQL Database provides several metrics that are relevant for this type of monitoring. Exceeding the
recommended average value for each metric indicates resource contention in the pool, and should be addressed
using one of the actions mentioned earlier.
To send an alert when pool resource utilization (CPU, data IO, log IO, workers, etc.) exceeds a threshold, consider
creating alerts via the Azure portal or the Add-AzMetricAlertRuleV2 PowerShell cmdlet. When monitoring elastic
pools, consider also creating alerts for individual databases in the pool if needed in your scenario. For a sample
scenario of monitoring elastic pools, see Monitor and manage performance of Azure SQL Database in a multi-
tenant SaaS app.
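As a hedged sketch of such an alert, the following PowerShell creates a metric alert rule that fires when average pool CPU exceeds 70% over a five-minute window. The resource ID, action group ID, and the metric name ("cpu_percent") are assumptions to verify against your environment.

# Sketch: alert when average elastic pool CPU exceeds 70% (IDs, names, and metric name are placeholders).
$poolId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/myserver/elasticPools/mypool"
$window = New-TimeSpan -Minutes 5
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "cpu_percent" -TimeAggregation Average `
    -Operator GreaterThan -Threshold 70
Add-AzMetricAlertRuleV2 -Name "pool-cpu-alert" -ResourceGroupName "myResourceGroup" `
    -TargetResourceId $poolId -WindowSize $window -Frequency $window `
    -Condition $criteria -ActionGroupId "<action-group-resource-id>" -Severity 3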

avg_instance_cpu_percent
Description: CPU utilization of the SQL process associated with an elastic pool, as measured by the underlying operating system. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named sqlserver_process_core_percent, and can be viewed in Azure portal. This value is the same for every database in the same elastic pool.
Recommended average value: Below 70%. Occasional short spikes up to 90% may be acceptable.

max_worker_percent
Description: Worker thread utilization. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of worker threads at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named workers_percent, and can be viewed in Azure portal.
Recommended average value: Below 80%. Spikes up to 100% will cause connection attempts and queries to fail.

avg_data_io_percent
Description: IOPS utilization for read and write physical IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of IOPS at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named physical_data_read_percent, and can be viewed in Azure portal.
Recommended average value: Below 80%. Occasional short spikes up to 100% may be acceptable.

avg_log_write_percent
Description: Throughput utilization for transaction log write IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the log throughput at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the sys.dm_db_resource_stats view in every database, and in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named log_write_percent, and can be viewed in Azure portal. When this metric is close to 100%, all database modifications (INSERT, UPDATE, DELETE, MERGE statements, SELECT … INTO, BULK INSERT, etc.) will be slower.
Recommended average value: Below 90%. Occasional short spikes up to 100% may be acceptable.

oom_per_second
Description: The rate of out-of-memory (OOM) errors in an elastic pool, which is an indicator of memory pressure. Available in the sys.dm_resource_governor_resource_pools_history_ex view. See Examples for a sample query to calculate this metric. For more information, see resource limits for elastic pools using DTUs or elastic pools using vCores, and Troubleshoot out of memory errors with Azure SQL Database. If you encounter out of memory errors, review sys.dm_os_out_of_memory_events.
Recommended average value: 0

avg_storage_percent
Description: Total storage space used by data in all databases within an elastic pool. Does not include empty space in database files. Available in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named storage_percent, and can be viewed in Azure portal.
Recommended average value: Below 80%. Can approach 100% for pools with no data growth.

avg_allocated_storage_percent
Description: Total storage space used by database files in storage in all databases within an elastic pool. Includes empty space in database files. Available in the sys.elastic_pool_resource_stats view in the master database. This metric is also emitted to Azure Monitor, where it is named allocated_data_storage_percent, and can be viewed in Azure portal.
Recommended average value: Below 90%. Can approach 100% for pools with no data growth.

tempdb_log_used_percent
Description: Transaction log space utilization in the tempdb database. Even though temporary objects created in one database are not visible in other databases in the same elastic pool, tempdb is a shared resource for all databases in the same pool. A long-running or orphaned transaction in tempdb started from one database in the pool can consume a large portion of transaction log, and cause failures for queries in other databases in the same pool. Derived from the sys.dm_db_log_space_usage and sys.database_files views. This metric is also emitted to Azure Monitor, and can be viewed in Azure portal. See Examples for a sample query to return the current value of this metric.
Recommended average value: Below 50%. Occasional spikes up to 80% are acceptable.

In addition to these metrics, Azure SQL Database provides a view that returns actual resource governance limits,
as well as additional views that return resource utilization statistics at the resource pool level, and at the
workload group level.

sys.dm_user_db_resource_governance: Returns actual configuration and capacity settings used by resource governance mechanisms in the current database or elastic pool.

sys.dm_resource_governor_resource_pools: Returns information about the current resource pool state, the current configuration of resource pools, and cumulative resource pool statistics.

sys.dm_resource_governor_workload_groups: Returns cumulative workload group statistics and the current configuration of the workload group. This view can be joined with sys.dm_resource_governor_resource_pools on the pool_id column to get resource pool information.

sys.dm_resource_governor_resource_pools_history_ex: Returns resource pool utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the duration_ms column. The delta_ columns return the change in each statistic during the interval.

sys.dm_resource_governor_workload_groups_history_ex: Returns workload group utilization statistics for recent history, based on the number of snapshots available. Each row represents a time interval. The duration of the interval is provided in the duration_ms column. The delta_ columns return the change in each statistic during the interval.

TIP
To query these and other dynamic management views using a principal other than server administrator, add this principal
to the ##MS_ServerStateReader## server role.

These views can be used to monitor resource utilization and troubleshoot resource contention in near real-time.
User workload on the primary and readable secondary replicas, including geo-replicas, is classified into the
SloSharedPool1 resource pool and UserPrimaryGroup.DBId[N] workload group, where N stands for the
database ID value.
In addition to monitoring current resource utilization, customers using dense pools can maintain historical
resource utilization data in a separate data store. This data can be used in predictive analysis to proactively
manage resource utilization based on historical and seasonal trends.

Operational recommendations
Leave sufficient resource headroom . If resource contention and performance degradation occur, mitigation
may involve moving some databases out of the affected elastic pool, or scaling up the pool, as noted earlier.
However, these actions require additional compute resources to complete. In particular, for Premium and
Business Critical pools, these actions require transferring all data for the databases being moved, or for all
databases in the elastic pool if the pool is scaled up. Data transfer is a long running and resource-intensive
operation. If the pool is already under high resource pressure, the mitigating operation itself will degrade
performance even further. In extreme cases, it may not be possible to solve resource contention via database
move or pool scale-up because the required resources are not available. In this case, temporarily reducing query
workload on the affected elastic pool may be the only solution.
Customers using dense pools should closely monitor resource utilization trends as described earlier, and take
mitigating action while metrics remain within the recommended ranges and there are still sufficient resources in
the elastic pool.
Resource utilization depends on multiple factors that change over time for each database and each elastic pool.
Achieving an optimal price/performance ratio in dense pools requires continuous monitoring and rebalancing, that
is, moving databases from more utilized pools to less utilized pools, and creating new pools as necessary to
accommodate increased workload.
NOTE
For DTU elastic pools, the eDTU metric at the pool level is not a MAX or a SUM of individual database utilization. It is
derived from the utilization of various pool level metrics. Pool level resource limits may be higher than individual database
level limits, so it is possible that an individual database can reach a specific resource limit (CPU, data IO, log IO, etc.), even
when the eDTU reporting for the pool indicates that no limit has been reached.

Do not move "hot" databases . If resource contention at the pool level is primarily caused by a small number
of highly utilized databases, it may be tempting to move these databases to a less utilized pool, or make them
standalone databases. However, doing this while a database remains highly utilized is not recommended,
because the move operation will further degrade performance, both for the database being moved, and for the
entire pool. Instead, either wait until high utilization subsides, or move less utilized databases instead to relieve
resource pressure at the pool level. But moving databases with very low utilization does not provide any benefit
in this case, because it does not materially reduce resource utilization at the pool level.
Create new databases in a "quarantine" pool . In scenarios where new databases are created frequently,
such as applications using the tenant-per-database model, there is risk that a new database placed into an
existing elastic pool will unexpectedly consume significant resources and affect other databases and internal
processes in the pool. To mitigate this risk, create a separate "quarantine" pool with ample allocation of
resources. Use this pool for new databases with yet unknown resource consumption patterns. Once a database
has stayed in this pool for a business cycle, such as a week or a month, and its resource consumption is known,
it can be moved to a pool with sufficient capacity to accommodate this additional resource usage.
Monitor both used and allocated space . When allocated pool space (total size of all database files in
storage for all databases in a pool) reaches maximum pool size, out-of-space errors may occur. If allocated space
trends high and is on track to reach maximum pool size, mitigation options include:
Move some databases out of the pool to reduce total allocated space
Shrink database files to reduce empty allocated space in files
Scale up the pool to a service objective with a larger maximum pool size
If used pool space (total size of data in all databases in a pool, not including empty space in files) trends high
and is on track to reach maximum pool size, mitigation options include:
Move some databases out of the pool to reduce total used space
Move (archive) data outside of the database, or delete no longer needed data
Implement data compression
Scale up the pool to a service objective with a larger maximum pool size
Avoid overly dense servers . Azure SQL Database supports up to 5000 databases per server. Customers
using elastic pools with thousands of databases may consider placing multiple elastic pools on a single server,
with the total number of databases up to the supported limit. However, servers with many thousands of
databases create operational challenges. Operations that require enumerating all databases on a server, for
example viewing databases in the portal, will be slower. Operational errors, such as incorrect modification of
server level logins or firewall rules, will affect a larger number of databases. Accidental deletion of the server
will require assistance from Microsoft Support to recover databases on the deleted server, and will cause a
prolonged outage for all affected databases.
Limit the number of databases per server to a lower number than the maximum supported. In many scenarios,
using up to 1000-2000 databases per server is optimal. To reduce the likelihood of accidental server deletion,
place a delete lock on the server or its resource group.
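A delete lock can be created in the Azure portal or, as a minimal PowerShell sketch with placeholder names, as follows.

# Sketch: place a CanNotDelete lock on the logical server to reduce the risk of accidental deletion.
New-AzResourceLock -LockName "sql-server-delete-lock" -LockLevel CanNotDelete `
    -ResourceGroupName "myResourceGroup" -ResourceName "myserver" `
    -ResourceType "Microsoft.Sql/servers"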

Examples
View individual database capacity settings
Use the sys.dm_user_db_resource_governance dynamic management view to view the actual configuration and
capacity settings used by resource governance in the current database or elastic pool. For more information, see
sys.dm_user_db_resource_governance.
Run this query in any database in an elastic pool. All databases in the pool have the same resource governance
settings.

SELECT * FROM sys.dm_user_db_resource_governance AS rg
WHERE database_id = DB_ID();

Monitoring overall elastic pool resource consumption


Use the sys.elastic_pool_resource_stats system catalog view to monitor the resource consumption of the
entire pool. For more information, see sys.elastic_pool_resource_stats.
This sample query to view the last 10 minutes should be run in the master database of the logical Azure SQL
server containing the desired elastic pool.

SELECT * FROM sys.elastic_pool_resource_stats AS rs
WHERE rs.start_time > DATEADD(mi, -10, SYSUTCDATETIME())
AND rs.elastic_pool_name = '<elastic pool name>';

Monitoring individual database resource consumption


Use the sys.dm_db_resource_stats dynamic management view to monitor the resource consumption of
individual databases. For more information, see sys.dm_db_resource_stats. One row exists for every 15 seconds,
even if there is no activity. Historical data is maintained for approximately one hour.
This sample query to view the last 10 minutes of data should be run in the desired database.

SELECT * FROM sys.dm_db_resource_stats AS rs
WHERE rs.end_time > DATEADD(mi, -10, SYSUTCDATETIME());

For longer retention time with less frequency, consider the following query on sys.resource_stats , run in the
master database of the Azure SQL logical server. For more information, see sys.resource_stats (Azure SQL
Database). One row exists every five minutes, and historical data is maintained for two weeks.

SELECT * FROM sys.resource_stats
WHERE [database_name] = 'sample'
ORDER BY [start_time] DESC;

Monitoring memory utilization


This query calculates the oom_per_second metric for each resource pool for recent history, based on the number
of snapshots available. This sample query helps identify the recent average number of failed memory
allocations in the pool. This query can be run in any database in an elastic pool.

SELECT pool_id,
name AS resource_pool_name,
IIF(name LIKE 'SloSharedPool%' OR name LIKE 'UserPool%', 'user', 'system') AS resource_pool_type,
SUM(CAST(delta_out_of_memory_count AS decimal))/(SUM(duration_ms)/1000.) AS oom_per_second
FROM sys.dm_resource_governor_resource_pools_history_ex
GROUP BY pool_id, name
ORDER BY pool_id;
Monitoring tempdb log space utilization
This query returns the current value of the tempdb_log_used_percent metric, showing the relative utilization of
the tempdb transaction log relative to its maximum allowed size. This query can be run in any database in an
elastic pool.

SELECT (lsu.used_log_space_in_bytes / df.log_max_size_bytes) * 100 AS tempdb_log_space_used_percent
FROM tempdb.sys.dm_db_log_space_usage AS lsu
CROSS JOIN (
    SELECT SUM(CAST(max_size AS bigint)) * 8 * 1024. AS log_max_size_bytes
    FROM tempdb.sys.database_files
    WHERE type_desc = N'LOG'
) AS df;

Next steps
For an introduction to elastic pools, see Elastic pools help you manage and scale multiple databases in Azure
SQL Database.
For information on tuning query workloads to reduce resource utilization, see Monitoring and tuning, and
Monitoring and performance tuning.
How to manage a Hyperscale database
7/12/2022 • 14 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The Hyperscale service tier provides a highly scalable storage and compute performance tier that leverages the
Azure architecture to scale out storage and compute resources for an Azure SQL Database substantially beyond
the limits available for the General Purpose and Business Critical service tiers. This article describes how to carry
out essential administration tasks for Hyperscale databases, including migrating an existing database to
Hyperscale, restoring a Hyperscale database to a different region, reverse migrating from Hyperscale to another
service tier, and monitoring the status of ongoing and recent operations against a Hyperscale database.
Learn how to create a new Hyperscale database in Quickstart: Create a Hyperscale database in Azure SQL
Database.

Migrate an existing database to Hyperscale


You can migrate existing databases in Azure SQL Database to Hyperscale using the Azure portal, the Azure CLI,
PowerShell, or Transact-SQL.
The time required to move an existing database to Hyperscale consists of the time to copy data and the time to
replay the changes made in the source database while copying data. The data copy time is proportional to data
size. We recommend migrating to Hyperscale during a period of lower write activity so that the time to replay
accumulated changes will be shorter.
You will only experience a short period of downtime, generally a few minutes, during the final cutover to the
Hyperscale service tier.
Prerequisites
To move a database that is a part of a geo-replication relationship, either as the primary or as a secondary, to
Hyperscale, you need to first terminate data replication between the primary and secondary replica. Databases
in a failover group must be removed from the group first.
Once a database has been moved to Hyperscale, you can create a new Hyperscale geo-replica for that database.
Geo-replication for Hyperscale is in preview with certain limitations.
How to migrate a database to the Hyperscale service tier
To migrate an existing database in Azure SQL Database to the Hyperscale service tier, first identify your target
service objective. Review resource limits for single databases if you aren't sure which service objective is right
for your database. In many cases, you can choose a service objective with the same number of vCores and the
same hardware generation as the original database. If needed, you will be able to adjust this later with minimal
downtime.
Select the tab for your preferred tool to migrate your database:
Portal
Azure CLI
PowerShell
Transact-SQL

The Azure portal enables you to migrate to the Hyperscale service tier by modifying the pricing tier for your
database.
1. Navigate to the database you wish to migrate in the Azure portal.
2. In the left navigation bar, select Compute + storage .
3. Select the Service tier drop-down to expand the options for service tiers.
4. Select Hyperscale (On-demand scalable storage) from the dropdown menu.
5. Review the Hardware Configuration listed. If desired, select Change configuration to select the
appropriate hardware configuration for your workload.
6. Review the option to Save money . Select it if you qualify for Azure Hybrid Benefit and wish to use it for this
database.
7. Select the vCores slider if you wish to change the number of vCores available for your database under the
Hyperscale service tier.
8. Select the High-Availability Secondary Replicas slider if you wish to change the number of replicas under
the Hyperscale service tier.
9. Select Apply .
You can monitor operations for a Hyperscale database while the operation is ongoing.
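If you prefer scripting over the portal steps above, a minimal PowerShell sketch of the same service tier change follows. The resource names and the Hyperscale service objective name (HS_Gen5_2) are placeholders; check the current Hyperscale resource limits for valid objective names.

# Sketch: migrate an existing database to the Hyperscale service tier (names and objective are placeholders).
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -DatabaseName "mydatabase" -Edition "Hyperscale" -RequestedServiceObjectiveName "HS_Gen5_2"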

Reverse migrate from Hyperscale (preview)


Reverse migration to the General Purpose service tier allows customers who have recently migrated an existing
database in Azure SQL Database to the Hyperscale service tier to move back in an emergency, should
Hyperscale not meet their needs. While reverse migration is initiated by a service tier change, it's essentially a
size-of-data move between different architectures.
Limitations for reverse migration
Reverse migration is available under the following conditions:
Reverse migration is only available within 45 days of the original migration to Hyperscale.
Databases originally created in the Hyperscale service tier are not eligible for reverse migration.
You may reverse migrate to the General Purpose service tier only. Your migration from Hyperscale to General
Purpose can target either the serverless or provisioned compute tiers. If you wish to migrate the database to
another service tier, such as Business Critical or a DTU based service tier, first reverse migrate to the General
Purpose service tier, then change the service tier.
Duration and downtime
Unlike regular service level objective change operations in Hyperscale, migrating to Hyperscale and reverse
migration to General Purpose are size-of-data operations.
The duration of a reverse migration depends mainly on the size of the database and concurrent write activities
happening during the migration. The number of vCores you assign to the target General Purpose database will
also impact the duration of the reverse migration. We recommend that the target General Purpose database be
provisioned with a number of vCores greater than or equal to the number of vCores assigned to the source
Hyperscale database to sustain similar workloads.
During reverse migration, the source Hyperscale database may experience performance degradation if under
substantial load. Specifically, transaction log rate may be reduced (throttled) to ensure that reverse migration is
making progress.
You will only experience a short period of downtime, generally a few minutes, during the final cutover to the
new target General Purpose database.
Prerequisites
Before you initiate a reverse migration from Hyperscale to the General Purpose service tier, you must ensure
that your database meets the limitations for reverse migration and:
Your database does not have Geo Replication enabled.
Your database does not have named replicas.
Your database (allocated size) is small enough to fit into the target service tier.
If you specify max database size for the target General Purpose database, ensure the allocated size of the
database is small enough to fit into that maximum size.
Prerequisite checks will occur before a reverse migration starts. If prerequisites are not met, the reverse
migration will fail immediately.
Backup policies
You will be billed using the regular pricing for all existing database backups within the configured retention
period. You will be billed for the Hyperscale backup storage snapshots and for size-of-data storage blobs that
must be retained to be able to restore the backup.
You can migrate a database to Hyperscale and reverse migrate back to General Purpose multiple times. Only
backups from the current and once-previous tier of your database will be available for restore. If you have
moved from the General Purpose service tier to Hyperscale and back to General Purpose, the only backups
available are the ones from the current General Purpose database and the immediately previous Hyperscale
database. These retained backups are billed as per Azure SQL Database billing. Any previous tiers tried won't
have backups available and will not be billed.
For example, you could migrate between Hyperscale and non-Hyperscale service tiers:
1. General Purpose
2. Migrate to Hyperscale
3. Reverse migrate to General Purpose
4. Service tier change to Business Critical
5. Migrate to Hyperscale
6. Reverse migrate to General Purpose
In this case, the only backups available would be from steps 5 and 6 of the timeline, if they are still within the
configured retention period. Any backups from previous steps would be unavailable. This should be a careful
consideration when attempting multiple reverse migrations from Hyperscale to the General Purpose tier.
How to reverse migrate a Hyperscale database to the General Purpose service tier
To reverse migrate an existing Hyperscale database in Azure SQL Database to the General Purpose service tier,
first identify your target service objective in the General Purpose service tier and whether you wish to migrate
to the provisioned or serverless compute tiers. Review resource limits for single databases if you aren't sure
which service objective is right for your database.
If you wish to perform an additional service tier change after reverse migrating to General Purpose, identify
your eventual target service objective as well and ensure that your database's allocated size is small enough to
fit in that service objective.
Select the tab for your preferred method to reverse migrate your database:
Portal
Azure CLI
PowerShell
Transact-SQL

The Azure portal enables you to reverse migrate to the General Purpose service tier by modifying the pricing
tier for your database.

1. Navigate to the database you wish to migrate in the Azure portal.


2. In the left navigation bar, select Compute + storage .
3. Select the Service tier drop-down to expand the options for service tiers.
4. Select General Purpose (Scalable compute and storage options) from the dropdown menu.
5. Review the Hardware Configuration listed. If desired, select Change configuration to select the
appropriate hardware configuration for your workload.
6. Review the option to Save money . Select it if you qualify for Azure Hybrid Benefit and wish to use it for this
database.
7. Select the vCores slider if you wish to change the number of vCores available for your database under the
General Purpose service tier.
8. Select Apply .
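As a scripted alternative to the portal steps above, a hedged PowerShell sketch of the reverse migration follows; the General Purpose service objective name (GP_Gen5_2) and resource names are placeholders.

# Sketch: reverse migrate a Hyperscale database to the General Purpose service tier.
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -DatabaseName "mydatabase" -Edition "GeneralPurpose" -RequestedServiceObjectiveName "GP_Gen5_2"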

Monitor operations for a Hyperscale database


You can monitor the status of ongoing or recently completed operations for an Azure SQL Database using the
Azure portal, the Azure CLI, PowerShell, or Transact-SQL.
Select the tab for your preferred method to monitor operations.

Portal
Azure CLI
PowerShell
Transact-SQL

The Azure portal shows a notification for a database in Azure SQL Database when an operation such as a
migration, reverse migration, or restore is in progress.

1. Navigate to the database in the Azure portal.


2. In the left navigation bar, select Overview .
3. Review the Notifications section at the bottom of the right pane. If operations are ongoing, a notification
box will appear.
4. Select the notification box to view details.
5. The Ongoing operations pane will open. Review the details of the ongoing operations.
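Outside the portal, one way to check operation status from a script is the Get-AzSqlDatabaseActivity cmdlet; this minimal sketch uses placeholder names, and the selected property names should be verified against your Az.Sql version.

# Sketch: list recent and ongoing operations for a database (names are placeholders).
Get-AzSqlDatabaseActivity -ResourceGroupName "myResourceGroup" -ServerName "myserver" `
    -DatabaseName "mydatabase" |
    Select-Object OperationId, Operation, State, PercentComplete, StartTime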

View databases in the Hyperscale service tier


After migrating a database to Hyperscale or reconfiguring a database within the Hyperscale service tier, you may
wish to view and/or document the configuration of your Hyperscale database.
Portal
Azure CLI
PowerShell
Transact-SQL

The Azure portal shows a list of all databases on a logical server. The Pricing tier column includes the service
tier for each database.
1. Navigate to your logical server in the Azure portal.
2. In the left navigation bar, select Overview .
3. Scroll to the list of resources at the bottom of the pane. The window will display SQL elastic pools and
databases on the logical server.
4. Review the Pricing tier column to identify databases in the Hyperscale service tier.
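As a scripted alternative, the following PowerShell sketch lists the databases on a logical server and filters to the Hyperscale edition (resource names are placeholders).

# Sketch: list all Hyperscale databases on a logical server.
Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "myserver" |
    Where-Object { $_.Edition -eq "Hyperscale" } |
    Select-Object DatabaseName, Edition, CurrentServiceObjectiveName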

Next steps
Learn more about Hyperscale databases in the following articles:
Quickstart: Create a Hyperscale database in Azure SQL Database
Hyperscale service tier
Azure SQL Database Hyperscale FAQ
Hyperscale secondary replicas
Azure SQL Database Hyperscale named replicas FAQ
SQL Hyperscale performance troubleshooting
diagnostics
7/12/2022 • 8 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


To troubleshoot performance problems in a Hyperscale database, general performance tuning methodologies
on the Azure SQL Database compute node are the starting point of a performance investigation. However, given
the distributed architecture of Hyperscale, additional diagnostics have been added to assist. This article
describes Hyperscale-specific diagnostic data.

Log rate throttling waits


Every Azure SQL Database service level has log generation rate limits enforced via log rate governance. In
Hyperscale, the log generation limit is currently set to 100 MB/sec, regardless of the service level. However,
there are times when the log generation rate on the primary compute replica has to be throttled to maintain
recoverability SLAs. This throttling happens when a page server or another compute replica is significantly
behind applying new log records from the log service.
The following wait types (in sys.dm_os_wait_stats) describe the reasons why log rate can be throttled on the
primary compute replica:

RBIO_RG_STORAGE: Occurs when a Hyperscale database primary compute node log generation rate is being throttled due to delayed log consumption at the page server(s).

RBIO_RG_DESTAGE: Occurs when a Hyperscale database compute node log generation rate is being throttled due to delayed log consumption by the long-term log storage.

RBIO_RG_REPLICA: Occurs when a Hyperscale database compute node log generation rate is being throttled due to delayed log consumption by the readable secondary replica(s).

RBIO_RG_GEOREPLICA: Occurs when a Hyperscale database compute node log generation rate is being throttled due to delayed log consumption by the geo-secondary replica.

RBIO_RG_LOCALDESTAGE: Occurs when a Hyperscale database compute node log generation rate is being throttled due to delayed log consumption by the log service.

Page server reads


The compute replicas do not cache a full copy of the database locally. The data local to the compute replica is
stored in the buffer pool (in memory) and in the local resilient buffer pool extension (RBPEX) cache that is a
partial (non-covering) cache of data pages. This local RBPEX cache is sized proportionally to the compute size
and is three times the memory of the compute tier. RBPEX is similar to the buffer pool in that it has the most
frequently accessed data. Each page server, on the other hand, has a covering RBPEX cache for the portion of the
database it maintains.
When a read is issued on a compute replica, if the data doesn't exist in the buffer pool or local RBPEX cache, a
getPage(pageId, LSN) function call is issued, and the page is fetched from the corresponding page server. Reads
from page servers are remote reads and are thus slower than reads from the local RBPEX. When
troubleshooting IO-related performance problems, we need to be able to tell how many IOs were done via
relatively slower remote page server reads.
Several dynamic management views (DMVs) and extended events have columns and fields that specify the number
of remote reads from a page server, which can be compared against the total reads. Query store also captures
remote reads as part of the query run time stats.
Columns to report page server reads are available in execution DMVs and catalog views, such as:
sys.dm_exec_requests
sys.dm_exec_query_stats
sys.dm_exec_procedure_stats
sys.dm_exec_trigger_stats
sys.query_store_runtime_stats
Page server reads are added to the following extended events:
sql_statement_completed
sp_statement_completed
sql_batch_completed
rpc_completed
scan_stopped
query_store_begin_persist_runtime_stat
query_store_execution_runtime_info
ActualPageServerReads/ActualPageServerReadAheads are added to query plan XML for actual plans. For
example:
<RunTimeCountersPerThread Thread="8" ActualRows="90466461" ActualRowsRead="90466461" Batches="0"
ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="133645"
ActualCPUms="85105" ActualScans="1" ActualLogicalReads="6032256" ActualPhysicalReads="0"
ActualPageServerReads="0" ActualReadAheads="6027814" ActualPageServerReadAheads="5687297"
ActualLobLogicalReads="0" ActualLobPhysicalReads="0" ActualLobPageServerReads="0" ActualLobReadAheads="0"
ActualLobPageServerReadAheads="0" />

NOTE
To view these attributes in the query plan properties window, SSMS 18.3 or later is required.

Virtual file stats and IO accounting


In Azure SQL Database, the sys.dm_io_virtual_file_stats() DMF is the primary way to monitor SQL Database IO.
IO characteristics in Hyperscale are different due to its distributed architecture. In this section, we focus on IO
(reads and writes) to data files as seen in this DMF. In Hyperscale, each data file visible in this DMF corresponds
to a remote page server. The RBPEX cache mentioned here is a local SSD-based cache, that is a non-covering
cache on the compute replica.
Local RBPEX cache usage
Local RBPEX cache exists on the compute replica, on local SSD storage. Thus, IO against this cache is faster than
IO against remote page servers. Currently, sys.dm_io_virtual_file_stats() in a Hyperscale database has a special
row reporting the IO against the local RBPEX cache on the compute replica. This row has the value of 0 for both
database_id and file_id columns. For example, the query below returns RBPEX usage statistics since database
startup.
select * from sys.dm_io_virtual_file_stats(0,NULL);

A ratio of reads done on RBPEX to aggregated reads done on all other data files provides RBPEX cache hit ratio.
The counter RBPEX cache hit ratio is also exposed in the performance counters DMV
sys.dm_os_performance_counters .

Data reads
When reads are issued by the SQL Server database engine on a compute replica, they may be served either
by the local RBPEX cache, or by remote page servers, or by a combination of the two if reading multiple
pages.
When the compute replica reads some pages from a specific file, for example file_id 1, if this data resides
solely on the local RBPEX cache, all IO for this read is accounted against file_id 0 (RBPEX). If some part of that
data is in the local RBPEX cache, and some part is on a remote page server, then IO is accounted towards
file_id 0 for the part served from RBPEX, and the part served from the remote page server is accounted
towards file_id 1.
When a compute replica requests a page at a particular LSN from a page server, if the page server has not
caught up to the LSN requested, the read on the compute replica will wait until the page server catches up
before the page is returned to the compute replica. For any read from a page server on the compute replica,
you will see the PAGEIOLATCH_* wait type if it is waiting on that IO. In Hyperscale, this wait time includes
both the time to catch up the requested page on the page server to the LSN required, and the time needed to
transfer the page from the page server to the compute replica.
Large reads such as read-ahead are often done using "Scatter-Gather" Reads. This allows reads of up to 4 MB
of pages at a time, considered a single read in the SQL Server database engine. However, when data being
read is in RBPEX, these reads are accounted as multiple individual 8-KB reads, since the buffer pool and
RBPEX always use 8-KB pages. As a result, the number of read IOs seen against RBPEX may be larger than
the actual number of IOs performed by the engine.
Data writes
The primary compute replica does not write directly to page servers. Instead, log records from the log
service are replayed on corresponding page servers.
Writes that happen on the compute replica are predominantly writes to the local RBPEX (file_id 0). For writes
on logical files that are larger than 8 KB, in other words those done using Gather-write, each write operation
is translated into multiple 8-KB individual writes to RBPEX since the buffer pool and RBPEX always use 8-KB
pages. As a result, the number of write IOs seen against RBPEX may be larger than the actual number of
IOs performed by the engine.
Non-RBPEX files, or data files other than file_id 0 that correspond to page servers, also show writes. In the
Hyperscale service tier, these writes are simulated, because the compute replicas never write directly to page
servers. Write IOPS and throughput are accounted as they occur on the compute replica, but latency for data
files other than file_id 0 does not reflect the actual latency of page server writes.
Log writes
On the primary compute, a log write is accounted for in file_id 2 of sys.dm_io_virtual_file_stats. A log write
on primary compute is a write to the log Landing Zone.
Log records are not hardened on the secondary replica on a commit. In Hyperscale, log is applied by the log
service to the secondary replicas asynchronously. Because log writes don't actually occur on secondary
replicas, any accounting of log IOs on the secondary replicas is for tracking purposes only.

Data IO in resource utilization statistics


In a non-Hyperscale database, combined read and write IOPS against data files, relative to the resource
governance data IOPS limit, are reported in sys.dm_db_resource_stats and sys.resource_stats views, in the
avg_data_io_percent column. The same value is reported in the Azure portal as Data IO Percentage.
In a Hyperscale database, this column reports on data IOPS utilization relative to the limit for local storage on
compute replica only, specifically IO against RBPEX and tempdb . A 100% value in this column indicates that
resource governance is limiting local storage IOPS. If this is correlated with a performance problem, tune the
workload to generate less IO, or increase database service objective to increase the resource governance Max
Data IOPS limit. For resource governance of RBPEX reads and writes, the system counts individual 8-KB IOs,
rather than larger IOs that may be issued by the SQL Server database engine.
Data IO against remote page servers is not reported in resource utilization views or in the portal, but is reported
in the sys.dm_io_virtual_file_stats() DMF, as noted earlier.

Additional resources
For vCore resource limits for a Hyperscale single database see Hyperscale service tier vCore Limits
For monitoring Azure SQL Databases, enable Azure Monitor SQL Insights (preview)
For Azure SQL Database performance tuning, see Query performance in Azure SQL Database
For performance tuning using Query Store, see Performance monitoring using Query store
For DMV monitoring scripts, see Monitoring performance Azure SQL Database using dynamic management
views
What is Block T-SQL CRUD feature?
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This feature allows Azure administrators to block the creation or modification of Azure SQL resources through
T-SQL. This is enforced at the subscription level to block T-SQL commands from affecting SQL resources in any
Azure SQL database or managed instance.

Overview
To block creation or modification of resources through T-SQL and enforce resource management through an
Azure Resource Manager template (ARM template) for a given subscription, the subscription level preview
features in Azure portal can be used. This is particularly useful when you are using Azure Policies to enforce
organizational standards through ARM templates. Since T-SQL does not adhere to the Azure Policies, a block on
T-SQL create or modify operations can be applied. The syntax blocked includes CRUD (create, update, delete)
statements for databases in Azure SQL, specifically CREATE DATABASE , ALTER DATABASE , and DROP DATABASE
statements.
T-SQL CRUD operations can be blocked via Azure portal, PowerShell, or Azure CLI.

Permissions
In order to register or remove this feature, the Azure user must be a member of the Owner or Contributor role
of the subscription.

Examples
The following section describes how you can register or unregister a preview feature with Microsoft.Sql
resource provider in Azure portal:
Register Block T-SQL CRUD
1. Go to your subscription in the Azure portal.
2. Select the Preview features tab.
3. Select Block T-SQL CRUD .
4. After you select Block T-SQL CRUD , a new window opens. Select Register to register this block with the
Microsoft.Sql resource provider.
Re-register the Microsoft.Sql resource provider
After you register the block of T-SQL CRUD with the Microsoft.Sql resource provider, you must re-register the
Microsoft.Sql resource provider for the changes to take effect. To re-register the Microsoft.Sql resource provider:
1. Go to your subscription in the Azure portal.
2. Select the Resource providers tab.
3. Search for and select the Microsoft.Sql resource provider.
4. Select Re-register .
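The re-registration step can also be scripted. This is a minimal PowerShell sketch; the preview feature itself must still be registered first, as described above.

# Sketch: re-register the Microsoft.Sql resource provider so the T-SQL block takes effect.
Register-AzResourceProvider -ProviderNamespace "Microsoft.Sql"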

NOTE
The re-registration step is mandatory for the T-SQL block to be applied to your subscription.
Removing Block T-SQL CRUD
To remove the block on T-SQL create or modify operations from your subscription, first unregister the
previously registered T-SQL block. Then, re-register the Microsoft.Sql resource provider as shown above for the
removal of T-SQL block to take effect.

Next steps
An overview of Azure SQL Database security capabilities
Azure SQL Database security best practices
Manage databases in Azure SQL Database by using
Azure Automation
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


This guide will introduce you to the Azure Automation service, and how it can be used to simplify the
management of databases in Azure SQL Database.

About Azure Automation


Azure Automation is an Azure service for simplifying cloud management through process automation. Using
Azure Automation, long-running, manual, error-prone, and frequently repeated tasks can be automated to
increase reliability, efficiency, and time to value for your organization. For information on getting started, see
Azure Automation intro.
Azure Automation provides a workflow execution engine with high reliability and high availability, and that
scales to meet your needs as your organization grows. In Azure Automation, processes can be kicked off
manually, by third-party systems, or at scheduled intervals so that tasks happen exactly when needed.
Move your cloud management tasks to run automatically in Azure Automation to lower operational overhead and
free up IT/DevOps staff to focus on work that adds business value.

How Azure Automation can help manage your databases


With Azure Automation, you can manage databases in Azure SQL Database by using PowerShell cmdlets that
are available in the Azure PowerShell tools. Azure Automation has these Azure SQL Database PowerShell
cmdlets available out of the box, so that you can perform all of your SQL Database management tasks within the
service. You can also pair these cmdlets in Azure Automation with the cmdlets for other Azure services, to
automate complex tasks across Azure services and across third-party systems.
Azure Automation also has the ability to communicate with SQL servers directly, by issuing SQL commands
using PowerShell.
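For example, a runbook step might use Invoke-Sqlcmd from the SqlServer module to run a maintenance statement against a database. This is a hedged sketch: the credential asset name, server, database, and table are placeholders, and it assumes the SqlServer module is imported into your Automation account.

# Sketch of a runbook step that issues a T-SQL command against a database in Azure SQL Database.
$credential = Get-AutomationPSCredential -Name "SqlCredential"
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydatabase" `
    -Credential $credential `
    -Query "ALTER INDEX ALL ON dbo.MyTable REBUILD;"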
The runbook and module galleries for Azure Automation offer a variety of runbooks from Microsoft and the
community that you can import into Azure Automation. To use one, download a runbook from the gallery, or
import it directly from the gallery or from your Automation account in the Azure portal.

NOTE
The Automation runbook may run from a range of IP addresses at any datacenter in an Azure region. To learn more, see
Automation region DNS records.

Next steps
Now that you've learned the basics of Azure Automation and how it can be used to manage Azure SQL
Database, follow these links to learn more about Azure Automation.
Azure Automation Overview
My first runbook
Automate management tasks using elastic jobs
(preview)
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


You can create and schedule elastic jobs that can be executed periodically against one or many databases in
Azure SQL Database to run Transact-SQL (T-SQL) queries and perform maintenance tasks.
You can define a target database or groups of databases where the job will be executed, and also define schedules
for running a job. A job handles the task of logging in to the target database. You also define, maintain, and
persist Transact-SQL scripts to be executed across a group of databases.
Every job logs the status of execution and also automatically retries the operations if any failure occurs.

When to use elastic jobs


There are several scenarios when you could use elastic job automation:
Automate management tasks and schedule them to run every weekday, after hours, etc.
Deploy schema changes, credentials management, performance data collection or tenant (customer)
telemetry collection.
Update reference data (information common across all databases), load data from Azure Blob storage.
Configure jobs to execute across a collection of databases on a recurring basis, such as during off-peak
hours.
Collect query results from a set of databases into a central table on an on-going basis. Performance
queries can be continually executed and configured to trigger additional tasks to be executed.
Collect data for reporting
Aggregate data from a collection of databases into a single destination table.
Execute longer running data processing queries across a large set of databases, for example the
collection of customer telemetry. Results are collected into a single destination table for further
analysis.
Data movements
Automation on other platforms
Consider the following job scheduling technologies on different platforms:
Elastic Jobs are Job Scheduling services that execute custom jobs on one or many databases in Azure SQL
Database.
SQL Agent Jobs are executed by the SQL Agent service that continues to be used for task automation in
SQL Server and is also included with Azure SQL Managed Instances. SQL Agent Jobs are not available in
Azure SQL Database.
Elastic Jobs can target Azure SQL Databases, Azure SQL Database elastic pools, and Azure SQL Databases in
shard maps.
For T-SQL script job automation in SQL Server and Azure SQL Managed Instance, consider SQL Agent.
For T-SQL script job automation in Azure Synapse Analytics, consider pipelines with recurring triggers,
which are based on Azure Data Factory.
It is worth noting the differences between SQL Agent (available in SQL Server and as part of SQL Managed
Instance) and the Database Elastic Job agent (which can execute T-SQL on databases in Azure SQL Database
and data warehouses in Azure Synapse Analytics).

Scope
Elastic Jobs: Any number of databases in Azure SQL Database and/or data warehouses in the same Azure cloud as the job agent. Targets can be in different servers, subscriptions, and/or regions. Target groups can be composed of individual databases or data warehouses, or all databases in a server, pool, or shard map (dynamically enumerated at job runtime).
SQL Agent: Any individual database in the same instance as the SQL Agent. The Multi Server Administration feature of SQL Server Agent allows for master/target instances to coordinate job execution, though this feature is not available in SQL Managed Instance.

Supported APIs and Tools
Elastic Jobs: Portal, PowerShell, T-SQL, Azure Resource Manager.
SQL Agent: T-SQL, SQL Server Management Studio (SSMS).

Elastic job targets


Elastic Jobs provide the ability to run one or more T-SQL scripts in parallel, across a large number of
databases, on a schedule or on-demand.
You can run scheduled jobs against any combination of databases: one or more individual databases, all
databases on a server, all databases in an elastic pool, or shard map, with the added flexibility to include or
exclude any specific database. Jobs can run across multiple servers, multiple pools, and can even run against
databases in different subscriptions. Servers and pools are dynamically enumerated at runtime, so jobs run
against all databases that exist in the target group at the time of execution.
The following image shows a job agent executing jobs across the different types of target groups:

Elastic job components


The following components make up Elastic Jobs (additional details for each are below):

Elastic Job agent: The Azure resource you create to run and manage jobs.

Job database: A database in Azure SQL Database that the job agent uses to store job related data, job definitions, etc.

Target group: The set of servers, pools, databases, and shard maps to run a job against.

Job: A job is a unit of work that is composed of one or more job steps. Job steps specify the T-SQL script to run, as well as other details required to execute the script.

Elastic job agent


An Elastic Job agent is the Azure resource for creating, running, and managing jobs. The Elastic Job agent is an
Azure resource you create in the portal (PowerShell and REST are also supported).
Creating an Elastic Job agent requires an existing database in Azure SQL Database. The agent configures this
existing Azure SQL Database as the Job database.
The Elastic Job agent is free. The job database is billed at the same rate as any database in Azure SQL Database.
Elastic job database
The Job database is used for defining jobs and tracking the status and history of job executions. The Job
database is also used to store agent metadata, logs, results, job definitions, and also contains many useful stored
procedures and other database objects for creating, running, and managing jobs using T-SQL.
For the current preview, an existing database in Azure SQL Database (S0 or higher) is required to create an
Elastic Job agent.
The Job database should be a clean, empty, S0 or higher service objective Azure SQL Database. The
recommended service objective of the Job database is S1 or higher, but the optimal choice depends on the
performance needs of your job(s): the number of job steps, the number of job targets, and how frequently jobs
are run.
If operations against the job database are slower than expected, monitor database performance and the
resource utilization in the job database during periods of slowness using Azure portal or the
sys.dm_db_resource_stats DMV. If utilization of a resource, such as CPU, Data IO, or Log Write approaches 100%
and correlates with periods of slowness, consider incrementally scaling the database to higher service objectives
(either in the DTU model or in the vCore model) until job database performance is sufficiently improved.
Elastic job database permissions

During job agent creation, a schema, tables, and a role called jobs_reader are created in the Job database. The
role is created with the following permission and is designed to give administrators finer access control for job
monitoring:

Role name: jobs_reader
'jobs' schema permissions: SELECT
'jobs_internal' schema permissions: None
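
For example, to give a user monitoring-only access, you could add an existing user in the Job database to the jobs_reader role. The user name below is hypothetical; substitute an existing database user.

-- Run in the Job database; 'job_monitor_user' is a hypothetical, pre-existing database user
ALTER ROLE jobs_reader ADD MEMBER [job_monitor_user];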

IMPORTANT
Consider the security implications before granting access to the Job database as a database administrator. A malicious
user with permissions to create or edit jobs could create or edit a job that uses a stored credential to connect to a
database under the malicious user's control, which could allow the malicious user to determine the credential's password.
Target group
A target group defines the set of databases a job step will execute on. A target group can contain any number
and combination of the following:
Logical SQL server - if a server is specified, all databases that exist in the server at the time of the job
execution are part of the group. The master database credential must be provided so that the group can be
enumerated and updated prior to job execution. For more information on logical servers, see What is a
server in Azure SQL Database and Azure Synapse Analytics?.
Elastic pool - if an elastic pool is specified, all databases that are in the elastic pool at the time of the job
execution are part of the group. As for a server, the master database credential must be provided so that the
group can be updated prior to the job execution.
Single database - specify one or more individual databases to be part of the group.
Shard map - databases of a shard map.

TIP
At the moment of job execution, dynamic enumeration re-evaluates the set of databases in target groups that include
servers or pools. Dynamic enumeration ensures that jobs run across all databases that exist in the server or
pool at the time of job execution. Re-evaluating the list of databases at runtime is specifically useful for scenarios
where pool or server membership changes frequently.

Pools and single databases can be specified as included or excluded from the group. This enables creating a
target group with any combination of databases. For example, you can add a server to a target group, but
exclude specific databases in an elastic pool (or exclude an entire pool).
A target group can include databases in multiple subscriptions, and across multiple regions. Note that cross-
region executions have higher latency than executions within the same region.
The following examples show how different target group definitions are dynamically enumerated at the
moment of job execution to determine which databases the job will run:

Example 1 shows a target group that consists of a list of individual databases. When a job step is executed
using this target group, the job step's action will be executed in each of those databases.
Example 2 shows a target group that contains a server as a target. When a job step is executed using this target
group, the server is dynamically enumerated to determine the list of databases that are currently in the server.
The job step's action will be executed in each of those databases.
Example 3 shows a similar target group as Example 2, but an individual database is specifically excluded. The
job step's action will not be executed in the excluded database.
Example 4 shows a target group that contains an elastic pool as a target. Similar to Example 2, the pool will be
dynamically enumerated at job run time to determine the list of databases in the pool.

Example 5 and Example 6 show advanced scenarios where servers, elastic pools, and databases can be
combined using include and exclude rules.
Example 7 shows that the shards in a shard map can also be evaluated at job run time.

NOTE
The Job database itself can be the target of a job. In this scenario, the Job database is treated just like any other target
database. The job user must be created and granted sufficient permissions in the Job database, and the database scoped
credential for the job user must also exist in the Job database, just like it does for any other target database.

Elastic jobs and job steps


A job is a unit of work that is executed on a schedule or as a one-time job. A job consists of one or more job
steps.
Each job step specifies a T-SQL script to execute, one or more target groups to run the T-SQL script against, and
the credentials the job agent needs to connect to the target database. Each job step has customizable timeout
and retry policies, and can optionally specify output parameters.
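
As a minimal T-SQL sketch (run in the Job database), the following adds a job step with a custom timeout and retry policy. The job, credential, and target group names are assumed to exist already, and the command text is a placeholder.

EXEC jobs.sp_add_jobstep
    @job_name = 'MyJob',                      -- assumed to exist
    @command = N'EXEC dbo.DoMaintenance;',    -- hypothetical T-SQL to run on each target
    @credential_name = 'job_credential',
    @target_group_name = 'ServerGroup1',
    @step_timeout_seconds = 3600,             -- terminate the step after 1 hour
    @retry_attempts = 5,                      -- retry up to 5 times on failure
    @initial_retry_interval_seconds = 10;     -- first retry after 10 seconds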
Job output
The outcome of a job's steps on each target database is recorded in detail, and script output can be captured to
a specified table. You can specify a database to save any data returned from a job.
Job history
View Elastic Job execution history in the Job database by querying the table jobs.job_executions. A system
cleanup job purges execution history that is older than 45 days. To remove history less than 45 days old, call the
sp_purge_jobhistory stored procedure in the Job database.

Job status
You can monitor Elastic Job executions in the Job database by querying the table jobs.job_executions.
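
For example, the following T-SQL (run in the Job database) lists recent top-level executions and removes history for one job older than a given date; the job name and date are illustrative.

-- Recent top-level job executions, newest first
SELECT * FROM jobs.job_executions
WHERE step_id IS NULL
ORDER BY start_time DESC;

-- Remove history for a specific job older than a given date
EXEC jobs.sp_purge_jobhistory @job_name = 'MyJob', @oldest_date = '2022-01-01';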
Agent performance, capacity, and limitations
Elastic Jobs use minimal compute resources while waiting for long-running jobs to complete.
Depending on the size of the target group of databases and the desired execution time for a job (number of
concurrent workers), the agent requires different amounts of compute and performance of the Job database
(the more targets and the higher number of jobs, the higher the amount of compute required).
Currently, the limit is 100 concurrent jobs.
Prevent jobs from reducing target database performance
To ensure resources aren't overburdened when running jobs against databases in a SQL elastic pool, jobs can be
configured to limit the number of databases a job can run against at the same time.

Next steps
How to create and manage elastic jobs
Create and manage Elastic Jobs using PowerShell
Create and manage Elastic Jobs using Transact-SQL (T-SQL)
Create, configure, and manage elastic jobs (preview)

APPLIES TO: Azure SQL Database


In this article, you will learn how to create, configure, and manage elastic jobs.
If you have not used Elastic jobs, learn more about the job automation concepts in Azure SQL Database.

Create and configure the agent


1. Create or identify an empty S0 or higher database. This database will be used as the Job database during
Elastic Job agent creation.
2. Create an Elastic Job agent in the portal or with PowerShell.

Create, run, and manage jobs


1. Create a credential for job execution in the Job database using PowerShell or T-SQL.
2. Define the target group (the databases you want to run the job against) using PowerShell or T-SQL.
3. Create the job user (or role) in each database the job will run against (add the user or role to each database in
the group). For an example, see the PowerShell tutorial.
4. Create a job using PowerShell or T-SQL.
5. Add job steps using PowerShell or T-SQL.
6. Run a job using PowerShell or T-SQL.
7. Monitor job execution status using the portal, PowerShell or T-SQL.
Credentials for running jobs
Jobs use database scoped credentials to connect to the databases specified by the target group upon execution.
If a target group contains servers or pools, these database scoped credentials are used to connect to the master
database to enumerate the available databases.
Setting up the proper credentials to run a job can be a little confusing, so keep the following points in mind:
The database scoped credentials must be created in the Job database.
All target databases must have a login with sufficient permissions for the job to complete
successfully (jobuser in the diagram below).
Credentials can be reused across jobs, and the credential passwords are encrypted and secured from users
who have read-only access to job objects.
The following image is designed to assist in understanding and setting up the proper job credentials.
Remember to create the user in every database (all target user dbs) the job needs to run against.
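
As a hedged T-SQL sketch of that setup, a login is created once in the master database of the target server, and a matching user is created in every target database; the names, password, and granted permissions are placeholders that depend on what your job actually does.

-- In master on the target server
CREATE LOGIN jobuser WITH PASSWORD = '<strong_password_here>';

-- In every target database the job runs against
CREATE USER jobuser FROM LOGIN jobuser;
-- Grant only the permissions the job's T-SQL needs, for example:
GRANT ALTER ON SCHEMA::dbo TO jobuser;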

Security best practices


A few best practice considerations for working with Elastic Jobs:
Limit usage of the APIs to trusted individuals.
Credentials should have the least privileges necessary to perform the job step. For more information, see
Authorization and Permissions.
When using a server and/or pool target group member, it is highly recommended to create a separate credential,
with rights on the master database to view/list databases, that is used to expand the database lists of the
server(s) and/or pool(s) prior to job execution (see the sketch below).
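
A minimal sketch of that separation, with placeholder names: one identity exists only to enumerate databases from master, and a matching database scoped credential is created in the Job database (a database master key must already exist there).

-- In master on each target server: identity used only to list databases
CREATE LOGIN refresh_login WITH PASSWORD = '<strong_password_here>';
CREATE USER refresh_user FROM LOGIN refresh_login;

-- In the Job database: the matching database scoped credential
CREATE DATABASE SCOPED CREDENTIAL refresh_credential
    WITH IDENTITY = 'refresh_login', SECRET = '<strong_password_here>';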

Agent performance, capacity, and limitations


Elastic Jobs use minimal compute resources while waiting for long-running jobs to complete.
Depending on the size of the target group of databases and the desired execution time for a job (number of
concurrent workers), the agent requires different amounts of compute and performance of the Job database
(the more targets and the higher number of jobs, the higher the amount of compute required).
Prevent jobs from reducing target database performance
To ensure resources aren't overburdened when running jobs against databases in a SQL elastic pool, jobs can be
configured to limit the number of databases a job can run against at the same time.
Set the number of concurrent databases a job runs on by setting the sp_add_jobstep stored procedure's
@max_parallelism parameter in T-SQL.
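
As a hedged sketch, assuming the job, credential, and target group already exist (the command text is a placeholder):

-- Limit the step to run on at most 4 databases per elastic pool at a time
EXEC jobs.sp_add_jobstep
    @job_name = 'MyJob',
    @command = N'EXEC dbo.DoMaintenance;',
    @credential_name = 'job_credential',
    @target_group_name = 'PoolGroup',
    @max_parallelism = 4;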

Known limitations
These are the current limitations to the Elastic Jobs service. We're actively working to remove as many of these
limitations as possible.

Issue: The Elastic Job agent needs to be recreated and started in the new region after a failover/move to a new Azure region.
Description: The Elastic Jobs service stores all its job agent and job metadata in the jobs database. Any failover or move of Azure resources to a new Azure region will also move the jobs database, job agent and jobs metadata to the new Azure region. However, the Elastic Job agent is a compute-only resource and needs to be explicitly re-created and started in the new region before jobs will start executing again in the new region. Once started, the Elastic Job agent will resume executing jobs in the new region as per the previously defined job schedule.

Issue: Concurrent jobs limit.
Description: Currently, the preview is limited to 100 concurrent jobs.

Issue: Excessive audit logs from the Jobs database.
Description: The Elastic Job agent operates by constantly polling the Job database to check for the arrival of new jobs and other CRUD operations. If auditing is enabled on the server that houses a Jobs database, a large amount of audit logs may be generated by the Jobs database. This can be mitigated by filtering out these audit logs using the Set-AzSqlServerAudit command with a predicate expression.

For example:
Set-AzSqlServerAudit -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -
BlobStorageTargetState Enabled -StorageAccountResourceId "/subscriptions/7fe3301d-31d3-46
211a890ba6e3/resourceGroups/resourcegroup01/providers/Microsoft.Storage/storageAccounts/m
-PredicateExpression "database_principal_name <> '##MS_JobAccount##'"

This command will only filter out Job Agent to Jobs database audit logs, not Job Agent to any target database audit logs.

Issue: Private endpoints are not supported.
Description: Databases and elastic pools targeted by Elastic Jobs should have the "Allow Azure services and resources to access this server" setting enabled at their server level in the current preview. If this setting is not enabled, the Elastic Job agent will not be able to execute jobs against those targets.

Best practices for creating jobs


Consider the following best practices when working with Elastic Database jobs:
Idempotent scripts
A job's T-SQL scripts must be idempotent. Idempotent means that if the script succeeds, and it is run again, the
same result occurs. A script may fail due to transient network issues. In that case, the job will automatically retry
running the script a preset number of times before desisting. An idempotent script has the same result even if
it has been successfully run twice (or more).
A simple tactic is to test for the existence of an object before creating it. A hypothetical example is shown below:

IF NOT EXISTS (SELECT * FROM sys.objects WHERE [name] = N'some_object')
    print 'Object does not exist'
    -- Create the object
ELSE
    print 'Object exists'
    -- If it exists, drop the object before recreating it.

Similarly, a script must be able to execute successfully by logically testing for and countering any conditions it
finds.

Next steps
Create and manage Elastic Jobs using PowerShell
Create and manage Elastic Jobs using Transact-SQL (T-SQL)
Create an Elastic Job agent using PowerShell
(preview)

APPLIES TO: Azure SQL Database


Elastic jobs (preview) enable the running of one or more Transact-SQL (T-SQL) scripts in parallel across many
databases.
In this tutorial, you learn the steps required to run a query across multiple databases:
Create an Elastic Job agent
Create job credentials so that jobs can execute scripts on their targets
Define the targets (servers, elastic pools, databases, shard maps) you want to run the job against
Create database scoped credentials in the target databases so the agent can connect and execute jobs
Create a job
Add job steps to a job
Start execution of a job
Monitor a job

Prerequisites
The upgraded version of Elastic Database jobs has a new set of PowerShell cmdlets for use during migration.
These new cmdlets transfer all of your existing job credentials, targets (including databases, servers, custom
collections), job triggers, job schedules, job contents, and jobs over to a new Elastic Job agent.
Install the latest Elastic Jobs cmdlets
If you don't already have an Azure subscription, create a free account before you begin.
Install the Az.Sql module to get the latest Elastic Job cmdlets. Run the following commands in PowerShell with
administrative access.

# Installs the latest PackageManagement and PowerShellGet packages
Find-Package PackageManagement | Install-Package -Force
Find-Package PowerShellGet | Install-Package -Force

# Restart your PowerShell session with administrative access

# Install and import the Az.Sql module, then confirm
Install-Module -Name Az.Sql
Import-Module Az.Sql
Get-Module Az.Sql

In addition to the Az.Sql module, this tutorial also requires the SqlServer PowerShell module. For details, see
Install SQL Server PowerShell module.

Create required resources


Creating an Elastic Job agent requires a database (S0 or higher) for use as the Job database.
The script below creates a new resource group, server, and database for use as the Job database. The second
script creates a second server with two blank databases to execute jobs against.
Elastic Jobs has no specific naming requirements so you can use whatever naming conventions you want, as
long as they conform to any Azure requirements.

# sign in to Azure account


Connect-AzAccount

# create a resource group


Write-Output "Creating a resource group..."
$resourceGroupName = Read-Host "Please enter a resource group name"
$location = Read-Host "Please enter an Azure Region"
$rg = New-AzResourceGroup -Name $resourceGroupName -Location $location
$rg

# create a server
Write-Output "Creating a server..."
$agentServerName = Read-Host "Please enter an agent server name"
$agentServerName = $agentServerName + "-" + [guid]::NewGuid()
$adminLogin = Read-Host "Please enter the server admin name"
$adminPassword = Read-Host "Please enter the server admin password"
$adminPasswordSecure = ConvertTo-SecureString -String $AdminPassword -AsPlainText -Force
$adminCred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList $adminLogin,
$adminPasswordSecure
$agentServer = New-AzSqlServer -ResourceGroupName $resourceGroupName -Location $location `
-ServerName $agentServerName -ServerVersion "12.0" -SqlAdministratorCredentials ($adminCred)

# set server firewall rules to allow all Azure IPs


Write-Output "Creating a server firewall rule..."
$agentServer | New-AzSqlServerFirewallRule -AllowAllAzureIPs
$agentServer

# create the job database


Write-Output "Creating a blank database to be used as the Job Database..."
$jobDatabaseName = "JobDatabase"
$jobDatabase = New-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $agentServerName `
    -DatabaseName $jobDatabaseName -RequestedServiceObjectiveName "S0"
$jobDatabase

# create a target server and sample databases - uses the same credentials
Write-Output "Creating target server..."
$targetServerName = Read-Host "Please enter a target server name"
$targetServerName = $targetServerName + "-" + [guid]::NewGuid()
$targetServer = New-AzSqlServer -ResourceGroupName $resourceGroupName -Location $location `
-ServerName $targetServerName -ServerVersion "12.0" -SqlAdministratorCredentials ($adminCred)

# set target server firewall rules to allow all Azure IPs


$targetServer | New-AzSqlServerFirewallRule -AllowAllAzureIPs
$targetServer | New-AzSqlServerFirewallRule -StartIpAddress 0.0.0.0 -EndIpAddress 255.255.255.255 `
    -FirewallRuleName AllowAll
$targetServer

# create sample databases to execute jobs against


$db1 = New-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $targetServerName `
    -DatabaseName "database1"
$db1
$db2 = New-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $targetServerName `
    -DatabaseName "database2"
$db2

Create the Elastic Job agent


An Elastic Job agent is an Azure resource for creating, running, and managing jobs. The agent executes jobs
based on a schedule or as a one-time job.
The New-AzSqlElasticJobAgent cmdlet requires a database in Azure SQL Database to already exist, so the
resourceGroupName, serverName, and databaseName parameters must all point to existing resources.

Write-Output "Creating job agent..."


$agentName = Read-Host "Please enter a name for your new Elastic Job agent"
$jobAgent = $jobDatabase | New-AzSqlElasticJobAgent -Name $agentName
$jobAgent

Create the job credentials


Jobs use database scoped credentials to connect to the target databases specified by the target group upon
execution and execute scripts. These database scoped credentials are also used to connect to the master
database to enumerate all the databases in a server or an elastic pool, when either of these are used as the
target group member type.
The database scoped credentials must be created in the job database. All target databases must have a login
with sufficient permissions for the job to complete successfully.

In addition to the credentials in the image, note the addition of the GRANT commands in the following script.
These permissions are required for the script we chose for this example job. Because the example creates a new
table in the targeted databases, each target db needs the proper permissions to successfully run.
To create the required job credentials (in the job database), run the following script:
# in the master database (target server)
# create the master user login, master user, and job user login
$params = @{
'database' = 'master'
'serverInstance' = $targetServer.ServerName + '.database.windows.net'
'username' = $adminLogin
'password' = $adminPassword
'outputSqlErrors' = $true
'query' = 'CREATE LOGIN masteruser WITH PASSWORD=''password!123'''
}
Invoke-SqlCmd @params
$params.query = "CREATE USER masteruser FROM LOGIN masteruser"
Invoke-SqlCmd @params
$params.query = 'CREATE LOGIN jobuser WITH PASSWORD=''password!123'''
Invoke-SqlCmd @params

# for each target database


# create the jobuser from jobuser login and check permission for script execution
$targetDatabases = @( $db1.DatabaseName, $Db2.DatabaseName )
$createJobUserScript = "CREATE USER jobuser FROM LOGIN jobuser"
$grantAlterSchemaScript = "GRANT ALTER ON SCHEMA::dbo TO jobuser"
$grantCreateScript = "GRANT CREATE TABLE TO jobuser"

$targetDatabases | % {
$params.database = $_
$params.query = $createJobUserScript
Invoke-SqlCmd @params
$params.query = $grantAlterSchemaScript
Invoke-SqlCmd @params
$params.query = $grantCreateScript
Invoke-SqlCmd @params
}

# create job credential in Job database for master user


Write-Output "Creating job credentials..."
$loginPasswordSecure = (ConvertTo-SecureString -String 'password!123' -AsPlainText -Force)

$masterCred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList "masteruser", $loginPasswordSecure
$masterCred = $jobAgent | New-AzSqlElasticJobCredential -Name "masteruser" -Credential $masterCred

$jobCred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList "jobuser", $loginPasswordSecure
$jobCred = $jobAgent | New-AzSqlElasticJobCredential -Name "jobuser" -Credential $jobCred

Define the target databases to run the job against


A target group defines the set of one or more databases a job step will execute on.
The following snippet creates two target groups: serverGroup, and serverGroupExcludingDb2. serverGroup
targets all databases that exist on the server at the time of execution, and serverGroupExcludingDb2 targets all
databases on the server, except targetDb2:
Write-Output "Creating test target groups..."
# create ServerGroup target group
$serverGroup = $jobAgent | New-AzSqlElasticJobTargetGroup -Name 'ServerGroup'
$serverGroup | Add-AzSqlElasticJobTarget -ServerName $targetServerName -RefreshCredentialName $masterCred.CredentialName

# create ServerGroup with an exclusion of db2
$serverGroupExcludingDb2 = $jobAgent | New-AzSqlElasticJobTargetGroup -Name 'ServerGroupExcludingDb2'
$serverGroupExcludingDb2 | Add-AzSqlElasticJobTarget -ServerName $targetServerName -RefreshCredentialName $masterCred.CredentialName
$serverGroupExcludingDb2 | Add-AzSqlElasticJobTarget -ServerName $targetServerName -Database $db2.DatabaseName -Exclude

Create a job and steps


This example defines a job and two job steps for the job to run. The first job step (step1) creates a new table
(Step1Table) in every database in target group ServerGroup. The second job step (step2) creates a new table
(Step2Table) in every database except for TargetDb2, because the target group defined previously specified to
exclude it.

Write-Output "Creating a new job..."


$jobName = "Job1"
$job = $jobAgent | New-AzSqlElasticJob -Name $jobName -RunOnce
$job

Write-Output "Creating job steps..."


$sqlText1 = "IF NOT EXISTS (SELECT * FROM sys.tables WHERE object_id = object_id('Step1Table')) CREATE TABLE [dbo].[Step1Table]([TestId] [int] NOT NULL);"
$sqlText2 = "IF NOT EXISTS (SELECT * FROM sys.tables WHERE object_id = object_id('Step2Table')) CREATE TABLE [dbo].[Step2Table]([TestId] [int] NOT NULL);"

$job | Add-AzSqlElasticJobStep -Name "step1" -TargetGroupName $serverGroup.TargetGroupName -CredentialName $jobCred.CredentialName -CommandText $sqlText1
$job | Add-AzSqlElasticJobStep -Name "step2" -TargetGroupName $serverGroupExcludingDb2.TargetGroupName -CredentialName $jobCred.CredentialName -CommandText $sqlText2

Run the job


To start the job immediately, run the following command:

Write-Output "Start a new execution of the job..."


$jobExecution = $job | Start-AzSqlElasticJob
$jobExecution

After successful completion you should see two new tables in TargetDb1, and only one new table in TargetDb2:
You can also schedule the job to run later. To schedule a job to run at a specific time, run the following command:

# run every hour starting from now


$job | Set-AzSqlElasticJob -IntervalType Hour -IntervalCount 1 -StartTime (Get-Date) -Enable

Monitor status of job executions


The following snippets get job execution details:

# get the latest 10 executions run


$jobAgent | Get-AzSqlElasticJobExecution -Count 10

# get the job step execution details


$jobExecution | Get-AzSqlElasticJobStepExecution

# get the job target execution details


$jobExecution | Get-AzSqlElasticJobTargetExecution -Count 2

The following table lists the possible job execution states:

Created: The job execution was just created and is not yet in progress.

InProgress: The job execution is currently in progress.

WaitingForRetry: The job execution wasn't able to complete its action and is waiting to retry.

Succeeded: The job execution has completed successfully.

SucceededWithSkipped: The job execution has completed successfully, but some of its children were skipped.

Failed: The job execution has failed and exhausted its retries.

TimedOut: The job execution has timed out.

Canceled: The job execution was canceled.

Skipped: The job execution was skipped because another execution of the same job step was already running on the same target.

WaitingForChildJobExecutions: The job execution is waiting for its child executions to complete.

Clean up resources
Delete the Azure resources created in this tutorial by deleting the resource group.

TIP
If you plan to continue to work with these jobs, do not clean up the resources created in this article.

Remove-AzResourceGroup -ResourceGroupName $resourceGroupName

Next steps
In this tutorial, you ran a Transact-SQL script against a set of databases. You learned how to do the following
tasks:
Create an Elastic Job agent
Create job credentials so that jobs can execute scripts on their targets
Define the targets (servers, elastic pools, databases, shard maps) you want to run the job against
Create database scoped credentials in the target databases so the agent can connect and execute jobs
Create a job
Add a job step to the job
Start an execution of the job
Monitor the job
Manage Elastic Jobs using Transact-SQL
Use Transact-SQL (T-SQL) to create and manage
Elastic Database Jobs (preview)

APPLIES TO: Azure SQL Database


This article provides many example scenarios to get started working with Elastic Jobs using T-SQL.
The examples use the stored procedures and views available in the job database.
Transact-SQL (T-SQL) is used to create, configure, execute, and manage jobs. Creating the Elastic Job agent is not
supported in T-SQL, so you must first create an Elastic Job agent using the portal, or PowerShell.

Create a credential for job execution


The credential is used to connect to your target databases for script execution. The credential needs appropriate
permissions, on the databases specified by the target group, to successfully execute the script. When using a
logical SQL server and/or pool target group member, it is highly recommended to also create a separate refresh
credential that is used to expand the list of databases in the server and/or pool at the time of job execution. The
database scoped credentials are created in the job agent database. The same identity must be used to create a
login, and a user from that login, and to grant that login the required database permissions on the target databases.

--Connect to the new job database specified when creating the Elastic Job agent

-- Create a database master key if one does not already exist, using your own password.
CREATE MASTER KEY ENCRYPTION BY PASSWORD='<EnterStrongPasswordHere>';

-- Create two database scoped credentials.


-- The credential to connect to the Azure SQL logical server, to execute jobs
CREATE DATABASE SCOPED CREDENTIAL job_credential WITH IDENTITY = 'job_credential',
SECRET = '<EnterStrongPasswordHere>';
GO
-- The credential to connect to the Azure SQL logical server, to refresh the database metadata in server
CREATE DATABASE SCOPED CREDENTIAL refresh_credential WITH IDENTITY = 'refresh_credential',
SECRET = '<EnterStrongPasswordHere>';
GO

Create a target group (servers)


The following example shows how to execute a job against all databases in a server.
Connect to the job database and run the following command:
-- Connect to the job database specified when creating the job agent

-- Add a target group containing server(s)


EXEC jobs.sp_add_target_group 'ServerGroup1';

-- Add a server target member


EXEC jobs.sp_add_target_group_member
@target_group_name = 'ServerGroup1',
@target_type = 'SqlServer',
@refresh_credential_name = 'refresh_credential', --credential required to refresh the databases in a server
@server_name = 'server1.database.windows.net';

--View the recently created target group and target group members
SELECT * FROM jobs.target_groups WHERE target_group_name='ServerGroup1';
SELECT * FROM jobs.target_group_members WHERE target_group_name='ServerGroup1';

Exclude an individual database


The following example shows how to execute a job against all databases in a server, except for the database
named MappingDB.
Connect to the job database and run the following command:

--Connect to the job database specified when creating the job agent

-- Add a target group containing server(s)


EXEC [jobs].sp_add_target_group N'ServerGroup';
GO

-- Add a server target member


EXEC [jobs].sp_add_target_group_member
@target_group_name = N'ServerGroup',
@target_type = N'SqlServer',
@refresh_credential_name = N'refresh_credential', --credential required to refresh the databases in a server
@server_name = N'London.database.windows.net';
GO

-- Add a server target member


EXEC [jobs].sp_add_target_group_member
@target_group_name = N'ServerGroup',
@target_type = N'SqlServer',
@refresh_credential_name = N'refresh_credential', --credential required to refresh the databases in a server
@server_name = 'server2.database.windows.net';
GO

--Exclude a database target member from the server target group


EXEC [jobs].sp_add_target_group_member
@target_group_name = N'ServerGroup',
@membership_type = N'Exclude',
@target_type = N'SqlDatabase',
@server_name = N'server1.database.windows.net',
@database_name = N'MappingDB';
GO

--View the recently created target group and target group members
SELECT * FROM [jobs].target_groups WHERE target_group_name = N'ServerGroup';
SELECT * FROM [jobs].target_group_members WHERE target_group_name = N'ServerGroup';

Create a target group (pools)


The following example shows how to target all the databases in one or more elastic pools.
Connect to the job database and run the following command:
--Connect to the job database specified when creating the job agent

-- Add a target group containing pool(s)


EXEC jobs.sp_add_target_group 'PoolGroup';

-- Add an elastic pool(s) target member


EXEC jobs.sp_add_target_group_member
@target_group_name = 'PoolGroup',
@target_type = 'SqlElasticPool',
@refresh_credential_name = 'refresh_credential', --credential required to refresh the databases in a server
@server_name = 'server1.database.windows.net',
@elastic_pool_name = 'ElasticPool-1';

-- View the recently created target group and target group members
SELECT * FROM jobs.target_groups WHERE target_group_name = N'PoolGroup';
SELECT * FROM jobs.target_group_members WHERE target_group_name = N'PoolGroup';

Deploy new schema to many databases


The following example shows how to deploy new schema to all databases.
Connect to the job database and run the following command:

--Connect to the job database specified when creating the job agent

--Add job for create table


EXEC jobs.sp_add_job @job_name = 'CreateTableTest', @description = 'Create Table Test';

-- Add job step for create table


EXEC jobs.sp_add_jobstep @job_name = 'CreateTableTest',
@command = N'IF NOT EXISTS (SELECT * FROM sys.tables WHERE object_id = object_id(''Test''))
CREATE TABLE [dbo].[Test]([TestId] [int] NOT NULL);',
@credential_name = 'job_credential',
@target_group_name = 'PoolGroup';

Data collection using built-in parameters


In many data collection scenarios, it can be useful to include some of these scripting variables to help post-
process the results of the job.
$(job_name)
$(job_id)
$(job_version)
$(step_id)
$(step_name)
$(job_execution_id)
$(job_execution_create_time)
$(target_group_name)
For example, to group all results from the same job execution together, use $(job_execution_id) as shown in
the following command:

@command= N' SELECT DB_NAME() DatabaseName, $(job_execution_id) AS job_execution_id, * FROM


sys.dm_db_resource_stats WHERE end_time > DATEADD(mi, -20, GETDATE());'

Monitor database performance


The following example creates a new job to collect performance data from multiple databases.
By default, the job agent will create the output table to store returned results. Therefore, the database principal
associated with the output credential must at a minimum have the following permissions: CREATE TABLE on the
database, ALTER, SELECT, INSERT, DELETE on the output table or its schema, and SELECT on the sys.indexes
catalog view.
If you want to manually create the table ahead of time, then it needs to have the following properties:
1. Columns with the correct name and data types for the result set.
2. Additional column for internal_execution_id with the data type of uniqueidentifier.
3. A nonclustered index named IX_<TableName>_Internal_Execution_ID on the internal_execution_id column.
4. All permissions listed above except for CREATE TABLE permission on the database.
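
As a minimal sketch of such a pre-created output table (the table and index names are illustrative, and the column list is abbreviated; the result set of your own command determines the actual columns):

CREATE TABLE dbo.resultstable
(
    DatabaseName nvarchar(128),
    job_execution_id uniqueidentifier,
    end_time datetime2,                        -- remaining result-set columns omitted in this sketch
    internal_execution_id uniqueidentifier     -- required additional column
);

CREATE NONCLUSTERED INDEX IX_resultstable_Internal_Execution_ID
    ON dbo.resultstable (internal_execution_id);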

Connect to the job database and run the following commands:


--Connect to the job database specified when creating the job agent

-- Add a job to collect perf results


EXEC jobs.sp_add_job @job_name ='ResultsJob', @description='Collection Performance data from all customers'

-- Add a job step w/ schedule to collect results


EXEC jobs.sp_add_jobstep
@job_name = 'ResultsJob',
@command = N' SELECT DB_NAME() DatabaseName, $(job_execution_id) AS job_execution_id, * FROM
sys.dm_db_resource_stats WHERE end_time > DATEADD(mi, -20, GETDATE());',
@credential_name = 'job_credential',
@target_group_name = 'PoolGroup',
@output_type = 'SqlDatabase',
@output_credential_name = 'job_credential',
@output_server_name = 'server1.database.windows.net',
@output_database_name = '<resultsdb>',
@output_table_name = '<resultstable>';

--Create a job to monitor pool performance

--Connect to the job database specified when creating the job agent

-- Add a target group containing Elastic Job database


EXEC jobs.sp_add_target_group 'ElasticJobGroup';

-- Add a server target member


EXEC jobs.sp_add_target_group_member
@target_group_name = 'ElasticJobGroup',
@target_type = 'SqlDatabase',
@server_name = 'server1.database.windows.net',
@database_name = 'master';

-- Add a job to collect perf results


EXEC jobs.sp_add_job
@job_name = 'ResultsPoolsJob',
@description = 'Demo: Collection Performance data from all pools',
@schedule_interval_type = 'Minutes',
@schedule_interval_count = 15;

-- Add a job step w/ schedule to collect results


EXEC jobs.sp_add_jobstep
@job_name='ResultsPoolsJob',
@command=N'declare @now datetime
DECLARE @startTime datetime
DECLARE @endTime datetime
DECLARE @poolLagMinutes datetime
DECLARE @poolStartTime datetime
DECLARE @poolEndTime datetime
SELECT @now = getutcdate ()
SELECT @startTime = dateadd(minute, -15, @now)
SELECT @endTime = @now
SELECT @poolStartTime = dateadd(minute, -30, @startTime)
SELECT @poolEndTime = dateadd(minute, -30, @endTime)

SELECT elastic_pool_name , end_time, elastic_pool_dtu_limit, avg_cpu_percent, avg_data_io_percent,


avg_log_write_percent, max_worker_percent, max_session_percent,
avg_storage_percent, elastic_pool_storage_limit_mb FROM sys.elastic_pool_resource_stats
WHERE end_time > @poolStartTime and end_time <= @poolEndTime;
',
@credential_name = 'job_credential',
@target_group_name = 'ElasticJobGroup',
@output_type = 'SqlDatabase',
@output_credential_name = 'job_credential',
@output_server_name = 'server1.database.windows.net',
@output_database_name = 'resultsdb',
@output_table_name = 'resultstable';
View job definitions
The following example shows how to view current job definitions.
Connect to the job database and run the following command:

--Connect to the job database specified when creating the job agent

-- View all jobs


SELECT * FROM jobs.jobs;

-- View the steps of the current version of all jobs


SELECT js.* FROM jobs.jobsteps js
JOIN jobs.jobs j
ON j.job_id = js.job_id AND j.job_version = js.job_version;

-- View the steps of all versions of all jobs


SELECT * FROM jobs.jobsteps;

Begin unplanned execution of a job


The following example shows how to start a job immediately.
Connect to the job database and run the following command:

--Connect to the job database specified when creating the job agent

-- Execute the latest version of a job


EXEC jobs.sp_start_job 'CreateTableTest';

-- Execute the latest version of a job and receive the execution id


declare @je uniqueidentifier;
exec jobs.sp_start_job 'CreateTableTest', @job_execution_id = @je output;
select @je;

select * from jobs.job_executions where job_execution_id = @je;

-- Execute a specific version of a job (e.g. version 1)


exec jobs.sp_start_job 'CreateTableTest', 1;

Schedule execution of a job


The following example shows how to schedule a job for future execution.
Connect to the job database and run the following command:

--Connect to the job database specified when creating the job agent

EXEC jobs.sp_update_job
@job_name = 'ResultsJob',
@enabled=1,
@schedule_interval_type = 'Minutes',
@schedule_interval_count = 15;

Monitor job execution status


The following example shows how to view execution status details for all jobs.
Connect to the job database and run the following command:
--Connect to the job database specified when creating the job agent

--View top-level execution status for the job named 'ResultsPoolsJob'


SELECT * FROM jobs.job_executions
WHERE job_name = 'ResultsPoolsJob' and step_id IS NULL
ORDER BY start_time DESC;

--View all top-level execution status for all jobs


SELECT * FROM jobs.job_executions WHERE step_id IS NULL
ORDER BY start_time DESC;

--View all execution statuses for job named 'ResultsPoolsJob'


SELECT * FROM jobs.job_executions
WHERE job_name = 'ResultsPoolsJob'
ORDER BY start_time DESC;

-- View all active executions


SELECT * FROM jobs.job_executions
WHERE is_active = 1
ORDER BY start_time DESC;

Cancel a job
The following example shows how to cancel a job.
Connect to the job database and run the following command:

--Connect to the job database specified when creating the job agent

-- View all active executions to determine job execution id


SELECT * FROM jobs.job_executions
WHERE is_active = 1 AND job_name = 'ResultPoolsJob'
ORDER BY start_time DESC;
GO

-- Cancel job execution with the specified job execution id


EXEC jobs.sp_stop_job '01234567-89ab-cdef-0123-456789abcdef';

Delete old job history


The following example shows how to delete job history prior to a specific date.
Connect to the job database and run the following command:

--Connect to the job database specified when creating the job agent

-- Delete history of a specific job's executions older than the specified date
EXEC jobs.sp_purge_jobhistory @job_name='ResultPoolsJob', @oldest_date='2016-07-01 00:00:00';

--Note: job history is automatically deleted if it is >45 days old

Delete a job and all its job history


The following example shows how to delete a job and all related job history.
Connect to the job database and run the following command:
--Connect to the job database specified when creating the job agent

EXEC jobs.sp_delete_job @job_name='ResultsPoolsJob';

--Note: job history is automatically deleted if it is >45 days old

Job stored procedures


The following stored procedures are in the jobs database.

sp_add_job: Adds a new job.

sp_update_job: Updates an existing job.

sp_delete_job: Deletes an existing job.

sp_add_jobstep: Adds a step to a job.

sp_update_jobstep: Updates a job step.

sp_delete_jobstep: Deletes a job step.

sp_start_job: Starts executing a job.

sp_stop_job: Stops a job execution.

sp_add_target_group: Adds a target group.

sp_delete_target_group: Deletes a target group.

sp_add_target_group_member: Adds a database or group of databases to a target group.

sp_delete_target_group_member: Removes a target group member from a target group.

sp_purge_jobhistory: Removes the history records for a job.

sp_add_job
Adds a new job.
Syntax

[jobs].sp_add_job [ @job_name = ] 'job_name'


[ , [ @description = ] 'description' ]
[ , [ @enabled = ] enabled ]
[ , [ @schedule_interval_type = ] schedule_interval_type ]
[ , [ @schedule_interval_count = ] schedule_interval_count ]
[ , [ @schedule_start_time = ] schedule_start_time ]
[ , [ @schedule_end_time = ] schedule_end_time ]
[ , [ @job_id = ] job_id OUTPUT ]

Arguments
[ @job_name = ] 'job_name'
The name of the job. The name must be unique and cannot contain the percent (%) character. job_name is
nvarchar(128), with no default.
[ @description = ] 'description'
The description of the job. description is nvarchar(512), with a default of NULL. If description is omitted, an
empty string is used.
[ @enabled = ] enabled
Whether the job's schedule is enabled. Enabled is bit, with a default of 0 (disabled). If 0, the job is not enabled
and does not run according to its schedule; however, it can be run manually. If 1, the job will run according to its
schedule, and can also be run manually.
[ @schedule_interval_type = ] schedule_interval_type
Value indicates when the job is to be executed. schedule_interval_type is nvarchar(50), with a default of Once,
and can be one of the following values:
'Once',
'Minutes',
'Hours',
'Days',
'Weeks',
'Months'
[ @schedule_interval_count = ] schedule_interval_count
Number of schedule_interval_count periods to occur between each execution of the job.
schedule_interval_count is int, with a default of 1. The value must be greater than or equal to 1.
[ @schedule_start_time = ] schedule_start_time
Date on which job execution can begin. schedule_start_time is DATETIME2, with the default of 0001-01-01
00:00:00.0000000.
[ @schedule_end_time = ] schedule_end_time
Date on which job execution can stop. schedule_end_time is DATETIME2, with the default of 9999-12-31
11:59:59.0000000.
[ @job_id = ] job_id OUTPUT
The job identification number assigned to the job if created successfully. job_id is an output variable of type
uniqueidentifier.
Return Code Values
0 (success) or 1 (failure)
Remarks
sp_add_job must be run from the job agent database specified when creating the job agent. After sp_add_job
has been executed to add a job, sp_add_jobstep can be used to add steps that perform the activities for the job.
The job's initial version number is 0, which will be incremented to 1 when the first step is added.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can grant the user membership in the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
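
For example, a minimal sketch that creates an enabled job with a weekly schedule (the job name and description are illustrative):

EXEC jobs.sp_add_job
    @job_name = 'WeeklyMaintenance',
    @description = 'Weekly maintenance across all targets',
    @enabled = 1,
    @schedule_interval_type = 'Weeks',
    @schedule_interval_count = 1;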
sp_update_job
Updates an existing job.
Syntax

[jobs].sp_update_job [ @job_name = ] 'job_name'


[ , [ @new_name = ] 'new_name' ]
[ , [ @description = ] 'description' ]
[ , [ @enabled = ] enabled ]
[ , [ @schedule_interval_type = ] schedule_interval_type ]
[ , [ @schedule_interval_count = ] schedule_interval_count ]
[ , [ @schedule_start_time = ] schedule_start_time ]
[ , [ @schedule_end_time = ] schedule_end_time ]

Arguments
[ @job_name = ] 'job_name'
The name of the job to be updated. job_name is nvarchar(128).
[ @new_name = ] 'new_name'
The new name of the job. new_name is nvarchar(128).
[ @description = ] 'description'
The description of the job. description is nvarchar(512).
[ @enabled = ] enabled
Specifies whether the job's schedule is enabled (1) or not enabled (0). Enabled is bit.
[ @schedule_interval_type = ] schedule_interval_type
Value indicates when the job is to be executed. schedule_interval_type is nvarchar(50) and can be one of the
following values:
'Once',
'Minutes',
'Hours',
'Days',
'Weeks',
'Months'
[ @schedule_interval_count = ] schedule_interval_count
Number of schedule_interval_count periods to occur between each execution of the job.
schedule_interval_count is int, with a default of 1. The value must be greater than or equal to 1.
[ @schedule_start_time = ] schedule_start_time
Date on which job execution can begin. schedule_start_time is DATETIME2, with the default of 0001-01-01
00:00:00.0000000.
[ @schedule_end_time= ] schedule_end_time
Date on which job execution can stop. schedule_end_time is DATETIME2, with the default of 9999-12-31
11:59:59.0000000.
Return Code Values
0 (success) or 1 (failure)
Remarks
After sp_add_job has been executed to add a job, sp_add_jobstep can be used to add steps that perform the
activities for the job. The job's initial version number is 0, which will be incremented to 1 when the first step is
added.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can grant the user membership in the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_delete_job
Deletes an existing job.
Syntax

[jobs].sp_delete_job [ @job_name = ] 'job_name'


[ , [ @force = ] force ]

Arguments
[ @job_name = ] 'job_name'
The name of the job to be deleted. job_name is nvarchar(128).
[ @force = ] force
Specifies whether to delete if the job has any executions in progress and cancel all in-progress executions (1) or
fail if any job executions are in progress (0). force is bit.
Return Code Values
0 (success) or 1 (failure)
Remarks
Job history is automatically deleted when a job is deleted.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can grant the user membership in the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_add_jobstep
Adds a step to a job.
Syntax
[jobs].sp_add_jobstep [ @job_name = ] 'job_name'
[ , [ @step_id = ] step_id ]
[ , [ @step_name = ] step_name ]
[ , [ @command_type = ] 'command_type' ]
[ , [ @command_source = ] 'command_source' ]
, [ @command = ] 'command'
, [ @credential_name = ] 'credential_name'
, [ @target_group_name = ] 'target_group_name'
[ , [ @initial_retry_interval_seconds = ] initial_retry_interval_seconds ]
[ , [ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds ]
[ , [ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier ]
[ , [ @retry_attempts = ] retry_attempts ]
[ , [ @step_timeout_seconds = ] step_timeout_seconds ]
[ , [ @output_type = ] 'output_type' ]
[ , [ @output_credential_name = ] 'output_credential_name' ]
[ , [ @output_subscription_id = ] 'output_subscription_id' ]
[ , [ @output_resource_group_name = ] 'output_resource_group_name' ]
[ , [ @output_server_name = ] 'output_server_name' ]
[ , [ @output_database_name = ] 'output_database_name' ]
[ , [ @output_schema_name = ] 'output_schema_name' ]
[ , [ @output_table_name = ] 'output_table_name' ]
[ , [ @job_version = ] job_version OUTPUT ]
[ , [ @max_parallelism = ] max_parallelism ]

Arguments
[ @job_name = ] 'job_name'
The name of the job to which to add the step. job_name is nvarchar(128).
[ @step_id = ] step_id
The sequence identification number for the job step. Step identification numbers start at 1 and increment
without gaps. If an existing step already has this ID, then that step and all following steps will have their IDs
incremented so that this new step can be inserted into the sequence. If not specified, the step_id will be
automatically assigned to the last in the sequence of steps. step_id is an int.
[ @step_name = ] step_name
The name of the step. Must be specified, except for the first step of a job that (for convenience) has a default
name of 'JobStep'. step_name is nvarchar(128).
[ @command_type = ] 'command_type'
The type of command that is executed by this jobstep. command_type is nvarchar(50), with a default value of
TSql, meaning that the value of the @command_type parameter is a T-SQL script.
If specified, the value must be TSql.
[ @command_source = ] 'command_source'
The type of location where the command is stored. command_source is nvarchar(50), with a default value of
Inline, meaning that the value of the @command_source parameter is the literal text of the command.
If specified, the value must be Inline.
[ @command = ] 'command'
The command must be valid T-SQL script and is then executed by this job step. command is nvarchar(max), with
a default of NULL.
[ @credential_name = ] 'credential_name'
The name of the database scoped credential stored in this job control database that is used to connect to each of
the target databases within the target group when this step is executed. credential_name is nvarchar(128).
[ @target_group_name = ] 'target_group_name'
The name of the target group that contains the target databases that the job step will be executed on.
target_group_name is nvarchar(128).
[ @initial_retry_interval_seconds = ] initial_retry_interval_seconds
The delay before the first retry attempt, if the job step fails on the initial execution attempt.
initial_retry_interval_seconds is int, with default value of 1.
[ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds
The maximum delay between retry attempts. If the delay between retries would grow larger than this value, it is
capped to this value instead. maximum_retry_interval_seconds is int, with default value of 120.
[ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier
The multiplier to apply to the retry delay if multiple job step execution attempts fail. For example, if the first retry
had a delay of 5 seconds and the backoff multiplier is 2.0, then the second retry will have a delay of 10 seconds
and the third retry will have a delay of 20 seconds. retry_interval_backoff_multiplier is real, with default value of
2.0.
[ @retry_attempts = ] retry_attempts
The number of times to retry execution if the initial attempt fails. For example, if the retry_attempts value is 10,
then there will be 1 initial attempt and 10 retry attempts, giving a total of 11 attempts. If the final retry attempt
fails, then the job execution will terminate with a lifecycle of Failed. retry_attempts is int, with default value of 10.
[ @step_timeout_seconds = ] step_timeout_seconds
The maximum amount of time allowed for the step to execute. If this time is exceeded, then the job execution
will terminate with a lifecycle of TimedOut. step_timeout_seconds is int, with default value of 43,200 seconds (12
hours).
[ @output_type = ] 'output_type'
If not null, the type of destination that the command's first result set is written to. output_type is nvarchar(50),
with a default of NULL.
If specified, the value must be SqlDatabase.
[ @output_credential_name = ] 'output_credential_name'
If not null, the name of the database scoped credential that is used to connect to the output destination
database. Must be specified if output_type equals SqlDatabase. output_credential_name is nvarchar(128), with a
default value of NULL.
[ @output_subscription_id = ] 'output_subscription_id'
Needs description.
[ @output_resource_group_name = ] 'output_resource_group_name'
Needs description.
[ @output_server_name = ] 'output_server_name'
If not null, the fully qualified DNS name of the server that contains the output destination database. Must be
specified if output_type equals SqlDatabase. output_server_name is nvarchar(256), with a default of NULL.
[ @output_database_name = ] 'output_database_name'
If not null, the name of the database that contains the output destination table. Must be specified if output_type
equals SqlDatabase. output_database_name is nvarchar(128), with a default of NULL.
[ @output_schema_name = ] 'output_schema_name'
If not null, the name of the SQL schema that contains the output destination table. If output_type equals
SqlDatabase, the default value is dbo. output_schema_name is nvarchar(128).
[ @output_table_name = ] 'output_table_name'
If not null, the name of the table that the command's first result set will be written to. If the table doesn't already
exist, it will be created based on the schema of the returning result-set. Must be specified if output_type equals
SqlDatabase. output_table_name is nvarchar(128), with a default value of NULL.
[ @job_version = ] job_version OUTPUT
Output parameter that will be assigned the new job version number. job_version is int.
[ @max_parallelism = ] max_parallelism
The maximum level of parallelism per elastic pool. If set, then the job step will be restricted to only run on a
maximum of that many databases per elastic pool. This applies to each elastic pool that is either directly
included in the target group or is inside a server that is included in the target group. max_parallelism is int.
Return Code Values
0 (success) or 1 (failure)
Remarks
When sp_add_jobstep succeeds, the job's current version number is incremented. The next time the job is
executed, the new version will be used. If the job is currently executing, that execution will not contain the new
step.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
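Examples
The following example adds a T-SQL step to an existing job. It assumes that a job named MyJob, a database scoped credential named mycredential, and a target group named MyTargetGroup were created earlier; these are placeholder names, so substitute the names of your own objects.

--Connect to the jobs database specified when creating the job agent
-- MyJob, mycredential, and MyTargetGroup are placeholder names
EXEC jobs.sp_add_jobstep
@job_name = N'MyJob',
@command = N'EXEC sp_updatestats;',
@credential_name = N'mycredential',
@target_group_name = N'MyTargetGroup';
GO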
sp_update_jobstep
Updates a job step.
Syntax

[jobs].sp_update_jobstep [ @job_name = ] 'job_name'


[ , [ @step_id = ] step_id ]
[ , [ @step_name = ] 'step_name' ]
[ , [ @new_id = ] new_id ]
[ , [ @new_name = ] 'new_name' ]
[ , [ @command_type = ] 'command_type' ]
[ , [ @command_source = ] 'command_source' ]
, [ @command = ] 'command'
, [ @credential_name = ] 'credential_name'
, [ @target_group_name = ] 'target_group_name'
[ , [ @initial_retry_interval_seconds = ] initial_retry_interval_seconds ]
[ , [ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds ]
[ , [ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier ]
[ , [ @retry_attempts = ] retry_attempts ]
[ , [ @step_timeout_seconds = ] step_timeout_seconds ]
[ , [ @output_type = ] 'output_type' ]
[ , [ @output_credential_name = ] 'output_credential_name' ]
[ , [ @output_server_name = ] 'output_server_name' ]
[ , [ @output_database_name = ] 'output_database_name' ]
[ , [ @output_schema_name = ] 'output_schema_name' ]
[ , [ @output_table_name = ] 'output_table_name' ]
[ , [ @job_version = ] job_version OUTPUT ]
[ , [ @max_parallelism = ] max_parallelism ]

Arguments
[ @job_name = ] 'job_name'
The name of the job to which the step belongs. job_name is nvarchar(128).
[ @step_id = ] step_id
The identification number for the job step to be modified. Either step_id or step_name must be specified. step_id
is an int.
[ @step_name = ] 'step_name'
The name of the step to be modified. Either step_id or step_name must be specified. step_name is nvarchar(128).
[ @new_id = ] new_id
The new sequence identification number for the job step. Step identification numbers start at 1 and increment
without gaps. If a step is reordered, then other steps will be automatically renumbered.
[ @new_name = ] 'new_name'
The new name of the step. new_name is nvarchar(128).
[ @command_type = ] 'command_type'
The type of command that is executed by this jobstep. command_type is nvarchar(50), with a default value of
TSql, meaning that the value of the @command parameter is a T-SQL script.
If specified, the value must be TSql.
[ @command_source = ] 'command_source'
The type of location where the command is stored. command_source is nvarchar(50), with a default value of
Inline, meaning that the value of the @command parameter is the literal text of the command.
If specified, the value must be Inline.
[ @command = ] 'command'
The command must be a valid T-SQL script, which is then executed by this job step. command is nvarchar(max),
with a default of NULL.
[ @credential_name = ] 'credential_name'
The name of the database scoped credential stored in this job control database that is used to connect to each of
the target databases within the target group when this step is executed. credential_name is nvarchar(128).
[ @target_group_name = ] 'target_group_name'
The name of the target group that contains the target databases that the job step will be executed on.
target_group_name is nvarchar(128).
[ @initial_retry_interval_seconds = ] initial_retry_interval_seconds
The delay before the first retry attempt, if the job step fails on the initial execution attempt.
initial_retry_interval_seconds is int, with default value of 1.
[ @maximum_retry_interval_seconds = ] maximum_retry_interval_seconds
The maximum delay between retry attempts. If the delay between retries would grow larger than this value, it is
capped to this value instead. maximum_retry_interval_seconds is int, with default value of 120.
[ @retry_interval_backoff_multiplier = ] retry_interval_backoff_multiplier
The multiplier to apply to the retry delay if multiple job step execution attempts fail. For example, if the first retry
had a delay of 5 seconds and the backoff multiplier is 2.0, then the second retry will have a delay of 10 seconds
and the third retry will have a delay of 20 seconds. retry_interval_backoff_multiplier is real, with default value of
2.0.
[ @retry_attempts = ] retry_attempts
The number of times to retry execution if the initial attempt fails. For example, if the retry_attempts value is 10,
then there will be 1 initial attempt and 10 retry attempts, giving a total of 11 attempts. If the final retry attempt
fails, then the job execution will terminate with a lifecycle of Failed. retry_attempts is int, with default value of 10.
[ @step_timeout_seconds = ] step_timeout_seconds
The maximum amount of time allowed for the step to execute. If this time is exceeded, then the job execution
will terminate with a lifecycle of TimedOut. step_timeout_seconds is int, with default value of 43,200 seconds (12
hours).
[ @output_type = ] 'output_type'
If not null, the type of destination that the command's first result set is written to. To reset the value of
output_type back to NULL, set this parameter's value to '' (empty string). output_type is nvarchar(50), with a
default of NULL.
If specified, the value must be SqlDatabase.
[ @output_credential_name = ] 'output_credential_name'
If not null, the name of the database scoped credential that is used to connect to the output destination
database. Must be specified if output_type equals SqlDatabase. To reset the value of output_credential_name
back to NULL, set this parameter's value to '' (empty string). output_credential_name is nvarchar(128), with a
default value of NULL.
[ @output_server_name = ] 'output_server_name'
If not null, the fully qualified DNS name of the server that contains the output destination database. Must be
specified if output_type equals SqlDatabase. To reset the value of output_server_name back to NULL, set this
parameter's value to '' (empty string). output_server_name is nvarchar(256), with a default of NULL.
[ @output_database_name = ] 'output_database_name'
If not null, the name of the database that contains the output destination table. Must be specified if output_type
equals SqlDatabase. To reset the value of output_database_name back to NULL, set this parameter's value to ''
(empty string). output_database_name is nvarchar(128), with a default of NULL.
[ @output_schema_name = ] 'output_schema_name'
If not null, the name of the SQL schema that contains the output destination table. If output_type equals
SqlDatabase, the default value is dbo. To reset the value of output_schema_name back to NULL, set this
parameter's value to '' (empty string). output_schema_name is nvarchar(128).
[ @output_table_name = ] 'output_table_name'
If not null, the name of the table that the command's first result set will be written to. If the table doesn't already
exist, it will be created based on the schema of the returning result-set. Must be specified if output_type equals
SqlDatabase. To reset the value of output_table_name back to NULL, set this parameter's value to '' (empty
string). output_table_name is nvarchar(128), with a default value of NULL.
[ @job_version = ] job_version OUTPUT
Output parameter that will be assigned the new job version number. job_version is int.
[ @max_parallelism = ] max_parallelism
The maximum level of parallelism per elastic pool. If set, then the job step will be restricted to only run on a
maximum of that many databases per elastic pool. This applies to each elastic pool that is either directly
included in the target group or is inside a server that is included in the target group. To reset the value of
max_parallelism back to null, set this parameter's value to -1. max_parallelism is int.
Return Code Values
0 (success) or 1 (failure)
Remarks
Any in-progress executions of the job will not be affected. When sp_update_jobstep succeeds, the job's version
number is incremented. The next time the job is executed, the new version will be used.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
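Examples
The following example, assuming the same placeholder objects used above (a job named MyJob with a step named JobStep, a credential named mycredential, and a target group named MyTargetGroup), raises the step timeout to two hours:

--Connect to the jobs database specified when creating the job agent
-- MyJob, JobStep, mycredential, and MyTargetGroup are placeholder names
EXEC jobs.sp_update_jobstep
@job_name = N'MyJob',
@step_name = N'JobStep',
@command = N'EXEC sp_updatestats;',
@credential_name = N'mycredential',
@target_group_name = N'MyTargetGroup',
@step_timeout_seconds = 7200;
GO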
sp_delete_jobstep
Removes a job step from a job.
Syntax

[jobs].sp_delete_jobstep [ @job_name = ] 'job_name'


[ , [ @step_id = ] step_id ]
[ , [ @step_name = ] 'step_name' ]
[ , [ @job_version = ] job_version OUTPUT ]

Arguments
[ @job_name = ] 'job_name'
The name of the job from which the step will be removed. job_name is nvarchar(128), with no default.
[ @step_id = ] step_id
The identification number for the job step to be deleted. Either step_id or step_name must be specified. step_id is
an int.
[ @step_name = ] 'step_name'
The name of the step to be deleted. Either step_id or step_name must be specified. step_name is nvarchar(128).
[ @job_version = ] job_version OUTPUT
Output parameter that will be assigned the new job version number. job_version is int.
Return Code Values
0 (success) or 1 (failure)
Remarks
Any in-progress executions of the job will not be affected. When sp_delete_jobstep succeeds, the job's version
number is incremented. The next time the job is executed, the new version will be used.
The other job steps will be automatically renumbered to fill the gap left by the deleted job step.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
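Examples
The following example, assuming a placeholder job named MyJob with a step named JobStep, removes the step and captures the new job version number:

--Connect to the jobs database specified when creating the job agent
-- MyJob and JobStep are placeholder names
DECLARE @version int;
EXEC jobs.sp_delete_jobstep
@job_name = N'MyJob',
@step_name = N'JobStep',
@job_version = @version OUTPUT;
SELECT @version AS job_version;
GO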
sp_start_job
Starts executing a job.
Syntax

[jobs].sp_start_job [ @job_name = ] 'job_name'


[ , [ @job_execution_id = ] job_execution_id OUTPUT ]

Arguments
[ @job_name = ] 'job_name'
The name of the job to start. job_name is nvarchar(128), with no default.
[ @job_execution_id = ] job_execution_id OUTPUT
Output parameter that will be assigned the job execution's ID. job_execution_id is uniqueidentifier.
Return Code Values
0 (success) or 1 (failure)
Remarks
None.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
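Examples
The following example, assuming a placeholder job named MyJob, starts the job and returns the ID of the new job execution:

--Connect to the jobs database specified when creating the job agent
-- MyJob is a placeholder name
DECLARE @je uniqueidentifier;
EXEC jobs.sp_start_job
@job_name = N'MyJob',
@job_execution_id = @je OUTPUT;
SELECT @je AS job_execution_id;
GO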
sp_stop_job
Stops a job execution.
Syntax

[jobs].sp_stop_job [ @job_execution_id = ] 'job_execution_id'

Arguments
[ @job_execution_id = ] job_execution_id
The identification number of the job execution to stop. job_execution_id is uniqueidentifier, with default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
None.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
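Examples
The following example, assuming a placeholder job named MyJob with a single active execution, looks up the active parent execution in the job_executions view and stops it:

--Connect to the jobs database specified when creating the job agent
-- MyJob is a placeholder name
DECLARE @je uniqueidentifier;
SELECT @je = job_execution_id FROM jobs.job_executions
WHERE job_name = N'MyJob' AND is_active = 1 AND step_id IS NULL;
EXEC jobs.sp_stop_job @job_execution_id = @je;
GO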
sp_add_target_group
Adds a target group.
Syntax

[jobs].sp_add_target_group [ @target_group_name = ] 'target_group_name'


[ , [ @target_group_id = ] target_group_id OUTPUT ]

Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group to create. target_group_name is nvarchar(128), with no default.
[ @target_group_id = ] target_group_id OUTPUT
The target group identification number assigned to the target group if created successfully. target_group_id is an output variable of type uniqueidentifier, with a default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
Target groups provide an easy way to target a job at a collection of databases.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
sp_delete_target_group
Deletes a target group.
Syntax

[jobs].sp_delete_target_group [ @target_group_name = ] 'target_group_name'

Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group to delete. target_group_name is nvarchar(128), with no default.
Return Code Values
0 (success) or 1 (failure)
Remarks
None.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
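Examples
The following example deletes the target group created in the sp_add_target_group_member example below, assuming it is no longer referenced by any of your jobs:

--Connect to the jobs database specified when creating the job agent
EXEC jobs.sp_delete_target_group
@target_group_name = N'Servers Maintaining Customer Information';
GO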
sp_add_target_group_member
Adds a database or group of databases to a target group.
Syntax

[jobs].sp_add_target_group_member [ @target_group_name = ] 'target_group_name'


[ , [ @membership_type = ] 'membership_type' ]
[ , [ @target_type = ] 'target_type' ]
[ , [ @refresh_credential_name = ] 'refresh_credential_name' ]
[ , [ @server_name = ] 'server_name' ]
[ , [ @database_name = ] 'database_name' ]
[ , [ @elastic_pool_name = ] 'elastic_pool_name' ]
[ , [ @shard_map_name = ] 'shard_map_name' ]
[ , [ @target_id = ] 'target_id' OUTPUT ]

Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group to which the member will be added. target_group_name is nvarchar(128), with no
default.
[ @membership_type = ] 'membership_type'
Specifies if the target group member will be included or excluded. membership_type is nvarchar(128), with a
default of 'Include'. Valid values for membership_type are 'Include' or 'Exclude'.
[ @target_type = ] 'target_type'
The type of target database or collection of databases including all databases in a server, all databases in an
Elastic pool, all databases in a shard map, or an individual database. target_type is nvarchar(128), with no
default. Valid values for target_type are 'SqlServer', 'SqlElasticPool', 'SqlDatabase', or 'SqlShardMap'.
[ @refresh_credential_name = ] 'refresh_credential_name'
The name of the database scoped credential. refresh_credential_name is nvarchar(128), with no default.
[ @server_name = ] 'server_name'
The name of the server that should be added to the specified target group. server_name should be specified
when target_type is 'SqlServer'. server_name is nvarchar(128), with no default.
[ @database_name = ] 'database_name'
The name of the database that should be added to the specified target group. database_name should be
specified when target_type is 'SqlDatabase'. database_name is nvarchar(128), with no default.
[ @elastic_pool_name = ] 'elastic_pool_name'
The name of the Elastic pool that should be added to the specified target group. elastic_pool_name should be
specified when target_type is 'SqlElasticPool'. elastic_pool_name is nvarchar(128), with no default.
[ @shard_map_name = ] 'shard_map_name'
The name of the shard map that should be added to the specified target group. shard_map_name should
be specified when target_type is 'SqlShardMap'. shard_map_name is nvarchar(128), with no default.
[ @target_id = ] target_id OUTPUT
The target identification number assigned to the target group member if successfully added to the target group.
target_id is an output variable of type uniqueidentifier, with a default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
When a server or elastic pool is included in the target group, the job executes on all single databases in that
server or elastic pool at the time of execution.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
Examples
The following example adds all the databases in the London and NewYork servers to the group Servers
Maintaining Customer Information. You must connect to the jobs database specified when creating the job
agent, in this case ElasticJobs.
--Connect to the jobs database specified when creating the job agent
USE ElasticJobs;
GO

-- Add a target group containing server(s)


EXEC jobs.sp_add_target_group @target_group_name = N'Servers Maintaining Customer Information';
GO

-- Add a server target member


EXEC jobs.sp_add_target_group_member
@target_group_name = N'Servers Maintaining Customer Information',
@target_type = N'SqlServer',
@refresh_credential_name=N'refresh_credential', --credential required to refresh the databases in server
@server_name=N'London.database.windows.net';
GO

-- Add a server target member


EXEC jobs.sp_add_target_group_member
@target_group_name = N'Servers Maintaining Customer Information',
@target_type = N'SqlServer',
@refresh_credential_name=N'refresh_credential', --credential required to refresh the databases in server
@server_name=N'NewYork.database.windows.net';
GO

--View the recently added members to the target group
SELECT * FROM [jobs].target_group_members WHERE target_group_name = N'Servers Maintaining Customer Information';
GO

sp_delete_target_group_member
Removes a target group member from a target group.
Syntax

[jobs].sp_delete_target_group_member [ @target_group_name = ] 'target_group_name'


[ , [ @target_id = ] 'target_id']

Arguments
[ @target_group_name = ] 'target_group_name'
The name of the target group from which to remove the target group member. target_group_name is
nvarchar(128), with no default.
[ @target_id = ] target_id
The target identification number assigned to the target group member to be removed. target_id is a
uniqueidentifier, with a default of NULL.
Return Code Values
0 (success) or 1 (failure)
Remarks
Target groups provide an easy way to target a job at a collection of databases.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
Examples
The following example removes the London server from the group Servers Maintaining Customer Information.
You must connect to the jobs database specified when creating the job agent, in this case ElasticJobs.

--Connect to the jobs database specified when creating the job agent
USE ElasticJobs ;
GO

-- Retrieve the target_id for a target_group_members


DECLARE @tid uniqueidentifier;
SELECT @tid = target_id FROM [jobs].target_group_members WHERE target_group_name = 'Servers Maintaining Customer Information' AND server_name = 'London.database.windows.net';

-- Remove a target group member of type server


EXEC jobs.sp_delete_target_group_member
@target_group_name = N'Servers Maintaining Customer Information',
@target_id = @tid;
GO

sp_purge_jobhistory
Removes the history records for a job.
Syntax

[jobs].sp_purge_jobhistory [ @job_name = ] 'job_name'


[ , [ @job_id = ] job_id ]
[ , [ @oldest_date = ] oldest_date ]

Arguments
[ @job_name = ] 'job_name'
The name of the job for which to delete the history records. job_name is nvarchar(128), with a default of NULL.
Either job_id or job_name must be specified, but both cannot be specified.
[ @job_id = ] job_id
The job identification number of the job for the records to be deleted. job_id is uniqueidentifier, with a default of
NULL. Either job_id or job_name must be specified, but both cannot be specified.
[ @oldest_date = ] oldest_date
The oldest record to retain in the history. oldest_date is DATETIME2, with a default of NULL. When oldest_date is
specified, sp_purge_jobhistory only removes records that are older than the value specified.
Return Code Values
0 (success) or 1 (failure)
Remarks
Target groups provide an easy way to target a job at a collection of databases.
Permissions
By default, members of the sysadmin fixed server role can execute this stored procedure. To restrict a user to
only being able to monitor jobs, you can add the user to the following database role in the job agent
database specified when creating the job agent:
jobs_reader
For details about the permissions of these roles, see the Permission section in this document. Only members of
sysadmin can use this stored procedure to edit the attributes of jobs that are owned by other users.
Examples
The following example removes all job history records older than 30 days for a job named MyJob (a placeholder
name). You must connect to the jobs database specified when creating the job agent, in this case ElasticJobs.

--Connect to the jobs database specified when creating the job agent
USE ElasticJobs;
GO

-- Purge job history records older than 30 days; MyJob is a placeholder name
DECLARE @oldest_date datetime2 = DATEADD(day, -30, SYSUTCDATETIME());
EXEC jobs.sp_purge_jobhistory @job_name = N'MyJob', @oldest_date = @oldest_date;
GO

Job views
The following views are available in the jobs database.

VIEW | DESCRIPTION
job_executions | Shows job execution history.
jobs | Shows all jobs.
job_versions | Shows all job versions.
jobsteps | Shows all steps in the current version of each job.
jobstep_versions | Shows all steps in all versions of each job.
target_groups | Shows all target groups.
target_group_members | Shows all members of all target groups.

job_executions view
[jobs].[job_executions]
Shows job execution history.

COLUMN NAME | DATA TYPE | DESCRIPTION
job_execution_id | uniqueidentifier | Unique ID of an instance of a job execution.
job_name | nvarchar(128) | Name of the job.
job_id | uniqueidentifier | Unique ID of the job.
job_version | int | Version of the job (automatically updated each time the job is modified).
step_id | int | Unique (for this job) identifier for the step. NULL indicates this is the parent job execution.
is_active | bit | Indicates whether information is active or inactive. 1 indicates active jobs, and 0 indicates inactive.
lifecycle | nvarchar(50) | Value indicating the status of the job: 'Created', 'In Progress', 'Failed', 'Succeeded', 'Skipped', 'SucceededWithSkipped'.
create_time | datetime2(7) | Date and time the job was created.
start_time | datetime2(7) | Date and time the job started execution. NULL if the job has not yet been executed.
end_time | datetime2(7) | Date and time the job finished execution. NULL if the job has not yet been executed or has not yet completed execution.
current_attempts | int | Number of times the step was retried. Parent job will be 0, child job executions will be 1 or greater, based on the execution policy.
current_attempt_start_time | datetime2(7) | Date and time the step started execution. NULL indicates this is the parent job execution.
last_message | nvarchar(max) | Job or step history message.
target_type | nvarchar(128) | Type of target database or collection of databases, including all databases in a server, all databases in an elastic pool, or a database. Valid values for target_type are 'SqlServer', 'SqlElasticPool', or 'SqlDatabase'. NULL indicates this is the parent job execution.
target_id | uniqueidentifier | Unique ID of the target group member. NULL indicates this is the parent job execution.
target_group_name | nvarchar(128) | Name of the target group. NULL indicates this is the parent job execution.
target_server_name | nvarchar(256) | Name of the server contained in the target group. Specified only if target_type is 'SqlServer'. NULL indicates this is the parent job execution.
target_database_name | nvarchar(128) | Name of the database contained in the target group. Specified only when target_type is 'SqlDatabase'. NULL indicates this is the parent job execution.
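For example, a query similar to the following, run in the job database, lists the most recent parent job executions and their status:

--Connect to the jobs database specified when creating the job agent
SELECT TOP (20) job_name, lifecycle, start_time, end_time, last_message
FROM jobs.job_executions
WHERE step_id IS NULL
ORDER BY start_time DESC;
GO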
jobs view
[jobs].[jobs]
Shows all jobs.

COLUMN NAME | DATA TYPE | DESCRIPTION
job_name | nvarchar(128) | Name of the job.
job_id | uniqueidentifier | Unique ID of the job.
job_version | int | Version of the job (automatically updated each time the job is modified).
description | nvarchar(512) | Description for the job.
enabled | bit | Indicates whether the job is enabled or disabled. 1 indicates enabled jobs, and 0 indicates disabled jobs.
schedule_interval_type | nvarchar(50) | Value indicating when the job is to be executed: 'Once', 'Minutes', 'Hours', 'Days', 'Weeks', 'Months'.
schedule_interval_count | int | Number of schedule_interval_type periods to occur between each execution of the job.
schedule_start_time | datetime2(7) | Date and time the job is scheduled to start execution.
schedule_end_time | datetime2(7) | Date and time the job is scheduled to end execution.

job_versions view
[jobs].[job_versions]
Shows all job versions.

COLUMN NAME | DATA TYPE | DESCRIPTION
job_name | nvarchar(128) | Name of the job.
job_id | uniqueidentifier | Unique ID of the job.
job_version | int | Version of the job (automatically updated each time the job is modified).

jobsteps view
[jobs].[jobsteps]
Shows all steps in the current version of each job.

COLUMN NAME | DATA TYPE | DESCRIPTION
job_name | nvarchar(128) | Name of the job.
job_id | uniqueidentifier | Unique ID of the job.
job_version | int | Version of the job (automatically updated each time the job is modified).
step_id | int | Unique (for this job) identifier for the step.
step_name | nvarchar(128) | Unique (for this job) name for the step.
command_type | nvarchar(50) | Type of command to execute in the job step. For v1, the value must be 'TSql', which is also the default.
command_source | nvarchar(50) | Location of the command. For v1, 'Inline' is the default and only accepted value.
command | nvarchar(max) | The commands to be executed by Elastic jobs through command_type.
credential_name | nvarchar(128) | Name of the database scoped credential used to execute the job.
target_group_name | nvarchar(128) | Name of the target group.
target_group_id | uniqueidentifier | Unique ID of the target group.
initial_retry_interval_seconds | int | The delay before the first retry attempt. Default value is 1.
maximum_retry_interval_seconds | int | The maximum delay between retry attempts. If the delay between retries would grow larger than this value, it is capped to this value instead. Default value is 120.
retry_interval_backoff_multiplier | real | The multiplier to apply to the retry delay if multiple job step execution attempts fail. Default value is 2.0.
retry_attempts | int | The number of times to retry execution if the initial attempt fails. Default value is 10.
step_timeout_seconds | int | The maximum amount of time, in seconds, allowed for the step to execute. Default value is 43,200 seconds (12 hours).
output_type | nvarchar(11) | Type of destination that the command's first result set is written to. The default is NULL (no output destination); if specified, the value must be 'SqlDatabase'.
output_credential_name | nvarchar(128) | Name of the credentials to be used to connect to the destination server to store the results set.
output_subscription_id | uniqueidentifier | Unique ID of the subscription of the destination server\database for the results set from the query execution.
output_resource_group_name | nvarchar(128) | Resource group name where the destination server resides.
output_server_name | nvarchar(256) | Name of the destination server for the results set.
output_database_name | nvarchar(128) | Name of the destination database for the results set.
output_schema_name | nvarchar(max) | Name of the destination schema. Defaults to dbo, if not specified.
output_table_name | nvarchar(max) | Name of the table to store the results set from the query results. The table will be created automatically, based on the schema of the results set, if it doesn't already exist. The schema must match the schema of the results set.
max_parallelism | int | The maximum number of databases per elastic pool that the job step will be run on at a time. The default is NULL, meaning no limit.
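For example, a query similar to the following lists the steps defined in the current version of each job:

--Connect to the jobs database specified when creating the job agent
SELECT job_name, step_id, step_name, command_type, target_group_name, credential_name
FROM jobs.jobsteps
ORDER BY job_name, step_id;
GO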

jobstep_versions view
[jobs].[jobstep_versions]
Shows all steps in all versions of each job. The schema is identical to jobsteps.
target_groups view
[jobs].[target_groups]
Lists all target groups.

COLUMN NAME | DATA TYPE | DESCRIPTION
target_group_name | nvarchar(128) | The name of the target group, a collection of databases.
target_group_id | uniqueidentifier | Unique ID of the target group.

target_group_members view
[jobs].[target_group_members]
Shows all members of all target groups.
COLUMN NAME | DATA TYPE | DESCRIPTION
target_group_name | nvarchar(128) | The name of the target group, a collection of databases.
target_group_id | uniqueidentifier | Unique ID of the target group.
membership_type | int | Specifies if the target group member is included or excluded in the target group. Valid values for membership_type are 'Include' or 'Exclude'.
target_type | nvarchar(128) | Type of target database or collection of databases, including all databases in a server, all databases in an elastic pool, or a database. Valid values for target_type are 'SqlServer', 'SqlElasticPool', 'SqlDatabase', or 'SqlShardMap'.
target_id | uniqueidentifier | Unique ID of the target group member.
refresh_credential_name | nvarchar(128) | Name of the database scoped credential used to connect to the target group member.
subscription_id | uniqueidentifier | Unique ID of the subscription.
resource_group_name | nvarchar(128) | Name of the resource group in which the target group member resides.
server_name | nvarchar(128) | Name of the server contained in the target group. Specified only if target_type is 'SqlServer'.
database_name | nvarchar(128) | Name of the database contained in the target group. Specified only when target_type is 'SqlDatabase'.
elastic_pool_name | nvarchar(128) | Name of the elastic pool contained in the target group. Specified only when target_type is 'SqlElasticPool'.
shard_map_name | nvarchar(128) | Name of the shard map contained in the target group. Specified only when target_type is 'SqlShardMap'.

Resources
Transact-SQL Syntax Conventions

Next steps
Create and manage Elastic Jobs using PowerShell
Authorization and Permissions
Migrate to the new Elastic Database jobs (preview)
7/12/2022 • 12 minutes to read

APPLIES TO: Azure SQL Database


An upgraded version of Elastic Database Jobs is available.
If you have an existing customer-hosted version of Elastic Database Jobs, migration cmdlets and scripts are
provided for easily migrating to the latest version.

Prerequisites
The upgraded version of Elastic Database jobs has a new set of PowerShell cmdlets for use during migration.
These new cmdlets transfer all of your existing job credentials, targets (including databases, servers, custom
collections), job triggers, job schedules, job contents, and jobs over to a new Elastic Job agent.
Install the latest Elastic Jobs cmdlets
If you don't already have an Azure subscription, create a free account before you begin.
Install the Az.Sql 1.1.1-preview module to get the latest Elastic Job cmdlets. Run the following commands in
PowerShell with administrative access.

# Installs the latest PackageManagement powershell package which PowerShellGet v1.6.5 is dependent on
Find-Package PackageManagement -RequiredVersion 1.1.7.2 | Install-Package -Force

# Installs the latest PowerShellGet module which adds the -AllowPrerelease flag to Install-Module
Find-Package PowerShellGet -RequiredVersion 1.6.5 | Install-Package -Force

# Restart your powershell session with administrative access

# Places Az.Sql preview cmdlets side by side with existing Az.Sql version
Install-Module -Name Az.Sql -RequiredVersion 1.1.1-preview -AllowPrerelease

# Import the Az.Sql module


Import-Module Az.Sql -RequiredVersion 1.1.1

# Confirm if module successfully imported - if the imported version is 1.1.1, then continue
Get-Module Az.Sql

Create a new Elastic Job agent


After installing the new cmdlets, create a new Elastic Job agent.

# Register your subscription for the Elastic Jobs public preview feature
Register-AzProviderFeature -FeatureName sqldb-JobAccounts -ProviderNamespace Microsoft.Sql

# Get an existing database to use as the job database - or create a new one if necessary
$db = Get-AzSqlDatabase -ResourceGroupName <resourceGroupName> -ServerName <serverName> -DatabaseName <databaseName>
# Create a new elastic job agent
$agent = $db | New-AzSqlElasticJobAgent -Name <agentName>

Install the old Elastic Database Jobs cmdlets


Migration needs to use some of the old elastic job cmdlets, so run the following commands if you don't already
have them installed.
# Install the old elastic job cmdlets if necessary and initialize the old jobs cmdlets
.\nuget install Microsoft.Azure.SqlDatabase.Jobs -prerelease

# Install the old jobs cmdlets


cd Microsoft.Azure.SqlDatabase.Jobs.x.x.xxxx.x*\tools
Unblock-File .\InstallElasticDatabaseJobsCmdlets.ps1
.\InstallElasticDatabaseJobsCmdlets.ps1

# Choose the subscription where your existing jobs are


Select-AzSubscription -SubscriptionId <subscriptionId>
Use-AzureSqlJobConnection -CurrentAzureSubscription -Credential (Get-Credential)

Migration
Now that both the old and new Elastic Jobs cmdlets are initialized, migrate your job credentials, targets, and jobs
to the new job database.
Setup

$ErrorActionPreference = "Stop";

# Helper function to show starting write output


function Log-StartOutput ($output) {
Write-Output ("`r--------------------- " + $output + " ---------------------")
}

# Helper function to show child write output


function Log-ChildOutput ($output) {
Write-Output (" - " + $output)
}

Migrate credentials

function Migrate-Credentials ($agent) {


Log-StartOutput "Migrating credentials"

$oldCreds = Get-AzureSqlJobCredential
$oldCreds | % {
$oldCredName = $_.CredentialName
$oldUserName = $_.UserName
Write-Output ("Credential " + $oldCredName)
$oldCredential = Get-Credential -UserName $oldUserName `
-Message ("Please enter in the password that was used for your credential " +
$oldCredName)
try
{
$cred = New-AzSqlElasticJobCredential -ParentObject $agent -Name $oldCredName -Credential $oldCredential
}
catch [System.Management.Automation.PSArgumentException]
{
$cred = Get-AzSqlElasticJobCredential -ParentObject $agent -Name $oldCredName
$cred = Set-AzSqlElasticJobCredential -InputObject $cred -Credential $oldCredential
}

Log-ChildOutput ("Added user " + $oldUserName)


}
}

To migrate your credentials, execute the following command by passing in the $agent PowerShell object from
earlier.
Migrate-Credentials $agent

Sample output

# You should see similar output after executing the above


# --------------------- Migrating credentials ---------------------
# Credential cred1
# - Added user user1
# Credential cred2
# - Added user user2
# Credential cred3
# - Added user user3

Migrate targets

function Migrate-TargetGroups ($agent) {


Log-StartOutput "Migrating target groups"

# Setup hash of target groups


$targetGroups = [ordered]@{}

# Fetch root job targets from old service


$rootTargets = Get-AzureSqlJobTarget

# Return if no root targets are found


if ($rootTargets.Count -eq 0)
{
Write-Output "No targets found - no need for migration"
return
}

# Create list of target groups to create


# We format the target group name as such:
# - If root target is server type, then target group name is "(serverName)"
# - If root target is database type, then target group name is "(serverName,databaseName)"
# - If root target is shard map type, then target group name is "(serverName,databaseName,shardMapName)"
# - If root target is custom collection, then target group name is "customCollectionName"
$rootTargets | % {
$tgName = Format-OldTargetName -target $_
$childTargets = Get-ChildTargets -target $_
$targetGroups.Add($tgName, $childTargets)
}

# Flatten list
for ($i=$targetGroups.Count - 1; $i -ge 0; $i--)
{
# Fetch target group's initial list of targets unexpanded
$targets = $targetGroups[$i]

# Expand custom collection targets


$j = 0;
while ($j -lt $targets.Count)
{
$target = $targets[$j]
if ($target.TargetType -eq "CustomCollection")
{
$targets = [System.Collections.ArrayList] $targets
$targets.Remove($target) # Remove this target from the list

$expandedTargets = $targetGroups[$target.TargetDescription.CustomCollectionName]

foreach ($expandedTarget in $expandedTargets)


{
$targets.Add($expandedTarget) | Out-Null
}

# Set updated list of targets for tg
$targetGroups[$i] = $targets
# Note we don't increment here in case we need to expand further
}
else
{
# Skip if no custom collection target needs to be expanded
$j++
}
}
}

# Add targets to target group


foreach ($targetGroup in $targetGroups.Keys)
{
$tg = Setup-TargetGroup -tgName $targetGroup -agent $agent
$targets = $targetGroups[$targetGroup]
Migrate-Targets -targets $targets -tg $tg
$targetsAdded = (Get-AzSqlElasticJobTargetGroup -ParentObject $agent -Name $tg.TargetGroupName).Targets
foreach ($targetAdded in $targetsAdded)
{
Log-ChildOutput ("Added target " + (Format-NewTargetName $targetAdded))
}
}
}

## Target group helpers


# Migrate shard map target from old jobs to new job's target group
function Migrate-Targets ($targets, $tg) {
Write-Output ("Target group " + $tg.TargetGroupName)
foreach ($target in $targets) {
if ($target.TargetType -eq "Server") {
Add-ServerTarget -target $target -tg $tg
}
elseif ($target.TargetType -eq "Database") {
Add-DatabaseTarget -target $target -tg $tg
}
elseif ($target.TargetType -eq "ShardMap") {
Add-ShardMapTarget -target $target -tg $tg
}
}
}

# Migrate server target from old jobs to new job's target group
function Add-ServerTarget ($target, $tg) {
$jobTarget = Get-AzureSqlJobTarget -TargetId $target.TargetId
$serverName = $jobTarget.ServerName
$credName = $jobTarget.MasterDatabaseCredentialName
$t = Add-AzSqlElasticJobTarget -ParentObject $tg -ServerName $serverName -RefreshCredentialName $credName
}

# Migrate database target from old jobs to new job's target group
function Add-DatabaseTarget ($target, $tg) {
$jobTarget = Get-AzureSqlJobTarget -TargetId $target.TargetId
$serverName = $jobTarget.ServerName
$databaseName = $jobTarget.DatabaseName
$exclude = $target.Membership

if ($exclude -eq "Exclude") {


$t = Add-AzSqlElasticJobTarget -ParentObject $tg -ServerName $serverName -DatabaseName $databaseName -Exclude
}
else {
$t = Add-AzSqlElasticJobTarget -ParentObject $tg -ServerName $serverName -DatabaseName $databaseName
}
}
# Migrate shard map target from old jobs to new job's target group
function Add-ShardMapTarget ($target, $tg) {
$jobTarget = Get-AzureSqlJobTarget -TargetId $target.TargetId
$smName = $jobTarget.ShardMapName
$serverName = $jobTarget.ShardMapManagerServerName
$databaseName = $jobTarget.ShardMapManagerDatabaseName
$credName = $jobTarget.ShardMapManagerCredentialName
$exclude = $target.Membership

if ($exclude -eq "Exclude") {


$t = Add-AzSqlElasticJobTarget -ParentObject $tg -ServerName $serverName -ShardMapName $smName -DatabaseName $databasename -RefreshCredentialName $credName -Exclude
}
else {
$t = Add-AzSqlElasticJobTarget -ParentObject $tg -ServerName $serverName -ShardMapName $smName -DatabaseName $databasename -RefreshCredentialName $credName
}
}

# Helper to format target old target names


function Format-OldTargetName ($target) {
if ($target.TargetType -eq "Server") {
$tgName = "(" + $target.ServerName + ")"
}
elseif ($target.TargetType -eq "Database") {
$tgName = "(" + $target.ServerName + "," + $target.DatabaseName + ")"
}
elseif ($target.TargetType -eq "ShardMap") {
$tgName = "(" + $target.ShardMapManagerServerName + "," +
$target.ShardMapManagerDatabaseName + "," + `
$target.ShardMapName + ")"
}
elseif ($target.TargetType -eq "CustomCollection") {
$tgName = $target.CustomCollectionName
}

return $tgName
}

# Helper to format new target names


function Format-NewTargetName ($target) {
if ($target.TargetType -eq "SqlServer") {
$tgName = "(" + $target.TargetServerName + ")"
}
elseif ($target.TargetType -eq "SqlDatabase") {
$tgName = "(" + $target.TargetServerName + "," + $target.TargetDatabaseName + ")"
}
elseif ($target.TargetType -eq "SqlShardMap") {
$tgName = "(" + $target.TargetServerName + "," +
$target.TargetDatabaseName + "," + `
$target.TargetShardMapName + ")"
}
elseif ($target.TargetType -eq "SqlElasticPool") {
$tgName = "(" + $target.TargetServerName + "," +
$target.TargetDatabaseName + "," + `
$target.TargetElasticPoolName + ")"
}

return $tgName
}

# Get child targets


function Get-ChildTargets($target) {
if ($target.TargetType -eq "CustomCollection") {
$children = Get-AzureSqlJobChildTarget -TargetId $target.TargetId
if ($children.Count -eq 1)
{
$arr = New-Object System.Collections.ArrayList($null)
$arr.Add($children)
$children = $arr
}
return $children
}
else {
return $target
}
}

# Migrates target groups


function Setup-TargetGroup ($tgName, $agent) {
try {
$tg = New-AzSqlElasticJobTargetGroup -ParentObject $agent -Name $tgName
return $tg
}
catch [System.Management.Automation.PSArgumentException] {
$tg = Get-AzSqlElasticJobTargetGroup -ParentObject $agent -Name $tgName
return $tg
}
}

To migrate your targets (servers, databases, and custom collections) to your new job database, execute the
Migrate-TargetGroups cmdlet to perform the following:
Root level targets that are servers and databases will be migrated to a new target group named "
(<serverName>, <databaseName>)" containing only the root level target.
A custom collection will migrate to a new target group containing all child targets.

Migrate-TargetGroups $agent

Sample output:

# --------------------- Migrating target groups ---------------------


# Target group cc1
# - Added target (s1)
# - Added target (s1,db1)
# Target group cc2
# - Added target (s1,db1)
# Target group cc3
# - Added target (s1)
# - Added target (s1,db1)
# Target group (s1,db1)
# - Added target (s1,db1)
# Target group (s1,db2)
# - Added target (s1,db2)
# Target group (s1)
# - Added target (s1)
# Target group (s1,db1,sm1)
# - Added target (s1,db1,sm1)

Migrate jobs

function Migrate-Jobs ($agent)


{
Log-StartOutput "Migrating jobs and job steps"

$oldJobs = Get-AzureSqlJob
$newJobs = [System.Collections.ArrayList] @()

foreach ($oldJob in $oldJobs)


{
# Ignore system jobs
if ($oldJob.ContentName -eq $null)
{
continue
}

# Schedule
$oldJobTriggers = Get-AzureSqlJobTrigger -JobName $oldJob.JobName

if ($oldJobTriggers.Count -ge 1)
{
foreach ($trigger in $oldJobTriggers)
{

$schedule = Get-AzureSqlJobSchedule -ScheduleName $trigger.ScheduleName


$newJob = [PSCustomObject] @{
JobName = ($trigger.JobName + " (" + $trigger.ScheduleName + ")");
Description = $oldJob.ContentName
Schedule = $schedule
TargetGroupName = (Format-OldTargetName(Get-AzureSqlJobTarget -TargetId $oldJob.TargetId))
CredentialName = $oldJob.CredentialName
Output = $oldJob.ResultSetDestination
}
$newJobs.Add($newJob) | Out-Null
}
}
else
{
$newJob = [PSCustomObject] @{
JobName = $oldJob.JobName
Description = $oldJob.ContentName
Schedule = $null
TargetGroupName = (Format-OldTargetName(Get-AzureSqlJobTarget -TargetId $oldJob.TargetId))
CredentialName = $oldJob.CredentialName
Output = $oldJob.ResultSetDestination
}
$newJobs.Add($newJob) | Out-Null
}
}

# At this point, we should have an organized list of jobs to create


foreach ($newJob in $newJobs)
{
Write-Output ("Job " + $newJob.JobName)
$job = Setup-Job $newJob $agent
If ($job.Interval -ne $null)
{
Log-ChildOutput ("Schedule with start time " + $job.StartTime + " and end time at " +
$job.EndTime)
Log-ChildOutput ("Repeats every " + $job.Interval)
}
else {
Log-ChildOutput ("Repeats once")
}

Setup-JobStep $newJob $job


}
}

# Migrates jobs
function Setup-Job ($job, $agent) {
$jobName = $newJob.JobName
$jobDescription = $newJob.Description

# Create or update a job has a recurring schedule


if ($newJob.Schedule -ne $null) {
$schedule = $newJob.Schedule
$startTime = $schedule.StartTime.UtcTime
$endTime = $schedule.EndTime.UtcTime
$intervalType = $schedule.Interval.IntervalType.ToString()
$intervalType = $intervalType.Substring(0, $intervalType.Length - 1) # Remove the last letter (s)
$intervalCount = $schedule.Interval.Count

try {
$job = New-AzSqlElasticJob -ParentObject $agent -Name $jobName `
-Description $jobDescription -IntervalType $intervalType -IntervalCount $intervalCount `
-StartTime $startTime -EndTime $endTime
return $job
}
catch [System.Management.Automation.PSArgumentException] {
$job = Get-AzSqlElasticJob -ParentObject $agent -Name $jobName
$job = $job | Set-AzSqlElasticJob -Description $jobDescription -IntervalType $intervalType -IntervalCount $intervalCount `
-StartTime $startTime -EndTime $endTime
return $job
}
}
# Create or update a job that runs once
else {
try {
$job = New-AzSqlElasticJob -ParentObject $agent -Name $jobName `
-Description $jobDescription -RunOnce
return $job
}
catch [System.Management.Automation.PSArgumentException] {
$job = Get-AzSqlElasticJob -ParentObject $agent -Name $jobName
$job = $job | Set-AzSqlElasticJob -Description $jobDescription -RunOnce
return $job
}
}
}
# Migrates job steps
function Setup-JobStep ($newJob, $job) {
$defaultJobStepName = 'JobStep'
$contentName = $newJob.Description
$commandText = (Get-AzureSqlJobContentDefinition -ContentName $contentName).CommandText
$targetGroupName = $newJob.TargetGroupName
$credentialName = $newJob.CredentialName

$output = $newJob.Output

if ($output -ne $null) {


$outputServerName = $output.TargetDescription.ServerName
$outputDatabaseName = $output.TargetDescription.DatabaseName
$outputCredentialName = $output.CredentialName
$outputSchemaName = $output.SchemaName
$outputTableName = $output.TableName
$outputDatabase = Get-AzSqlDatabase -ResourceGroupName $job.ResourceGroupName -ServerName $outputServerName -Databasename $outputDatabaseName

try {
$jobStep = $job | Add-AzSqlElasticJobStep -Name $defaultJobStepName `
-TargetGroupName $targetGroupName -CredentialName $credentialName -CommandText $commandText `
-OutputDatabaseObject $outputDatabase `
-OutputSchemaName $outputSchemaName -OutputTableName $outputTableName `
-OutputCredentialName $outputCredentialName
}
catch [System.Management.Automation.PSArgumentException] {
$jobStep = $job | Get-AzSqlElasticJobStep -Name $defaultJobStepName
$jobStep = $jobStep | Set-AzSqlElasticJobStep -TargetGroupName $targetGroupName `
-CredentialName $credentialName -CommandText $commandText `
-OutputDatabaseObject $outputDatabase `
-OutputSchemaName $outputSchemaName -OutputTableName $outputTableName `
-OutputCredentialName $outputCredentialName
}
}
else {
try {
$jobStep = $job | Add-AzSqlElasticJobStep -Name $defaultJobStepName -TargetGroupName $targetGroupName -CredentialName $credentialName -CommandText $commandText
}
catch [System.Management.Automation.PSArgumentException] {
$jobStep = $job | Get-AzSqlElasticJobStep -Name $defaultJobStepName
$jobStep = $jobStep | Set-AzSqlElasticJobStep -TargetGroupName $targetGroupName -CredentialName $credentialName -CommandText $commandText
}
}
Log-ChildOutput ("Added step " + $jobStep.StepName + " using target group " + $jobStep.TargetGroupName + "
using credential " + $jobStep.CredentialName)
Log-ChildOutput("Command text script taken from content name " + $contentName)

if ($jobStep.Output -ne $null) {


Log-ChildOutput ("With output target as (" + $jobStep.Output.ServerName + "," +
$jobStep.Output.DatabaseName + "," + $jobStep.Output.SchemaName + "," + $jobStep.Output.TableName + ")")
}
}

To migrate your jobs, job content, job triggers, and job schedules over to your new Elastic Job agent's database,
execute the Migrate-Jobs cmdlet passing in your agent.
Jobs with multiple triggers with different schedules are separated into multiple jobs with naming scheme: "
<jobName> (<scheduleName>)".
Job contents are migrated to a job by adding a default job step named JobStep with associated command
text.
Jobs are disabled by default so that you can validate them before enabling them.

Migrate-Jobs $agent

Sample output:

--------------------- Migrating jobs and job steps ---------------------


Job job1
- Repeats once
- Added step JobStep using target group cc2 using credential cred1
- Command text script taken from content name SampleContext
Job job2
- Repeats once
- Added step JobStep using target group (s1,db1) using credential cred1
- Command text script taken from content name SampleContent
- With output target as (s1,db1,dbo,sampleTable)
Job job3 (repeat every 10 min)
- Schedule with start time 05/16/2018 22:05:28 and end time at 12/31/9999 11:59:59
- Repeats every PT10M
- Added step JobStep using target group cc1 using credential cred1
- Command text script taken from content name SampleContent
Job job3 (repeat every 5 min)
- Schedule with start time 05/16/2018 22:05:31 and end time at 12/31/9999 11:59:59
- Repeats every PT5M
- Added step JobStep using target group cc1 using credential cred1
- Command text script taken from content name SampleContent
Job job4
- Repeats once
- Added step JobStep using target group (s1,db1) using credential cred1
- Command text script taken from content name SampleContent

Migration Complete
The job database should now have all of the job credentials, targets, job triggers, job schedules, job contents,
and jobs migrated over.
To confirm that everything migrated correctly, use the following scripts:

$creds = $agent | Get-AzSqlElasticJobCredential


$targetGroups = $agent | Get-AzSqlElasticJobTargetGroup
$jobs = $agent | Get-AzSqlElasticJob
$steps = $jobs | Get-AzSqlElasticJobStep

To test that jobs are executing correctly, start them:

$jobs | Start-AzSqlElasticJob

For any jobs that were running on a schedule, remember to enable them so that they can run in the background:

$jobs | Set-AzSqlElasticJob -Enable

Next steps
Create and manage Elastic Jobs using PowerShell
Create and manage Elastic Jobs using Transact-SQL (T-SQL)
Write audit to a storage account behind VNet and
firewall
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics


Auditing for Azure SQL Database and Azure Synapse Analytics supports writing database events to an Azure
Storage account behind a virtual network and firewall.
This article explains two ways to configure Azure SQL Database and Azure storage account for this option. The
first uses the Azure portal, the second uses REST.

Background
Azure Virtual Network (VNet) is the fundamental building block for your private network in Azure. VNet enables
many types of Azure resources, such as Azure Virtual Machines (VM), to securely communicate with each other,
the internet, and on-premises networks. VNet is similar to a traditional network in your own data center, but
brings with it additional benefits of Azure infrastructure such as scale, availability, and isolation.
To learn more about VNet concepts and best practices, see What is Azure Virtual Network.
To learn more about how to create a virtual network, see Quickstart: Create a virtual network using the Azure
portal.

Prerequisites
For audit to write to a storage account behind a VNet or firewall, the following prerequisites are required:
A general-purpose v2 storage account. If you have a general-purpose v1 or blob storage account, upgrade to
a general-purpose v2 storage account. For more information, see Types of storage accounts.
The storage account must be in the same tenant and in the same location as the logical SQL server (they can
be in different subscriptions).
The Azure Storage account requires Allow trusted Microsoft services to access this storage account. Set
this on the storage account's Firewalls and Virtual networks settings page.
You must have Microsoft.Authorization/roleAssignments/write permission on the selected storage account.
For more information, see Azure built-in roles.

Configure in Azure portal


Connect to Azure portal with your subscription. Navigate to the resource group and server.
1. Click on Auditing under the Security heading. Select On .
2. Select Storage . Select the storage account where logs will be saved. The storage account must comply
with the requirements listed in Prerequisites.
3. Open Storage details
NOTE
If the selected Storage account is behind VNet, you will see the following message:
You have selected a storage account that is behind a firewall or in a virtual network. Using this
storage requires to enable 'Allow trusted Microsoft services to access this storage account' on the
storage account and creates a server managed identity with 'storage blob data contributor' RBAC.

If you do not see this message, then the storage account is not behind a VNet.

4. Select the number of days for the retention period. Then click OK . Logs older than the retention period
are deleted.
5. Select Save on your auditing settings.
You have successfully configured audit to write to a storage account behind a VNet or firewall.

Configure with REST commands


As an alternative to using the Azure portal, you can use REST commands to configure audit to write database
events on a storage account behind a VNet and Firewall.
The sample scripts in this section require you to update the script before you run them. Replace the following
values in the scripts:

SA M P L E VA L UE SA M P L E DESC RIP T IO N

<subscriptionId> Azure subscription ID

<resource group> Resource group

<logical SQL Server> Server name

<administrator login> Administrator account

<complex password> Complex password for the administrator account

To configure SQL Audit to write events to a storage account behind a VNet or Firewall:
1. Register your server with Azure Active Directory (Azure AD). Use either PowerShell or REST API.
PowerShell

Connect-AzAccount
Select-AzSubscription -SubscriptionId <subscriptionId>
Set-AzSqlServer -ResourceGroupName <your resource group> -ServerName <azure server name> -AssignIdentity

REST API :
Sample request

PUT https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.Sql/servers/<azure server name>?api-version=2015-05-01-preview

Request body
{
"identity": {
"type": "SystemAssigned",
},
"properties": {
"fullyQualifiedDomainName": "<azure server name>.database.windows.net",
"administratorLogin": "<administrator login>",
"administratorLoginPassword": "<complex password>",
"version": "12.0",
"state": "Ready"
}
}

2. Assign the Storage Blob Data Contributor role to the server hosting the database that you registered with
Azure Active Directory (Azure AD) in the previous step.
For detailed steps, see Assign Azure roles using the Azure portal.

NOTE
Only members with Owner privilege can perform this step. For various Azure built-in roles, refer to Azure built-in
roles.
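If you prefer to script this assignment instead of using the portal, the following minimal Az PowerShell sketch (all resource names are placeholders you must replace) grants the role to the system-assigned identity created in the previous step:

# Placeholders; replace with your own values.
$server = Get-AzSqlServer -ResourceGroupName '<resource group>' -ServerName '<azure server name>'
$storageAccountId = '/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.Storage/storageAccounts/<storage account>'

# Grant the server's system-assigned managed identity write access to the audit storage account.
New-AzRoleAssignment -ObjectId $server.Identity.PrincipalId `
    -RoleDefinitionName 'Storage Blob Data Contributor' `
    -Scope $storageAccountId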

3. Configure the server's blob auditing policy, without specifying a storageAccountAccessKey:


Sample request

PUT https://management.azure.com/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.Sql/servers/<azure server name>/auditingSettings/default?api-version=2017-03-01-preview

Request body

{
  "properties": {
    "state": "Enabled",
    "storageEndpoint": "https://<storage account>.blob.core.windows.net"
  }
}

Using Azure PowerShell


Create or Update Database Auditing Policy (Set-AzSqlDatabaseAudit)
Create or Update Server Auditing Policy (Set-AzSqlServerAudit)
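As a rough sketch of the server-level cmdlet (placeholder values; the storage account is assumed to meet the prerequisites above), pointing auditing at the storage account by resource ID avoids specifying an access key:

# Placeholders; replace with your own values.
Set-AzSqlServerAudit -ResourceGroupName '<resource group>' -ServerName '<azure server name>' `
    -BlobStorageTargetState Enabled `
    -StorageAccountResourceId '/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.Storage/storageAccounts/<storage account>' `
    -RetentionInDays 90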

Using Azure Resource Manager template


You can configure auditing to write database events to a storage account behind a virtual network and firewall
using an Azure Resource Manager template, as shown in the following example:

IMPORTANT
To use a storage account behind a virtual network and firewall, you need to set the isStorageBehindVnet parameter to true.
Deploy an Azure SQL Server with Auditing enabled to write audit logs to a blob storage

NOTE
The linked sample is on an external public repository and is provided 'as is', without warranty, and is not supported
under any Microsoft support program/service.

Next steps
Use PowerShell to create a virtual network service endpoint, and then a virtual network rule for Azure SQL
Database.
Virtual Network Rules: Operations with REST APIs
Use virtual network service endpoints and rules for servers
Configure Advanced Threat Protection for Azure
SQL Database
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Advanced Threat Protection for Azure SQL Database detects anomalous activities indicating unusual and
potentially harmful attempts to access or exploit databases. Advanced Threat Protection can identify Potential
SQL injection, Access from unusual location or data center, Access from unfamiliar principal or
potentially harmful application, and Brute force SQL credentials. For more details, see Advanced Threat
Protection alerts.
You can receive notifications about detected threats via email or in the Azure portal.
Advanced Threat Protection is part of the Microsoft Defender for SQL offering, which is a unified package for
advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central
Microsoft Defender for SQL portal.

Set up Advanced Threat Protection in the Azure portal


1. Sign into the Azure portal.
2. Navigate to the configuration page of the server you want to protect. In the security settings, select
Microsoft Defender for Cloud .
3. On the Microsoft Defender for Cloud configuration page:
a. If Microsoft Defender for SQL hasn't yet been enabled, select Enable Microsoft Defender for
SQL .
b. Select Configure .

c. Under ADVANCED THREAT PROTECTION SETTINGS , select Add your contact details to
the subscription's email settings in Defender for Cloud .
d. Provide the list of emails to receive notifications upon detection of anomalous database activities
in the Additional email addresses (separated by commas) text box.
e. Optionally customize the severity of alerts that will trigger notifications to be sent under
Notification types .
f. Select Save .

Set up Advanced Threat Protection using PowerShell


For a script example, see Configure auditing and Advanced Threat Protection using PowerShell.
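As a minimal sketch (assuming the Az.Sql module; resource names are placeholders), enabling Microsoft Defender for SQL, which includes Advanced Threat Protection, at the server level looks roughly like this:

# Placeholders; replace with your own values.
# Enables Microsoft Defender for SQL (and with it Advanced Threat Protection) on the server.
Enable-AzSqlServerAdvancedDataSecurity -ResourceGroupName 'myResourceGroup' -ServerName 'myserver'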
Next steps
Learn more about Advanced Threat Protection and Microsoft Defender for SQL in the following articles:
Advanced Threat Protection
Advanced Threat Protection in SQL Managed Instance
Microsoft Defender for SQL
Auditing for Azure SQL Database and Azure Synapse Analytics
Microsoft Defender for Cloud
For more information on pricing, see the SQL Database pricing page
Get started with SQL Database dynamic data
masking with the Azure portal
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article shows you how to implement dynamic data masking with the Azure portal. You can also implement
dynamic data masking using Azure SQL Database cmdlets or the REST API.

NOTE
This feature cannot be set using portal for SQL Managed Instance (use PowerShell or REST API). For more information, see
Dynamic Data Masking.

Enable dynamic data masking


1. Launch the Azure portal at https://portal.azure.com.
2. Go to your database resource in the Azure portal.
3. Select the Dynamic Data Masking blade under the Security section.

4. In the Dynamic Data Masking configuration page, you may see some database columns that the
recommendations engine has flagged for masking. To accept the recommendations, click
Add Mask for one or more columns; a mask is created based on the default type for each column. You
can change the masking function by clicking the masking rule and editing the masking field format to
a different format of your choice. Be sure to click Save to save your settings.
5. To add a mask for any column in your database, at the top of the Dynamic Data Masking configuration
page, click Add Mask to open the Add Masking Rule configuration page.

6. Select the Schema , Table and Column to define the designated field for masking.
7. Select how to mask from the list of sensitive data masking categories.
8. Click Add in the data masking rule page to update the set of masking rules in the dynamic data masking
policy.
9. Type the SQL users or Azure Active Directory (Azure AD) identities that should be excluded from masking,
and have access to the unmasked sensitive data. This should be a semicolon-separated list of users. Users
with administrator privileges always have access to the original unmasked data.

TIP
To allow the application layer to display sensitive data for privileged application users, add the SQL user or
Azure AD identity that the application uses to query the database. It is highly recommended that this list contain a
minimal number of privileged users to minimize exposure of the sensitive data.

10. Click Save in the data masking configuration page to save the new or updated masking policy.
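If you prefer scripting, a minimal PowerShell sketch (resource names, table, column, and masking function are illustrative placeholders) that enables the policy, excludes one privileged user, and adds one masking rule looks roughly like this:

# Placeholders; replace with your own values.
# Enable dynamic data masking on the database and exclude a privileged user from masking.
Set-AzSqlDatabaseDataMaskingPolicy -ResourceGroupName 'myResourceGroup' -ServerName 'myserver' `
    -DatabaseName 'mydatabase' -DataMaskingState Enabled -PrivilegedUsers 'appuser'

# Add a rule that applies the built-in email mask to an illustrative column.
New-AzSqlDatabaseDataMaskingRule -ResourceGroupName 'myResourceGroup' -ServerName 'myserver' `
    -DatabaseName 'mydatabase' -SchemaName 'dbo' -TableName 'Customers' `
    -ColumnName 'Email' -MaskingFunction 'Email'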

Next steps
For an overview of dynamic data masking, see dynamic data masking.
You can also implement dynamic data masking using Azure SQL Database cmdlets or the REST API.
Create server configured with user-assigned
managed identity and customer-managed TDE
7/12/2022 • 6 minutes to read • Edit Online

NOTE
Assigning a user-assigned managed identity for Azure SQL logical servers and Managed Instances is in public preview .

APPLIES TO: Azure SQL Database


This how-to guide outlines the steps to create an Azure SQL logical server configured with transparent data
encryption (TDE) with customer-managed keys (CMK) using a user-assigned managed identity to access Azure
Key Vault.

Prerequisites
This how-to guide assumes that you've already created an Azure Key Vault and imported a key into it to use
as the TDE protector for Azure SQL Database. For more information, see transparent data encryption with
BYOK support.
Soft-delete and Purge protection must be enabled on the key vault
You must have created a user-assigned managed identity and provided it the required TDE permissions (Get,
Wrap Key, Unwrap Key) on the above key vault. For creating a user-assigned managed identity, see Create a
user-assigned managed identity.
You must have Azure PowerShell installed and running.
[Recommended but optional] Create the key material for the TDE protector in a hardware security module
(HSM) or local key store first, and import the key material to Azure Key Vault. Follow the instructions for
using a hardware security module (HSM) and Key Vault to learn more.

Create server configured with TDE with customer-managed key (CMK)


The following steps outline the process of creating a new Azure SQL Database logical server and a new database
with a user-assigned managed identity assigned. The user-assigned managed identity is required for configuring
a customer-managed key for TDE at server creation time.
Portal
The Azure CLI
PowerShell
ARM Template

1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL databases , leave Resource type set to Single database , and select Create .
4. On the Basics tab of the Create SQL Database form, under Project details , select the desired Azure
Subscription .
5. For Resource group , select Create new , enter a name for your resource group, and select OK .
6. For Database name enter ContosoHR .
7. For Server, select Create new, and fill out the New server form with the following values:
Server name: Enter a unique server name. Server names must be globally unique for all servers
in Azure, not just unique within a subscription. Enter something like mysqlserver135, and the
Azure portal will let you know if it's available or not.
Server admin login: Enter an admin login name, for example: azureuser.
Password: Enter a password that meets the password requirements, and enter it again in the
Confirm password field.
Location: Select a location from the dropdown list.

8. Select Next: Networking at the bottom of the page.


9. On the Networking tab, for Connectivity method, select Public endpoint.
10. For Firewall rules, set Add current client IP address to Yes. Leave Allow Azure services and
resources to access this server set to No.
11. Select Next: Security at the bottom of the page.
12. On the Security tab, under Identity (preview) , select Configure Identities .

13. On the Identity (preview) blade, select User assigned managed identity and then select Add . Select
the desired Subscription and then under User assigned managed identities select the desired user-
assigned managed identity from the selected subscription. Then select the Select button.
14. Under Primary identity, select the same user-assigned managed identity selected in the previous step.

15. Select Apply


16. On the Security tab, under Transparent data encryption, select Configure Transparent data
encryption. Then select Select a key and select Change key. Select the desired Subscription, Key
vault, Key, and Version for the customer-managed key to be used for TDE. Select the Select button.

17. Select Apply


18. Select Review + create at the bottom of the page
19. On the Review + create page, after reviewing, select Create .
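For the PowerShell path, a rough sketch of the same server creation (all names are placeholders, and it assumes a recent Az.Sql module version that exposes the user-assigned identity and key parameters) looks like this:

# Placeholders; replace with your own values.
New-AzSqlServer -ResourceGroupName 'myResourceGroup' -Location 'westus2' `
    -ServerName 'mysqlserver135' -ServerVersion '12.0' `
    -SqlAdministratorCredentials (Get-Credential) `
    -AssignIdentity -IdentityType 'UserAssigned' `
    -UserAssignedIdentityId '/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity name>' `
    -PrimaryUserAssignedIdentityId '/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity name>' `
    -KeyId 'https://<key vault name>.vault.azure.net/keys/<key name>/<key version>'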

Next steps
Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: Turn on TDE using
your own key from Key Vault.
Azure SQL Database and Azure Synapse IP firewall
rules
7/12/2022 • 12 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure Synapse Analytics


When you create a new server in Azure SQL Database or Azure Synapse Analytics named mysqlserver, for
example, a server-level firewall blocks all access to the public endpoint for the server (which is accessible at
mysqlserver.database.windows.net). For simplicity, SQL Database is used to refer to both SQL Database and
Azure Synapse Analytics.

IMPORTANT
This article does not apply to Azure SQL Managed Instance. For information about network configuration, see Connect
your application to Azure SQL Managed Instance.
Azure Synapse only supports server-level IP firewall rules. It doesn't support database-level IP firewall rules.

How the firewall works


Connection attempts from the internet and Azure must pass through the firewall before they reach your server
or database, as the following diagram shows.
Server-level IP firewall rules
These rules enable clients to access your entire server, that is, all the databases managed by the server. The rules
are stored in the master database. You can have up to 256 server-level IP firewall rules for a server. If you
have the Allow Azure Services and resources to access this server setting enabled, this counts as a
single firewall rule for the server.
You can configure server-level IP firewall rules by using the Azure portal, PowerShell, or Transact-SQL
statements.
NOTE
The maximum number of server-level IP firewall rules is limited to 128 when configuring using the Azure portal.

To use the portal or PowerShell, you must be the subscription owner or a subscription contributor.
To use Transact-SQL, you must connect to the master database as the server-level principal login or as the
Azure Active Directory administrator. (A server-level IP firewall rule must first be created by a user who has
Azure-level permissions.)

NOTE
By default, during creation of a new logical SQL server from the Azure portal, the Allow Azure Services and
resources to access this server setting is set to No.

Database -level IP firewall rules


Database-level IP firewall rules enable clients to access certain (secure) databases. You create the rules for each
database (including the master database), and they're stored in the individual database.
You can only create and manage database-level IP firewall rules for master and user databases by using
Transact-SQL statements and only after you configure the first server-level firewall.
If you specify an IP address range in the database-level IP firewall rule that's outside the range in the server-
level IP firewall rule, only those clients that have IP addresses in the database-level range can access the
database.
You can have up to 256 database-level IP firewall rules for a database. For more information about
configuring database-level IP firewall rules, see the example later in this article and see
sp_set_database_firewall_rule (Azure SQL Database).
Recommendations for how to set firewall rules
We recommend that you use database-level IP firewall rules whenever possible. This practice enhances security
and makes your database more portable. Use server-level IP firewall rules for administrators. Also use them
when you have many databases that have the same access requirements, and you don't want to configure each
database individually.

NOTE
For information about portable databases in the context of business continuity, see Authentication requirements for
disaster recovery.

Server-level versus database-level IP firewall rules


Should users of one database be fully isolated from another database?
If yes, use database-level IP firewall rules to grant access. This method avoids using server-level IP firewall rules,
which permit access through the firewall to all databases. That would reduce the depth of your defenses.
Do users at the IP addresses need access to all databases?
If yes, use server-level IP firewall rules to reduce the number of times that you have to configure IP firewall rules.
Does the person or team who configures the IP firewall rules only have access through the Azure portal,
PowerShell, or the REST API?
If so, you must use server-level IP firewall rules. Database-level IP firewall rules can only be configured through
Transact-SQL.
Is the person or team who configures the IP firewall rules prohibited from having high-level permission at the
database level?
If so, use server-level IP firewall rules. You need at least CONTROL DATABASE permission at the database level to
configure database-level IP firewall rules through Transact-SQL.
Does the person or team who configures or audits the IP firewall rules centrally manage IP firewall rules for
many (perhaps hundreds) of databases?
In this scenario, best practices are determined by your needs and environment. Server-level IP firewall rules
might be easier to configure, but scripting can configure rules at the database-level. And even if you use server-
level IP firewall rules, you might need to audit database-level IP firewall rules to see if users with CONTROL
permission on the database create database-level IP firewall rules.
Can I use a mix of server-level and database-level IP firewall rules?
Yes. Some users, such as administrators, might need server-level IP firewall rules. Other users, such as users of a
database application, might need database-level IP firewall rules.
Connections from the internet
When a computer tries to connect to your server from the internet, the firewall first checks the originating IP
address of the request against the database-level IP firewall rules for the database that the connection requests.
If the address is within a range that's specified in the database-level IP firewall rules, the connection is
granted to the database that contains the rule.
If the address isn't within a range in the database-level IP firewall rules, the firewall checks the server-level IP
firewall rules. If the address is within a range that's in the server-level IP firewall rules, the connection is
granted. Server-level IP firewall rules apply to all databases managed by the server.
If the address isn't within a range that's in any of the database-level or server-level IP firewall rules, the
connection request fails.

NOTE
To access Azure SQL Database from your local computer, ensure that the firewall on your network and local computer
allow outgoing communication on TCP port 1433.

Connections from inside Azure


To allow applications hosted inside Azure to connect to your SQL server, Azure connections must be enabled. To
enable Azure connections, there must be a firewall rule with starting and ending IP addresses set to 0.0.0.0. This
recommended rule is only applicable to Azure SQL Database.
When an application from Azure tries to connect to the server, the firewall checks that Azure connections are
allowed by verifying that this firewall rule exists. This can be turned on directly from the Azure portal blade by
switching the Allow Azure Services and resources to access this server setting to ON in the Firewalls and
virtual networks settings. Switching the setting to ON creates an inbound firewall rule for IP 0.0.0.0 - 0.0.0.0
named AllowAllWindowsAzureIps. The rule can be viewed in your master database sys.firewall_rules view.
Use PowerShell or the Azure CLI to create a firewall rule with start and end IP addresses set to 0.0.0.0 if you’re
not using the portal.
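For example, a minimal PowerShell sketch of enabling that rule (assuming the Az.Sql module; names are placeholders):

# Placeholders; replace with your own values.
# Creates the special 0.0.0.0 rule (AllowAllWindowsAzureIps) that allows connections from Azure.
New-AzSqlServerFirewallRule -ResourceGroupName 'myResourceGroup' -ServerName 'mysqldbserver' -AllowAllAzureIPs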
IMPORTANT
This option configures the firewall to allow all connections from Azure, including connections from the subscriptions of
other customers. If you select this option, make sure that your login and user permissions limit access to authorized users
only.

Permissions
To create and manage IP firewall rules for the server, you must be one of the following:
in the SQL Server Contributor role
in the SQL Security Manager role
the owner of the resource that contains the Azure SQL Server

Create and manage IP firewall rules


You create the first server-level firewall setting by using the Azure portal or programmatically by using Azure
PowerShell, Azure CLI, or an Azure REST API. You create and manage additional server-level IP firewall rules by
using these methods or Transact-SQL.

IMPORTANT
Database-level IP firewall rules can only be created and managed by using Transact-SQL.

To improve performance, server-level IP firewall rules are temporarily cached at the database level. To refresh
the cache, see DBCC FLUSHAUTHCACHE.

TIP
You can use Database Auditing to audit server-level and database-level firewall changes.

Use the Azure portal to manage server-level IP firewall rules


To set a server-level IP firewall rule in the Azure portal, go to the overview page for your database or your
server.

TIP
For a tutorial, see Create a database using the Azure portal.

From the database overview page


1. To set a server-level IP firewall rule from the database overview page, select Set server firewall on the
toolbar, as the following image shows.
The Firewall settings page for the server opens.
2. Select Add client IP on the toolbar to add the IP address of the computer that you're using, and then
select Save . A server-level IP firewall rule is created for your current IP address.

From the server overview page


The overview page for your server opens. It shows the fully qualified server name (such as
mynewserver20170403.database.windows.net) and provides options for further configuration.
1. To set a server-level rule from this page, select Firewall from the Settings menu on the left side.
2. Select Add client IP on the toolbar to add the IP address of the computer that you're using, and then
select Save . A server-level IP firewall rule is created for your current IP address.
Use Transact-SQL to manage IP firewall rules
Catalog view or stored procedure     Level      Description
sys.firewall_rules                   Server     Displays the current server-level IP firewall rules
sp_set_firewall_rule                 Server     Creates or updates server-level IP firewall rules
sp_delete_firewall_rule              Server     Removes server-level IP firewall rules
sys.database_firewall_rules          Database   Displays the current database-level IP firewall rules
sp_set_database_firewall_rule        Database   Creates or updates the database-level IP firewall rules
sp_delete_database_firewall_rule     Database   Removes database-level IP firewall rules

The following example reviews the existing rules, enables a range of IP addresses on the server Contoso, and
deletes an IP firewall rule:

SELECT * FROM sys.firewall_rules ORDER BY name;

Next, add a server-level IP firewall rule.

EXECUTE sp_set_firewall_rule @name = N'ContosoFirewallRule',
    @start_ip_address = '192.168.1.1', @end_ip_address = '192.168.1.10'

To delete a server-level IP firewall rule, execute the sp_delete_firewall_rule stored procedure. The following
example deletes the rule ContosoFirewallRule:

EXECUTE sp_delete_firewall_rule @name = N'ContosoFirewallRule'

Use PowerShell to manage server-level IP firewall rules

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all development is now for
the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az and AzureRm
modules are substantially identical.
Cmdlet                            Level    Description
Get-AzSqlServerFirewallRule       Server   Returns the current server-level firewall rules
New-AzSqlServerFirewallRule       Server   Creates a new server-level firewall rule
Set-AzSqlServerFirewallRule       Server   Updates the properties of an existing server-level firewall rule
Remove-AzSqlServerFirewallRule    Server   Removes server-level firewall rules

The following example uses PowerShell to set a server-level IP firewall rule:

New-AzSqlServerFirewallRule -ResourceGroupName "myResourceGroup" `
    -ServerName $servername `
    -FirewallRuleName "ContosoIPRange" -StartIpAddress "192.168.1.0" -EndIpAddress "192.168.1.255"

TIP
For $servername specify the server name and not the fully qualified DNS name, for example, specify mysqldbserver instead of
mysqldbserver.database.windows.net.
For PowerShell examples in the context of a quickstart, see Create DB - PowerShell and Create a single database and
configure a server-level IP firewall rule using PowerShell.

Use CLI to manage server-level IP firewall rules


Command                               Level    Description
az sql server firewall-rule create    Server   Creates a server IP firewall rule
az sql server firewall-rule list      Server   Lists the IP firewall rules on a server
az sql server firewall-rule show      Server   Shows the detail of an IP firewall rule
az sql server firewall-rule update    Server   Updates an IP firewall rule
az sql server firewall-rule delete    Server   Deletes an IP firewall rule

The following example uses CLI to set a server-level IP firewall rule:

az sql server firewall-rule create --resource-group myResourceGroup --server $servername \
    -n ContosoIPRange --start-ip-address 192.168.1.0 --end-ip-address 192.168.1.255

TIP
For $servername specify the server name and not the fully qualified DNS name, for example, specify mysqldbserver instead of
mysqldbserver.database.windows.net.
For a CLI example in the context of a quickstart, see Create DB - Azure CLI and Create a single database and configure a
server-level IP firewall rule using the Azure CLI.
Use a REST API to manage server-level IP firewall rules
API                               Level    Description
List firewall rules               Server   Displays the current server-level IP firewall rules
Create or update firewall rules   Server   Creates or updates server-level IP firewall rules
Delete firewall rules             Server   Removes server-level IP firewall rules
Get firewall rules                Server   Gets server-level IP firewall rules

Troubleshoot the database firewall


Consider the following points when access to Azure SQL Database doesn't behave as you expect.
Local firewall configuration:
Before your computer can access Azure SQL Database, you may need to create a firewall exception on
your computer for TCP port 1433. To make connections inside the Azure cloud boundary, you may have
to open additional ports. For more information, see the "SQL Database: Outside vs inside" section of Ports
beyond 1433 for ADO.NET 4.5 and Azure SQL Database.
Network address translation:
Because of network address translation (NAT), the IP address that's used by your computer to connect to
Azure SQL Database may be different than the IP address in your computer's IP configuration settings. To
view the IP address that your computer is using to connect to Azure:
1. Sign in to the portal.
2. Go to the Configure tab on the server that hosts your database.
3. The Current Client IP Address is displayed in the Allowed IP Addresses section. Select Add for
Allowed IP Addresses to allow this computer to access the server.
Changes to the allow list haven't taken effect yet:
There may be up to a five-minute delay for changes to the Azure SQL Database firewall configuration to
take effect.
The login isn't authorized, or an incorrect password was used:
If a login doesn't have permissions on the server or the password is incorrect, the connection to the
server is denied. Creating a firewall setting only gives clients an opportunity to try to connect to your
server. The client must still provide the necessary security credentials. For more information about
preparing logins, see Controlling and granting database access.
Dynamic IP address:
If you have an internet connection that uses dynamic IP addressing and you have trouble getting through
the firewall, try one of the following solutions:
Ask your internet service provider for the IP address range that's assigned to your client computers
that access the server. Add that IP address range as an IP firewall rule.
Get static IP addressing instead for your client computers. Add the IP addresses as IP firewall rules.
Next steps
Confirm that your corporate network environment allows inbound communication from the compute IP
address ranges (including SQL ranges) that are used by the Azure datacenters. You might have to add those
IP addresses to the allow list. See Microsoft Azure datacenter IP ranges.
See our quickstart about creating a single database in Azure SQL Database.
For help with connecting to a database in Azure SQL Database from open-source or third-party applications,
see Client quickstart code samples to Azure SQL Database.
For information about additional ports that you may need to open, see the "SQL Database: Outside vs inside"
section of Ports beyond 1433 for ADO.NET 4.5 and SQL Database
For an overview of Azure SQL Database security, see Securing your database.
PowerShell: Create a Virtual Service endpoint and
VNet rule for Azure SQL Database
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Virtual network rules are a firewall security feature that controls whether the logical SQL server for your
Azure SQL Database databases, elastic pools, or databases in Azure Synapse accepts communications that are
sent from particular subnets in virtual networks.

IMPORTANT
This article applies to Azure SQL Database, including Azure Synapse (formerly SQL DW). For simplicity, the term Azure SQL
Database in this article applies to databases belonging to either Azure SQL Database or Azure Synapse. This article does
not apply to Azure SQL Managed Instance because it does not have a service endpoint associated with it.

This article demonstrates a PowerShell script that takes the following actions:
1. Creates a Microsoft Azure Virtual Service endpoint on your subnet.
2. Adds the endpoint to the firewall of your server, to create a virtual network rule.
For more background, see Virtual Service endpoints for Azure SQL Database.

TIP
If all you need is to assess or add the Virtual Service endpoint type name for Azure SQL Database to your subnet, you
can skip ahead to our more direct PowerShell script.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql Cmdlets. For the older module, see AzureRM.Sql. The arguments for the commands in the Az module
and in the AzureRm modules are substantially identical.

Major cmdlets
This article emphasizes the New-AzSqlServerVirtualNetworkRule cmdlet that adds the subnet endpoint to
the access control list (ACL) of your server, thereby creating a rule.
The following list shows the sequence of other major cmdlets that you must run to prepare for your call to
New-AzSqlServerVirtualNetworkRule. In this article, these calls occur in script 3 "Virtual network rule":
1. New-AzVirtualNetworkSubnetConfig: Creates a subnet object.
2. New-AzVirtualNetwork: Creates your virtual network, giving it the subnet.
3. Set-AzVirtualNetworkSubnetConfig: Assigns a Virtual Service endpoint to your subnet.
4. Set-AzVirtualNetwork: Persists updates made to your virtual network.
5. New-AzSqlServerVirtualNetworkRule: After your subnet is an endpoint, adds your subnet as a virtual
network rule, into the ACL of your server.
This cmdlet offers the parameter -IgnoreMissingVNetServiceEndpoint, starting in Azure RM
PowerShell Module version 5.1.1.

Prerequisites for running PowerShell


You can already log in to Azure, such as through the Azure portal.
You can already run PowerShell scripts.

NOTE
Ensure that service endpoints are turned on for the VNet/subnet that you want to add to your server;
otherwise, creation of the VNet firewall rule will fail.

One script divided into four chunks


Our demonstration PowerShell script is divided into a sequence of smaller scripts. The division eases learning
and provides flexibility. The scripts must be run in their indicated sequence. If you do not have time now to run
the scripts, our actual test output is displayed after script 4.

Script 1: Variables
This first PowerShell script assigns values to variables. The subsequent scripts depend on these variables.

IMPORTANT
Before you run this script, you can edit the values, if you like. For example, if you already have a resource group, you
might want to edit your resource group name as the assigned value.
Your subscription name should be edited into the script.

PowerShell script 1 source code


######### Script 1 ########################################
## Log in to your Azure account. ##
## (Needed only one time per powershell.exe session.) ##
###########################################################

$yesno = Read-Host 'Do you need to log into Azure (only one time per powershell.exe session)? [yes/no]'
if ('yes' -eq $yesno) { Connect-AzAccount }

###########################################################
## Assignments to variables used by the later scripts. ##
###########################################################

# You can edit these values, if necessary.


$SubscriptionName = 'yourSubscriptionName'
Select-AzSubscription -SubscriptionName $SubscriptionName

$ResourceGroupName = 'RG-YourNameHere'
$Region = 'westcentralus'

$VNetName = 'myVNet'
$SubnetName = 'mySubnet'
$VNetAddressPrefix = '10.1.0.0/16'
$SubnetAddressPrefix = '10.1.1.0/24'
$VNetRuleName = 'myFirstVNetRule-ForAcl'

$SqlDbServerName = 'mysqldbserver-forvnet'
$SqlDbAdminLoginName = 'ServerAdmin'
$SqlDbAdminLoginPassword = 'ChangeYourAdminPassword1'

$ServiceEndpointTypeName_SqlDb = 'Microsoft.Sql' # Official type name.

Write-Host 'Completed script 1, the "Variables".'

Script 2: Prerequisites
This script prepares for the next script, where the endpoint action is. This script creates for you the following
listed items, but only if they do not already exist. You can skip script 2 if you are sure these items already exist:
Azure resource group
Logical SQL server
PowerShell script 2 source code
######### Script 2 ########################################
## Ensure your Resource Group already exists. ##
###########################################################

Write-Host "Check whether your Resource Group already exists."

$gottenResourceGroup = $null
$gottenResourceGroup = Get-AzResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue

if ($null -eq $gottenResourceGroup) {


Write-Host "Creating your missing Resource Group - $ResourceGroupName."
New-AzResourceGroup -Name $ResourceGroupName -Location $Region
} else {
Write-Host "Good, your Resource Group already exists - $ResourceGroupName."
}

$gottenResourceGroup = $null

###########################################################
## Ensure your server already exists. ##
###########################################################

Write-Host "Check whether your server already exists."

$sqlDbServer = $null
$azSqlParams = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
ErrorAction = 'SilentlyContinue'
}
$sqlDbServer = Get-AzSqlServer @azSqlParams

if ($null -eq $sqlDbServer) {


Write-Host "Creating the missing server - $SqlDbServerName."
Write-Host "Gather the credentials necessary to next create a server."

$sqlAdministratorCredentials = [pscredential]::new($SqlDbAdminLoginName, (ConvertTo-SecureString -String $SqlDbAdminLoginPassword -AsPlainText -Force))

if ($null -eq $sqlAdministratorCredentials) {


Write-Host "ERROR, unable to create SQL administrator credentials. Now ending."
return
}

Write-Host "Create your server."

$sqlSrvParams = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
Location = $Region
SqlAdministratorCredentials = $sqlAdministratorCredentials
}
New-AzSqlServer @sqlSrvParams
} else {
Write-Host "Good, your server already exists - $SqlDbServerName."
}

$sqlAdministratorCredentials = $null
$sqlDbServer = $null

Write-Host 'Completed script 2, the "Prerequisites".'

Script 3: Create an endpoint and a rule


This script creates a virtual network with a subnet. Then the script assigns the Microsoft.Sql endpoint type to
your subnet. Finally the script adds your subnet to the access control list (ACL), thereby creating a rule.
PowerShell script 3 source code

######### Script 3 ########################################


## Create your virtual network, and give it a subnet. ##
###########################################################

Write-Host "Define a subnet '$SubnetName', to be given soon to a virtual network."

$subnetParams = @{
Name = $SubnetName
AddressPrefix = $SubnetAddressPrefix
ServiceEndpoint = $ServiceEndpointTypeName_SqlDb
}
$subnet = New-AzVirtualNetworkSubnetConfig @subnetParams

Write-Host "Create a virtual network '$VNetName'.`nGive the subnet to the virtual network that we created."

$vnetParams = @{
Name = $VNetName
AddressPrefix = $VNetAddressPrefix
Subnet = $subnet
ResourceGroupName = $ResourceGroupName
Location = $Region
}
$vnet = New-AzVirtualNetwork @vnetParams

###########################################################
## Create a Virtual Service endpoint on the subnet. ##
###########################################################

Write-Host "Assign a Virtual Service endpoint 'Microsoft.Sql' to the subnet."

$vnetSubParams = @{
Name = $SubnetName
AddressPrefix = $SubnetAddressPrefix
VirtualNetwork = $vnet
ServiceEndpoint = $ServiceEndpointTypeName_SqlDb
}
$vnet = Set-AzVirtualNetworkSubnetConfig @vnetSubParams

Write-Host "Persist the updates made to the virtual network > subnet."

$vnet = Set-AzVirtualNetwork -VirtualNetwork $vnet

$vnet.Subnets[0].ServiceEndpoints # Display the first endpoint.

###########################################################
## Add the Virtual Service endpoint Id as a rule, ##
## into SQL Database ACLs. ##
###########################################################

Write-Host "Get the subnet object."

$vnet = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VNetName

$subnet = Get-AzVirtualNetworkSubnetConfig -Name $SubnetName -VirtualNetwork $vnet

Write-Host "Add the subnet .Id as a rule, into the ACLs for your server."

$ruleParams = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
VirtualNetworkRuleName = $VNetRuleName
VirtualNetworkSubnetId = $subnet.Id
}
New-AzSqlServerVirtualNetworkRule @ruleParams
Write-Host "Verify that the rule is in the SQL Database ACL."

$rule2Params = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
VirtualNetworkRuleName = $VNetRuleName
}
Get-AzSqlServerVirtualNetworkRule @rule2Params

Write-Host 'Completed script 3, the "Virtual-Network-Rule".'

Script 4: Clean-up
This final script deletes the resources that the previous scripts created for the demonstration. However, the script
asks for confirmation before it deletes the following:
Logical SQL server
Azure Resource Group
You can run script 4 any time after script 1 completes.
PowerShell script 4 source code
######### Script 4 ########################################
## Clean-up phase A: Unconditional deletes. ##
## ##
## 1. The test rule is deleted from SQL Database ACL. ##
## 2. The test endpoint is deleted from the subnet. ##
## 3. The test virtual network is deleted. ##
###########################################################

Write-Host "Delete the rule from the SQL Database ACL."

$removeParams = @{
ResourceGroupName = $ResourceGroupName
ServerName = $SqlDbServerName
VirtualNetworkRuleName = $VNetRuleName
ErrorAction = 'SilentlyContinue'
}
Remove-AzSqlServerVirtualNetworkRule @removeParams

Write-Host "Delete the endpoint from the subnet."

$vnet = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VNetName

Remove-AzVirtualNetworkSubnetConfig -Name $SubnetName -VirtualNetwork $vnet

Write-Host "Delete the virtual network (thus also deletes the subnet)."

$removeParams = @{
Name = $VNetName
ResourceGroupName = $ResourceGroupName
ErrorAction = 'SilentlyContinue'
}
Remove-AzVirtualNetwork @removeParams

###########################################################
## Clean-up phase B: Conditional deletes. ##
## ##
## These might have already existed, so user might ##
## want to keep. ##
## ##
## 1. Logical SQL server ##
## 2. Azure resource group ##
###########################################################

$yesno = Read-Host 'CAUTION !: Do you want to DELETE your server AND your resource group? [yes/no]'
if ('yes' -eq $yesno) {
Write-Host "Remove the server."

$removeParams = @{
ServerName = $SqlDbServerName
ResourceGroupName = $ResourceGroupName
ErrorAction = 'SilentlyContinue'
}
Remove-AzSqlServer @removeParams

Write-Host "Remove the Azure Resource Group."

Remove-AzResourceGroup -Name $ResourceGroupName -ErrorAction SilentlyContinue


} else {
Write-Host "Skipped over the DELETE of SQL Database and resource group."
}

Write-Host 'Completed script 4, the "Clean-Up".'

Verify your subnet is an endpoint


You might have a subnet that was already assigned the Microsoft.Sql type name, meaning it is already a
Virtual Service endpoint. You could use the Azure portal to create a virtual network rule from the endpoint.
Or, you might be unsure whether your subnet has the Microsoft.Sql type name. You can run the following
PowerShell script to take these actions:
1. Ascertain whether your subnet has the Microsoft.Sql type name.
2. Optionally, assign the type name if it is absent.
The script asks you to confirm, before it applies the absent type name.
Phases of the script
Here are the phases of the PowerShell script:
1. Log in to your Azure account (needed only once per PS session). Assign variables.
2. Search for your virtual network, and then for your subnet.
3. Is your subnet tagged as Microsoft.Sql endpoint server type?
4. Add a Virtual Service endpoint of type name Microsoft.Sql , on your subnet.

IMPORTANT
Before you run this script, you must edit the values assigned to the $-variables, near the top of the script.

Direct PowerShell source code


This PowerShell script does not update anything unless you respond yes when it asks you for confirmation. The
script can add the type name Microsoft.Sql to your subnet. But the script tries the add only if your subnet lacks
the type name.

### 1. Log in to your Azure account (needed only once per PS session). Assign variables.
$yesno = Read-Host 'Do you need to log into Azure (only one time per powershell.exe session)? [yes/no]'
if ('yes' -eq $yesno) { Connect-AzAccount }

# Assignments to variables used by the later scripts.


# You can EDIT these values, if necessary.

$SubscriptionName = 'yourSubscriptionName'
Select-AzSubscription -SubscriptionName "$SubscriptionName"

$ResourceGroupName = 'yourRGName'
$VNetName = 'yourVNetName'
$SubnetName = 'yourSubnetName'
$SubnetAddressPrefix = 'Obtain this value from the Azure portal.' # Looks roughly like: '10.0.0.0/24'

$ServiceEndpointTypeName_SqlDb = 'Microsoft.Sql' # Do NOT edit. Is official value.

### 2. Search for your virtual network, and then for your subnet.
# Search for the virtual network.
$vnet = $null
$vnet = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Name $VNetName

if ($vnet -eq $null) {


Write-Host "Caution: No virtual network found by the name '$VNetName'."
return
}

$subnet = $null
for ($nn = 0; $nn -lt $vnet.Subnets.Count; $nn++) {
$subnet = $vnet.Subnets[$nn]
if ($subnet.Name -eq $SubnetName) { break }
$subnet = $null
}
if ($null -eq $subnet) {
Write-Host "Caution: No subnet found by the name '$SubnetName'"
Return
}

### 3. Is your subnet tagged as 'Microsoft.Sql' endpoint server type?


$endpointMsSql = $null
for ($nn = 0; $nn -lt $subnet.ServiceEndpoints.Count; $nn++) {
$endpointMsSql = $subnet.ServiceEndpoints[$nn]
if ($endpointMsSql.Service -eq $ServiceEndpointTypeName_SqlDb) {
$endpointMsSql
break
}
$endpointMsSql = $null
}

if ($null -ne $endpointMsSql) {
    Write-Host "Good: Subnet found, and is already tagged as an endpoint of type '$ServiceEndpointTypeName_SqlDb'."
    return
} else {
    Write-Host "Caution: Subnet found, but not yet tagged as an endpoint of type '$ServiceEndpointTypeName_SqlDb'."

    # Ask the user for confirmation.
    $yesno = Read-Host 'Do you want the PS script to apply the endpoint type name to your subnet? [yes/no]'
    if ('no' -eq $yesno) { return }
}

### 4. Add a Virtual Service endpoint of type name 'Microsoft.Sql', on your subnet.
$setParams = @{
Name = $SubnetName
AddressPrefix = $SubnetAddressPrefix
VirtualNetwork = $vnet
ServiceEndpoint = $ServiceEndpointTypeName_SqlDb
}
$vnet = Set-AzVirtualNetworkSubnetConfig @setParams

# Persist the subnet update.


$vnet = Set-AzVirtualNetwork -VirtualNetwork $vnet

for ($nn = 0; $nn -lt $vnet.Subnets.Count; $nn++) {
    $vnet.Subnets[$nn].ServiceEndpoints # Display.
}
Manage Azure SQL Database long-term backup
retention
7/12/2022 • 9 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


With Azure SQL Database, you can set a long-term backup retention policy (LTR) to automatically retain backups
in separate Azure Blob storage containers for up to 10 years. You can then recover a database using these
backups using the Azure portal, Azure CLI, or PowerShell. Long-term retention policies are also supported for
Azure SQL Managed Instance.

Prerequisites
Portal
Azure CLI
PowerShell

An active Azure subscription.

Create long-term retention policies


Portal
Azure CLI
PowerShell

You can configure SQL Database to retain automated backups for a period longer than the retention period for
your service tier.
1. In the Azure portal, navigate to your server and then select Backups . Select the Retention policies tab
to modify your backup retention settings.

2. On the Retention policies tab, select the database(s) on which you want to set or modify long-term
backup retention policies. Unselected databases will not be affected.
3. In the Configure policies pane, specify your desired retention period for weekly, monthly, or yearly
backups. Choose a retention period of '0' to indicate that no long-term backup retention should be set.

4. Select Apply to apply the chosen retention settings to all selected databases.

IMPORTANT
When you enable a long-term backup retention policy, it may take up to 7 days for the first backup to become visible and
available to restore. For details of the LTR backup cadence, see long-term backup retention.
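On the PowerShell tab, a minimal sketch of the same policy (placeholder names; the retention values are illustrative ISO 8601 durations) looks like this:

# Placeholders; replace with your own values.
# Keep weekly backups for 12 weeks, monthly backups for 12 months, and the week-16 backup of each year for 5 years.
Set-AzSqlDatabaseBackupLongTermRetentionPolicy -ResourceGroupName 'myResourceGroup' `
    -ServerName 'myserver' -DatabaseName 'mydatabase' `
    -WeeklyRetention 'P12W' -MonthlyRetention 'P12M' -YearlyRetention 'P5Y' -WeekOfYear 16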
View backups and restore from a backup
View the backups that are retained for a specific database with an LTR policy, and restore from those backups.
Portal
Azure CLI
PowerShell

1. In the Azure portal, navigate to your server and then select Backups . To view the available LTR backups
for a specific database, select Manage under the Available LTR backups column. A pane will appear with
a list of the available LTR backups for the selected database.

2. In the Available LTR backups pane that appears, review the available backups. You may select a backup
to restore from or to delete.

3. To restore from an available LTR backup, select the backup from which you want to restore, and then
select Restore .
4. Choose a name for your new database, then select Review + Create to review the details of your
Restore. Select Create to restore your database from the chosen backup.

5. On the toolbar, select the notification icon to view the status of the restore job.

6. When the restore job is completed, open the SQL databases page to view the newly restored database.

NOTE
From here, you can connect to the restored database using SQL Server Management Studio to perform needed tasks,
such as to extract a bit of data from the restored database to copy into the existing database or to delete the existing
database and rename the restored database to the existing database name.
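On the PowerShell tab, a rough sketch of listing the LTR backups for a database and restoring the most recent one into a new database (all names are placeholders):

# Placeholders; replace with your own values.
# List the LTR backups available for the database in its region.
$backups = Get-AzSqlDatabaseLongTermRetentionBackup -Location 'westus2' `
    -ServerName 'myserver' -DatabaseName 'mydatabase'

# Restore the most recent LTR backup into a new database on the same server.
$latest = $backups | Sort-Object BackupTime -Descending | Select-Object -First 1
Restore-AzSqlDatabase -FromLongTermRetentionBackup -ResourceId $latest.ResourceId `
    -ResourceGroupName 'myResourceGroup' -ServerName 'myserver' `
    -TargetDatabaseName 'mydatabase_restored'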

Limitations
When restoring from an LTR backup, the read scale property is disabled. To enable read scale on the restored
database, update the database after it has been created.
You need to specify the target service level objective when restoring from an LTR backup that was created
when the database was in an elastic pool.

Next steps
To learn about service-generated automatic backups, see automatic backups
To learn about long-term backup retention, see long-term backup retention
Configure an auto-failover group for Azure SQL
Database
7/12/2022 • 12 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This topic teaches you how to configure an auto-failover group for single and pooled databases in Azure SQL
Database by using the Azure portal and Azure PowerShell. For an end-to-end experience, review the Auto-
failover group tutorial.

NOTE
This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see Configure auto-
failover groups in Azure SQL Managed Instance.

Prerequisites
Consider the following prerequisites for creating your failover group for a single database:
The server login and firewall settings for the secondary server must match that of your primary server.

Create failover group


Portal
PowerShell

Create your failover group and add your single database to it using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to favorite
it and add it as an item in the left-hand navigation.
2. Select the database you want to add to the failover group.
3. Select the name of the server under Server name to open the settings for the server.

4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.
5. On the Failover Group page, enter or select the required values, and then select Create .
Databases within the group : Choose the database you want to add to your failover group. Adding
the database to the failover group will automatically start the geo-replication process.
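On the PowerShell tab, the equivalent steps are roughly the following sketch (all names are placeholders, and the secondary server is assumed to already exist with matching logins and firewall settings):

# Placeholders; replace with your own values.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName 'myResourceGroup' -ServerName 'primaryserver' `
    -PartnerServerName 'secondaryserver' -FailoverGroupName 'myfailovergroup' `
    -FailoverPolicy Automatic

# Add the database to the failover group; this starts geo-replication to the secondary server.
$database = Get-AzSqlDatabase -ResourceGroupName 'myResourceGroup' -ServerName 'primaryserver' -DatabaseName 'mydatabase'
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName 'myResourceGroup' -ServerName 'primaryserver' `
    -FailoverGroupName 'myfailovergroup' -Database $database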

Test failover
Test failover of your failover group using the Azure portal or PowerShell.

Portal
PowerShell

Test failover of your failover group using the Azure portal.


1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, then type "Azure SQL" in the search box. (Optional) Select the star next to Azure SQL to
favorite it and add it as an item in the left-hand navigation.
2. Select the database you want to add to the failover group.

3. Select Failover groups under the Settings pane and then choose the failover group you just created.
4. Review which server is primary and which server is secondary.
5. Select Failover from the task pane to fail over your failover group containing your database.
6. Select Yes on the warning that notifies you that TDS sessions will be disconnected.

7. Review which server is now primary and which server is secondary. If failover succeeded, the two servers
should have swapped roles.
8. Select Failover again to fail the servers back to their original roles.
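The PowerShell equivalent of this planned failover is roughly the following sketch (placeholder names; run it against the server that should become the new primary):

# Placeholders; replace with your own values.
# Initiate a planned failover to the secondary server.
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName 'myResourceGroup' `
    -ServerName 'secondaryserver' -FailoverGroupName 'myfailovergroup'

# Confirm the new roles by checking the replication role on that server.
(Get-AzSqlDatabaseFailoverGroup -ResourceGroupName 'myResourceGroup' `
    -ServerName 'secondaryserver' -FailoverGroupName 'myfailovergroup').ReplicationRole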
IMPORTANT
If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary
database before it is removed from the failover group can cause unpredictable behavior.

Prerequisites
Consider the following prerequisites for creating your failover group for a pooled database:
The server login and firewall settings for the secondary server must match that of your primary server.

Create failover group


Create the failover group for your elastic pool using the Azure portal or PowerShell.

Portal
PowerShell

Create your failover group and add your elastic pool to it using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, then type "Azure SQL" in the search box. (Optional) Select the star next to Azure SQL to
favorite it and add it as an item in the left-hand navigation.
2. Select the elastic pool you want to add to the failover group.
3. On the Overview pane, select the name of the server under Server name to open the settings for the
server.

4. Select Failover groups under the Settings pane, and then select Add group to create a new failover
group.

5. On the Failover Group page, enter or select the required values, and then select Create . Either create a
new secondary server, or select an existing secondary server.
6. Select Databases within the group then choose the elastic pool you want to add to the failover group.
If an elastic pool does not already exist on the secondary server, a warning appears prompting you to
create an elastic pool on the secondary server. Select the warning, and then select OK to create the elastic
pool on the secondary server.

7. Select Select to apply your elastic pool settings to the failover group, and then select Create to create
your failover group. Adding the elastic pool to the failover group will automatically start the geo-
replication process.

Test failover
Test failover of your elastic pool using the Azure portal or PowerShell.

Portal
PowerShell

Fail your failover group over to the secondary server, and then fail back using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, then type "Azure SQL" in the search box. (Optional) Select the star next to Azure SQL to
favorite it and add it as an item in the left-hand navigation.
2. Select the elastic pool you want to add to the failover group.
3. On the Overview pane, select the name of the server under Server name to open the settings for the
server.

4. Select Failover groups under the Settings pane and then choose the failover group you created in
section 2.
5. Review which server is primary, and which server is secondary.
6. Select Failover from the task pane to fail over your failover group containing your elastic pool.
7. Select Yes on the warning that notifies you that TDS sessions will be disconnected.

8. Review which server is primary, which server is secondary. If failover succeeded, the two servers should
have swapped roles.
9. Select Failover again to fail the failover group back to the original settings.
IMPORTANT
If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary
database before it is removed from the failover group can cause unpredictable behavior.

Use Private Link


Using a private link allows you to associate a logical server to a specific private IP address within the virtual
network and subnet.
To use a private link with your failover group, do the following:
1. Ensure your primary and secondary servers are in a paired region.
2. Create the virtual network and subnet in each region to host private endpoints for the primary and secondary
servers such that they have non-overlapping IP address spaces. For example, a primary virtual network
address range of 10.0.0.0/16 and a secondary virtual network address range of 10.0.0.1/16 would overlap. For
more information about virtual network address ranges, see the blog designing Azure virtual networks.
3. Create a private endpoint and Azure Private DNS zone for the primary server.
4. Create a private endpoint for the secondary server as well, but this time choose to reuse the same Private
DNS zone that was created for the primary server.
5. Once the private link is established, you can create the failover group following the steps outlined previously
in this article.

Locate listener endpoint


Once your failover group is configured, update the connection string for your application to the listener
endpoint. This will keep your application connected to the failover group listener, rather than the primary
database, elastic pool, or instance database. That way, you don't have to manually update the connection string
every time your database entity fails over, and traffic is routed to whichever entity is currently primary.
The listener endpoint is in the form of fog-name.database.windows.net , and is visible in the Azure portal, when
viewing the failover group:
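As an illustrative sketch (failover group and database names are placeholders), connection strings built on the listener endpoints look like this; the read-write listener always routes to the current primary, and the secondary listener routes read-only traffic to the current secondary:

# Placeholders; replace with your own values.
$fogName = 'myfailovergroup'

# Read-write listener: always points to the current primary.
$readWriteConnection = "Server=tcp:$fogName.database.windows.net,1433;Initial Catalog=mydatabase;"

# Read-only listener: routes read-only traffic to the current secondary.
$readOnlyConnection = "Server=tcp:$fogName.secondary.database.windows.net,1433;Initial Catalog=mydatabase;ApplicationIntent=ReadOnly;"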
Change the secondary region
To illustrate the change sequence, we will assume that server A is the primary server, server B is the existing
secondary server, and server C is the new secondary in the third region. To make the transition, follow these
steps:
1. Create additional secondaries of each database on server A to server C using active geo-replication. Each
database on server A will have two secondaries, one on server B and one on server C. This will guarantee
that the primary databases remain protected during the transition.
2. Delete the failover group. At this point, login attempts using failover group endpoints will fail.
3. Re-create the failover group with the same name between servers A and C.
4. Add all primary databases on server A to the new failover group. At this point the login attempts will stop
failing.
5. Delete server B. All databases on B will be deleted automatically.

Change the primary region


To illustrate the change sequence, we will assume server A is the primary server, server B is the existing
secondary server, and server C is the new primary in the third region. To make the transition, follow these steps:
1. Perform a planned geo-failover to switch the primary server to B. Server A will become the new secondary
server. The failover may result in several minutes of downtime. The actual time will depend on the size of
failover group.
2. Create additional secondaries of each database on server B to server C using active geo-replication. Each
database on server B will have two secondaries, one on server A and one on server C. This will guarantee
that the primary databases remain protected during the transition.
3. Delete the failover group. At this point, login attempts using failover group endpoints will fail.
4. Re-create the failover group with the same name between servers B and C.
5. Add all primary databases on B to the new failover group. At this point the login attempts will stop failing.
6. Perform a planned geo-failover of the failover group to switch B and C. Now server C will become the
primary and B the secondary. All secondary databases on server A will be automatically linked to the
primaries on C. As in step 1, the failover may result in several minutes of downtime.
7. Delete server A. All databases on A will be deleted automatically.
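For the planned geo-failovers in steps 1 and 6, a minimal Azure CLI sketch (names hypothetical); the command is run against the server that should become the new primary:

# Step 1: make server B the primary of failover group myfog.
az sql failover-group set-primary --name myfog --resource-group myRG --server server-b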

IMPORTANT
When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a
non-zero probability of somebody else creating a failover group or a server DNS alias with the same name. Because
failover group names and DNS aliases must be globally unique, this will prevent you from using the same name again. To
minimize this risk, don't use generic failover group names.

Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Server Contributor role
has all the necessary permissions to manage failover groups.
The following table lists specific permission scopes for Azure SQL Database:

| Action | Permission | Scope |
| --- | --- | --- |
| Create failover group | Azure RBAC write access | Primary server, secondary server, and all databases in the failover group |
| Update failover group | Azure RBAC write access | Failover group and all databases on the current primary server |
| Fail over failover group | Azure RBAC write access | Failover group on the new server |

Remarks
Removing a failover group for a single or pooled database does not stop replication, and it does not delete
the replicated database. If you want to add a single or pooled database back to a failover group after it's been
removed, you need to manually stop geo-replication and delete the database from the secondary server first.
Failing to do either may result in an error similar to "The operation cannot be performed due to multiple
errors" when attempting to add the database to the failover group.
Auto-failover group name is subject to naming restrictions.

Next steps
For detailed steps configuring a failover group, see the following tutorials:
Add a single database to a failover group
Add an elastic pool to a failover group
Add a managed instance to a failover group
For an overview of Azure SQL Database high availability options, see geo-replication and auto-failover groups.
Tutorial: Configure active geo-replication and
failover (Azure SQL Database)
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


This article shows you how to configure active geo-replication for Azure SQL Database using the Azure portal or
Azure CLI and to initiate failover.
For best practices using auto-failover groups, see Auto-failover groups with Azure SQL Database and Auto-
failover groups with Azure SQL Managed Instance.

Prerequisites
Portal
Azure CLI

To configure active geo-replication by using the Azure portal, you need the following resource:
A database in Azure SQL Database: The primary database that you want to replicate to a different
geographical region.

NOTE
When using the Azure portal, you can only create a secondary database within the same subscription as the primary. If the
secondary database is required to be in a different subscription, use the Create Database REST API or the ALTER DATABASE
Transact-SQL statement.

Add a secondary database


The following steps create a new secondary database in a geo-replication partnership.
To add a secondary database, you must be the subscription owner or co-owner.
The secondary database has the same name as the primary database and has, by default, the same service tier
and compute size. The secondary database can be a single database or a pooled database. For more
information, see DTU-based purchasing model and vCore-based purchasing model. After the secondary is
created and seeded, data begins replicating from the primary database to the new secondary database.

NOTE
If the partner database already exists (for example, as a result of terminating a previous geo-replication relationship), the
command fails.

Portal
Azure CLI

1. In the Azure portal, browse to the database that you want to set up for geo-replication.
2. On the SQL Database page, select your database, scroll to Data management , select Replicas , and then
select Create replica .

3. Select or create the server for the secondary database, and configure the Compute + storage options if
necessary. You can select any region for your secondary server, but we recommend the paired region.

Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a
pool, select Yes next to Want to use SQL elastic pool? and select a pool on the target server. A pool
must already exist on the target server. This workflow doesn't create a pool.
4. Click Review + create , review the information, and then click Create .
5. The secondary database is created and the deployment process begins.
6. When the deployment is complete, the secondary database displays its status.

7. Return to the primary database page, and then select Replicas . Your secondary database is listed under
Geo replicas .
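If you prefer scripting, a minimal Azure CLI sketch that creates the geo-secondary; resource group, server, and database names are hypothetical:

az sql db replica create \
  --resource-group primary-rg \
  --server primary-server \
  --name mydb \
  --partner-resource-group secondary-rg \
  --partner-server secondary-server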

Initiate a failover
The secondary database can be switched to become the primary.

Portal
Azure CLI

1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Scroll to Data management , and then select Replicas .
3. In the Geo replicas list, select the database you want to become the new primary, select the ellipsis, and
then select Forced failover .

4. Select Yes to begin the failover.


The command immediately switches the secondary database into the primary role. This process normally
should complete within 30 seconds or less.
There's a short period during which both databases are unavailable, on the order of 0 to 25 seconds, while the
roles are switched. If the primary database has multiple secondary databases, the command automatically
reconfigures the other secondaries to connect to the new primary. The entire operation should take less than a
minute to complete under normal circumstances.
NOTE
This command is designed for quick recovery of the database in case of an outage. It triggers a failover without data
synchronization, also known as a forced failover. If the primary is online and committing transactions when the command is issued,
some data loss may occur.
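The equivalent Azure CLI sketch, run against the secondary server (names hypothetical):

# Planned failover (no data loss).
az sql db replica set-primary --resource-group secondary-rg --server secondary-server --name mydb

# Forced failover (may lose data), matching the portal's Forced failover option.
az sql db replica set-primary --resource-group secondary-rg --server secondary-server --name mydb --allow-data-loss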

Remove secondary database


This operation permanently stops the replication to the secondary database, and changes the role of the
secondary to a regular read-write database. If the connectivity to the secondary database is broken, the
command succeeds but the secondary doesn't become read-write until after connectivity is restored.

Portal
Azure CLI

1. In the Azure portal, browse to the primary database in the geo-replication partnership.
2. Select Replicas .
3. In the Geo replicas list, select the database you want to remove from the geo-replication partnership,
select the ellipsis, and then select Stop replication .

4. A confirmation window opens. Click Yes to remove the database from the geo-replication partnership.
(Set it to a read-write database not part of any replication.)
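A minimal Azure CLI sketch of the same operation, run against the primary server (names hypothetical):

az sql db replica delete-link \
  --resource-group primary-rg \
  --server primary-server \
  --name mydb \
  --partner-server secondary-server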

Next steps
To learn more about active geo-replication, see active geo-replication.
To learn about auto-failover groups, see Auto-failover groups
For a business continuity overview and scenarios, see Business continuity overview.
Configure and manage Azure SQL Database
security for geo-restore or failover
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database


This article describes the authentication requirements to configure and control active geo-replication and auto-
failover groups. It also provides the steps required to set up user access to the secondary database. Finally, it
describes how to enable access to the recovered database after using geo-restore. For more information on
recovery options, see Business Continuity Overview.

Disaster recovery with contained users


Unlike traditional users, which must be mapped to logins in the master database, a contained user is managed
completely by the database itself. This has two benefits. In the disaster recovery scenario, the users can continue
to connect to the new primary database or the database recovered using geo-restore without any additional
configuration, because the database manages the users. There are also potential scalability and performance
benefits from this configuration from a login perspective. For more information, see Contained Database Users -
Making Your Database Portable.
The main trade-off is that managing the disaster recovery process at scale is more challenging. When you have
multiple databases that use the same login, maintaining the credentials using contained users in multiple
databases may negate the benefits of contained users. For example, the password rotation policy requires that
changes be made consistently in multiple databases rather than changing the password for the login once in the
master database. For this reason, if you have multiple databases that use the same user name and password,
using contained users is not recommended.

How to configure logins and users


If you are using logins and users (rather than contained users), you must take extra steps to ensure that the
same logins exist in the master database. The following sections outline the steps involved and additional
considerations.

NOTE
It is also possible to use Azure Active Directory (AAD) logins to manage your databases. For more information, see Azure
SQL logins and users.

Set up user access to a secondary or recovered database


In order for the secondary database to be usable as a read-only secondary database, and to ensure proper
access to the new primary database or the database recovered using geo-restore, the master database of the
target server must have the appropriate security configuration in place before the recovery.
The specific permissions for each step are described later in this topic.
Preparing user access to a geo-replication secondary should be performed as part of configuring geo-replication.
Preparing user access to the geo-restored databases should be performed at any time when the original server
is online (e.g. as part of the DR drill).
NOTE
If you fail over or geo-restore to a server that does not have properly configured logins, access to it will be limited to the
server admin account.

Setting up logins on the target server involves three steps outlined below:
1. Determine logins with access to the primary database
The first step of the process is to determine which logins must be duplicated on the target server. This is
accomplished with a pair of SELECT statements, one in the logical master database on the source server and one
in the primary database itself.
Only the server admin or a member of the LoginManager server role can determine the logins on the source
server with the following SELECT statement.

SELECT [name], [sid]
FROM [sys].[sql_logins]
WHERE [type_desc] = 'SQL_Login';

Only a member of the db_owner database role, the dbo user, or server admin, can determine all of the database
user principals in the primary database.

SELECT [name], [sid]
FROM [sys].[database_principals]
WHERE [type_desc] = 'SQL_USER';

2. Find the SID for the logins identified in step 1


By comparing the output of the queries from the previous section and matching the SIDs, you can map the
server login to database user. Logins that have a database user with a matching SID have user access to that
database as that database user principal.
The following query can be used to see all of the user principals and their SIDs in a database. Only a member of
the db_owner database role or server admin can run this query.

SELECT [name], [sid]
FROM [sys].[database_principals]
WHERE [type_desc] = 'SQL_USER';

NOTE
The INFORMATION_SCHEMA and sys users have NULL SIDs, and the guest SID is 0x00. The dbo SID may start with
0x01060000000001648000000000048454 if the database creator was the server admin instead of a member of
DbManager.

3. Create the logins on the target server


The last step is to go to the target server, or servers, and generate the logins with the appropriate SIDs. The basic
syntax is as follows.

CREATE LOGIN [<login name>]
WITH PASSWORD = '<login password>',
SID = 0x1234; /* replace 0x1234 with the desired login SID */
NOTE
If you want to grant user access to the secondary, but not to the primary, you can do that by altering the user login on
the primary server by using the following syntax.

ALTER LOGIN [<login name>] DISABLE

DISABLE doesn’t change the password, so you can always enable it if needed.
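To avoid copying SIDs by hand, the following T-SQL sketch, run in the source server's master database, generates CREATE LOGIN statements that preserve each login's SID; you still need to substitute real passwords before running the output on the target server:

SELECT 'CREATE LOGIN ' + QUOTENAME([name])
     + ' WITH PASSWORD = ''<strong password>'', SID = '
     + CONVERT(varchar(514), [sid], 1) + ';'
FROM [sys].[sql_logins]
WHERE [type_desc] = 'SQL_Login';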

Next steps
For more information on managing database access and logins, see SQL Database security: Manage
database access and login security.
For more information on contained database users, see Contained Database Users - Making Your Database
Portable.
To learn about active geo-replication, see Active geo-replication.
To learn about auto-failover groups, see Auto-failover groups.
For information about using geo-restore, see geo-restore.
Query Performance Insight for Azure SQL Database
7/12/2022 • 10 minutes to read

APPLIES TO: Azure SQL Database


Query Performance Insight provides intelligent query analysis for single and pooled databases. It helps identify
the top resource consuming and long-running queries in your workload. This helps you find the queries to
optimize to improve overall workload performance and efficiently use the resource that you are paying for.
Query Performance Insight helps you spend less time troubleshooting database performance by providing:
Deeper insight into your database's resource (DTU) consumption
Details on top database queries by CPU, duration, and execution count (potential tuning candidates for
performance improvements)
The ability to drill down into details of a query, to view the query text and history of resource utilization
Annotations that show performance recommendations from database advisors

Prerequisites
Query Performance Insight requires that Query Store is active on your database. It's automatically enabled for
all databases in Azure SQL Database by default. If Query Store is not running, the Azure portal will prompt you
to enable it.

NOTE
If the "Query Store is not properly configured on this database" message appears in the portal, see Optimizing the Query
Store configuration.

Permissions
You need the following Azure role-based access control (Azure RBAC) permissions to use Query Performance
Insight:
Reader, Owner, Contributor, SQL DB Contributor, or SQL Server Contributor permissions are
required to view the top resource-consuming queries and charts.
Owner, Contributor, SQL DB Contributor, or SQL Server Contributor permissions are required to
view query text.

Use Query Performance Insight


Query Performance Insight is easy to use:
1. Open the Azure portal and find a database that you want to examine.
2. From the left-side menu, open Intelligent Performance > Query Performance Insight.

3. On the first tab, review the list of top resource-consuming queries.


4. Select an individual query to view its details.
5. Open Intelligent Performance > Performance recommendations and check if any performance
recommendations are available. For more information on built-in performance recommendations, see
Azure SQL Database Advisor.
6. Use sliders or zoom icons to change the observed interval.
NOTE
For Azure SQL Database to render the information in Query Performance Insight, Query Store needs to capture a couple
hours of data. If the database has no activity or if Query Store was not active during a certain period, the charts will be
empty when Query Performance Insight displays that time range. You can enable Query Store at any time if it's not
running. For more information, see Best practices with Query Store.

For database performance recommendations, select Recommendations on the Query Performance Insight
navigation blade.

Review top CPU-consuming queries


By default, Query Performance Insight shows the top five CPU-consuming queries when you first open it.
1. Select or clear individual queries to include or exclude them from the chart by using check boxes.
The top line shows overall DTU percentage for the database. The bars show CPU percentage that the
selected queries consumed during the selected interval. For example, if Past week is selected, each bar
represents a single day.
IMPORTANT
The DTU line shown is aggregated to a maximum consumption value in one-hour periods. It's meant for a high-
level comparison only with query execution statistics. In some cases, DTU utilization might seem too high
compared to executed queries, but this might not be the case.
For example, if a query maxed out DTU to 100% for a few minutes only, the DTU line in Query Performance
Insight will show the entire hour of consumption as 100% (the consequence of the maximum aggregated value).
For a finer comparison (up to one minute), consider creating a custom DTU utilization chart:
1. In the Azure portal, select Azure SQL Database > Monitoring .
2. Select Metrics .
3. Select +Add chart.
4. Select the DTU percentage on the chart.
5. In addition, select Last 24 hours on the upper-left menu and change it to one minute.
Use the custom DTU chart with a finer level of details to compare with the query execution chart.

The bottom grid shows aggregated information for the visible queries:
Query ID, which is a unique identifier for the query in the database.
CPU per query during an observable interval, which depends on the aggregation function.
Duration per query, which also depends on the aggregation function.
Total number of executions for a specific query.
2. If your data becomes stale, select the Refresh button.
3. Use sliders and zoom buttons to change the observation interval and investigate consumption spikes:

4. Optionally, you can select the Custom tab to customize the view for:
Metric (CPU, duration, execution count).
Time interval (last 24 hours, past week, or past month).
Number of queries.
Aggregation function.

5. Select the Go > button to see the customized view.


IMPORTANT
Query Performance Insight is limited to displaying the top 5-20 consuming queries, depending on your selection.
Your database can run many more queries beyond the top ones shown, and these queries will not be included on
the chart.
In some workloads, many smaller queries beyond the top ones shown run frequently and consume the
majority of DTU. These queries don't appear on the performance chart.
For example, a query might have consumed a substantial amount of DTU for a while, yet its total
consumption in the observed period is less than that of the other top-consuming queries. In such a case, the
resource utilization of this query would not appear on the chart.
If you need to understand top query executions beyond the limitations of Query Performance Insight, consider
using Azure SQL Insights for advanced database performance monitoring and troubleshooting.

View individual query details


To view query details:
1. Select any query in the list of top queries.

A detailed view opens. It shows the CPU consumption, duration, and execution count over time.
2. Select the chart features for details.
The top chart shows a line with the overall database DTU percentage. The bars are the CPU percentage
that the selected query consumed.
The second chart shows the total duration of the selected query.
The bottom chart shows the total number of executions by the selected query.
3. Optionally, use sliders, use zoom buttons, or select Settings to customize how query data is displayed, or
to pick a different time range.

IMPORTANT
Query Performance Insight does not capture any DDL queries. In some cases, it might not capture all ad hoc
queries.
If your database has a read-only scope lock applied, the query details blade can't load.

Review top queries per duration


Two metrics in Query Performance Insight can help you find potential bottlenecks: duration and execution count.
Long-running queries have the greatest potential for locking resources longer, blocking other users, and limiting
scalability. They're also the best candidates for optimization. For more information, see Understand and resolve
Azure SQL blocking problems.
To identify long-running queries:
1. Open the Custom tab in Query Performance Insight for the selected database.
2. Change the metric to duration.
3. Select the number of queries and the observation interval.
4. Select the aggregation function:
Sum adds up all query execution time for the whole observation interval.
Max finds queries in which execution time was maximum for the whole observation interval.
Avg finds the average execution time of all query executions and shows you the top ones for these
averages.

5. Select the Go > button to see the customized view.

IMPORTANT
Adjusting the query view does not update the DTU line. The DTU line always shows the maximum consumption
value for the interval.
To understand database DTU consumption with more detail (up to one minute), consider creating a custom chart
in the Azure portal:
1. Select Azure SQL Database > Monitoring .
2. Select Metrics .
3. Select +Add chart.
4. Select the DTU percentage on the chart.
5. In addition, select Last 24 hours on the upper-left menu and change it to one minute.
We recommend that you use the custom DTU chart to compare with the query performance chart.
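As an alternative to building the custom chart in the portal, a minimal Azure CLI sketch that pulls per-minute DTU consumption for a database (the resource ID is a placeholder, and the metric name dtu_consumption_percent is assumed):

az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Sql/servers/myserver/databases/mydb" \
  --metric dtu_consumption_percent \
  --interval PT1M \
  --aggregation Maximum \
  --output table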

Review top queries per execution count


An application that uses the database might become slow even when the database itself is not under pressure: a
high number of executions can slow the application while database resource usage remains low.
In some cases, a high execution count can lead to more network round trips. Round trips affect performance.
They're subject to network latency and to downstream server latency.
For example, many data-driven websites heavily access the database for every user request. Although
connection pooling helps, the increased network traffic and processing load on the server can slow
performance. In general, keep round trips to a minimum.
To identify frequently executed ("chatty") queries:
1. Open the Custom tab in Query Performance Insight for the selected database.
2. Change the metric to execution count.
3. Select the number of queries and the observation interval.
4. Select the Go > button to see the customized view.

Understand performance tuning annotations


While exploring your workload in Query Performance Insight, you might notice icons with a vertical line on top
of the chart.
These icons are annotations. They show performance recommendations from Azure SQL Database Advisor. By
hovering over an annotation, you can get summarized information on performance recommendations.
If you want to understand more or apply the advisor's recommendation, select the icon to open details of the
recommended action. If this is an active recommendation, you can apply it right away from the portal.

In some cases, due to the zoom level, it's possible that annotations close to each other are collapsed into a single
annotation. Query Performance Insight represents this as a group annotation icon. Selecting the group
annotation icon opens a new blade that lists the annotations.
Correlating queries and performance-tuning actions might help you to better understand your workload.

Optimize the Query Store configuration


While using Query Performance Insight, you might see the following Query Store error messages:
"Query Store is not properly configured on this database. Click here to learn more."
"Query Store is not properly configured on this database. Click here to change settings."
These messages usually appear when Query Store can't collect new data.
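To see which case applies, you can check the Query Store state directly in the database. A minimal T-SQL sketch:

SELECT actual_state_desc,
       desired_state_desc,
       readonly_reason,
       current_storage_size_mb,
       max_storage_size_mb
FROM sys.database_query_store_options;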
The first case happens when Query Store is in the read-only state and parameters are set optimally. You can fix
this by increasing the size of the data store, or by clearing Query Store. (If you clear Query Store, all previously
collected telemetry will be lost.)
The second case happens when Query Store is not enabled, or parameters are not set optimally. You can change
the retention and capture policy, and also enable Query Store, by running the following commands provided
from SQL Server Management Studio (SSMS) or the Azure portal.
Recommended retention and capture policy
There are two types of retention policies:
Size based: If this policy is set to AUTO, it cleans data automatically when the maximum size is nearly reached.
Time based: By default, this policy is set to 30 days. If Query Store runs out of space, it deletes query
information older than 30 days.
You can set the capture policy to:
All : Query Store captures all queries.
Auto : Query Store ignores infrequent queries and queries with insignificant compile and execution duration.
Thresholds for execution count, compile duration, and runtime duration are internally determined. This is the
default option.
None : Query Store stops capturing new queries, but runtime statistics for already captured queries are still
collected.
We recommend setting all policies to AUTO and the cleaning policy to 30 days by executing the following
commands from SSMS or the Azure portal. (Replace YourDB with the database name.)

ALTER DATABASE [YourDB]
SET QUERY_STORE (SIZE_BASED_CLEANUP_MODE = AUTO);

ALTER DATABASE [YourDB]
SET QUERY_STORE (CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30));

ALTER DATABASE [YourDB]
SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);

Increase the size of Query Store by connecting to a database through SSMS or the Azure portal and running the
following query. (Replace YourDB with the database name.)
ALTER DATABASE [YourDB]
SET QUERY_STORE (MAX_STORAGE_SIZE_MB = 1024);

Applying these settings will eventually make Query Store collect telemetry for new queries. If you need Query
Store to be operational right away, you can optionally choose to clear Query Store by running the following
query through SSMS or the Azure portal. (Replace YourDB with the database name.)

NOTE
Running the following query will delete all previously collected monitoring telemetry in Query Store.

ALTER DATABASE [YourDB] SET QUERY_STORE CLEAR;

Next steps
Consider using Azure SQL Analytics for advanced performance monitoring of a large fleet of single and pooled
databases, elastic pools, managed instances and instance databases.
Enable automatic tuning in the Azure portal to
monitor queries and improve workload
performance
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database is an automatically managed data service that constantly monitors your queries and identifies
the actions that you can perform to improve the performance of your workload. You can review recommendations
and manually apply them, or let Azure SQL Database automatically apply corrective actions - this is known as
automatic tuning mode.
Automatic tuning can be enabled at the server or the database level through:
The Azure portal
REST API calls
T-SQL commands

NOTE
For Azure SQL Managed Instance, the supported option FORCE_LAST_GOOD_PLAN can only be configured through T-
SQL. The Azure portal based configuration and automatic index tuning options described in this article do not apply to
Azure SQL Managed Instance.

NOTE
Configuring automatic tuning options through the ARM (Azure Resource Manager) template is not supported at this
time.

Enable automatic tuning on server


On the server level you can choose to inherit automatic tuning configuration from "Azure Defaults" or not to
inherit the configuration. Azure defaults are FORCE_LAST_GOOD_PLAN enabled, CREATE_INDEX disabled, and
DROP_INDEX disabled.
Azure portal
To enable automatic tuning on a server in Azure SQL Database, navigate to the server in the Azure portal and
then select Automatic tuning in the menu.
Select the automatic tuning options you want to enable and select Apply .
Automatic tuning options on a server are applied to all databases on this server. By default, all databases inherit
configuration from their parent server, but this can be overridden and specified for each database individually.
REST API
To find out more about using a REST API to enable automatic tuning on a ser ver , see Server automatic tuning
UPDATE and GET HTTP methods.
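As an alternative to the portal, automatic tuning options at the server level can also be set with PowerShell. The following is a hedged sketch assuming the advisor cmdlets from the Az.Sql module; server and resource group names are hypothetical:

# Assumption: Set-AzSqlServerAdvisorAutoExecuteStatus from Az.Sql controls auto-execution per advisor.
Set-AzSqlServerAdvisorAutoExecuteStatus -ResourceGroupName "myRG" -ServerName "myserver" `
    -AdvisorName "ForceLastGoodPlan" -AutoExecuteStatus Enabled
Set-AzSqlServerAdvisorAutoExecuteStatus -ResourceGroupName "myRG" -ServerName "myserver" `
    -AdvisorName "CreateIndex" -AutoExecuteStatus Enabled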

Enable automatic tuning on an individual database


Azure SQL Database enables you to individually specify the automatic tuning configuration for each database.
On the database level you can choose to inherit the automatic tuning configuration from the parent server, use the "Azure
Defaults", or not inherit the configuration. The Azure defaults are FORCE_LAST_GOOD_PLAN enabled,
CREATE_INDEX disabled, and DROP_INDEX disabled.

TIP
The general recommendation is to manage the automatic tuning configuration at ser ver level so the same configuration
settings can be applied on every database automatically. Configure automatic tuning on an individual database only if you
need that database to have different settings than others inheriting settings from the same server.

Azure portal
To enable automatic tuning on a single database , navigate to the database in the Azure portal and select
Automatic tuning .
Individual automatic tuning settings can be separately configured for each database. You can manually configure
an individual automatic tuning option, or specify that an option inherits its settings from the server.
Once you have selected your desired configuration, click Apply .
REST API
To find out more about using a REST API to enable automatic tuning on a single database, see Azure SQL
Database automatic tuning UPDATE and GET HTTP methods.
T -SQL
To enable automatic tuning on a single database via T-SQL, connect to the database and execute the following
query:

ALTER DATABASE current SET AUTOMATIC_TUNING = AUTO | INHERIT | CUSTOM

Setting automatic tuning to AUTO applies the Azure defaults. Setting it to INHERIT inherits the automatic tuning
configuration from the parent server. Setting it to CUSTOM requires you to manually configure
automatic tuning.
To configure individual automatic tuning options via T-SQL, connect to the database and execute the query such
as this one:

ALTER DATABASE current SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = OFF)

Setting an individual tuning option to ON overrides any setting that the database inherited and enables the
tuning option. Setting it to OFF also overrides any setting that the database inherited and disables the tuning
option. An automatic tuning option for which DEFAULT is specified inherits the automatic tuning configuration
from the server-level settings.

IMPORTANT
In the case of active geo-replication, automatic tuning needs to be configured on the primary database only.
Automatically applied tuning actions, such as index create or delete, are automatically replicated to geo-
secondaries. Attempting to enable automatic tuning via T-SQL on a read-only secondary will result in a failure, as
having a different tuning configuration on the read-only secondary is not supported.
To find out more about T-SQL options to configure automatic tuning, see ALTER DATABASE SET Options (Transact-
SQL).

Troubleshooting
Automated recommendation management is disabled
If you see error messages that automated recommendation management has been disabled, or was disabled
by the system, the most common causes are:
Query Store is not enabled, or
Query Store is in read-only mode for a specified database, or
Query Store stopped running because it ran out of allocated storage space.
The following steps can be considered to rectify this issue:
Clean up the Query Store, or modify the data retention period to "auto" by using T-SQL, or increase Query
Store maximum size. See how to configure recommended retention and capture policy for Query Store.
Use SQL Server Management Studio (SSMS) and follow these steps:
Connect to the Azure SQL Database
Right click on the database
Go to Properties and click on Query Store
Change the Operation Mode to Read-Write
Change the Store Capture Mode to Auto
Change the Size Based Cleanup Mode to Auto
Permissions
For Azure SQL Database, managing automatic tuning in the Azure portal, or using PowerShell or the REST API, requires
membership in Azure built-in RBAC roles.
To manage automatic tuning, the minimum required permission to grant to the user is membership in the SQL
Database contributor role. You can also consider using higher privilege roles such as SQL Server Contributor,
Contributor, and Owner.
For permissions required to manage Automatic tuning with T-SQL, see Permissions for ALTER DATABASE .

Configure automatic tuning e-mail notifications


To receive automated email notifications on recommendations made by the automatic tuning, see the automatic
tuning e-mail notifications guide.

Next steps
Read the Automatic tuning article to learn more about automatic tuning and how it can help you improve
your performance.
See Performance recommendations for an overview of Azure SQL Database performance recommendations.
See Query Performance Insights to learn about viewing the performance impact of your top queries.
Email notifications for automatic tuning
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database tuning recommendations are generated by Azure SQL Database automatic tuning. This
solution continuously monitors and analyzes workloads of databases providing customized tuning
recommendations for each individual database related to index creation, index deletion, and optimization of
query execution plans.
Azure SQL Database automatic tuning recommendations can be viewed in the Azure portal, retrieved with REST
API calls, or by using T-SQL and PowerShell commands. This article is based on using a PowerShell script to
retrieve automatic tuning recommendations.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical.

Automate email notifications for automatic tuning recommendations


The following solution automates the sending of email notifications containing automatic tuning
recommendations. It consists of an Azure Automation runbook that executes a PowerShell script to retrieve the
tuning recommendations, and a Microsoft Power Automate flow that schedules the email delivery job.

Create Azure Automation account


To use Azure Automation, the first step is to create an automation account and to configure it with Azure
resources to use for execution of the PowerShell script. To learn more about Azure Automation and its
capabilities, see Getting started with Azure automation.
Follow these steps to create an Azure Automation Account through the method of selecting and configuring an
Automation app from Azure Marketplace:
1. Log into the Azure portal.
2. Click on "+ Create a resource " in the upper left corner.
3. Search for "Automation " (press enter).
4. Click on the Automation app in the search results.
5. Once inside the "Create an Automation Account" pane, click on "Create ".
6. Populate the required information: enter a name for this automation account, select your Azure
subscription ID and Azure resources to be used for the PowerShell script execution.
7. For the "Create Azure Run As account " option, select Yes to configure the type of account under
which PowerShell script runs with the help of Azure Automation. To learn more about account types, see
Run As account.
8. Conclude creation of the automation account by clicking on Create .

TIP
Record your Azure Automation account name, subscription ID, and resources (such as copy-paste to a notepad) exactly as
entered while creating the Automation app. You need this information later.

If you have several Azure subscriptions for which you would like to build the same automation, you need to
repeat this process for your other subscriptions.

Update Azure Automation modules


The PowerShell script to retrieve automatic tuning recommendation uses Get-AzResource and Get-
AzSqlDatabaseRecommendedAction commands for which Azure Module version 4 and above is required.
In case your Azure Modules need updating, see Az module support in Azure Automation.

Create Azure Automation runbook


The next step is to create a Runbook in Azure Automation inside which the PowerShell script for retrieval of
tuning recommendations resides.
Follow these steps to create a new Azure Automation runbook:
1. Access the Azure Automation account you created in the previous step.
2. Once in the automation account pane, click on the "Runbooks " menu item on the left-hand side to create
a new Azure Automation runbook with the PowerShell script. To learn more about creating automation
runbooks, see Create a new runbook.
3. To add a new runbook, click on the "+Add a runbook" menu option, and then click on "Quick
create – Create a new runbook".
4. In the Runbook pane, type in the name of your runbook (for the purpose of this example,
"AutomaticTuningEmailAutomation " is used), select the type of runbook as PowerShell and write a
description of this runbook to describe its purpose.
5. Click on the Create button to finish creating a new runbook.
Follow these steps to load a PowerShell script inside the runbook created:
1. Inside the "Edit PowerShell Runbook " pane, select "RUNBOOKS " on the menu tree and expand the view
until you see the name of your runbook (in this example "AutomaticTuningEmailAutomation "). Select this
runbook.
2. On the first line of the "Edit PowerShell Runbook" (starting with the number 1), copy-paste the following
PowerShell script code. This PowerShell script is provided as-is to get you started. Modify the script to suit
your needs.
In the header of the provided PowerShell script, you need to replace <SUBSCRIPTION_ID_WITH_DATABASES> with
your Azure subscription ID. To learn how to retrieve your Azure subscription ID, see Getting your Azure
Subscription GUID.
In the case of several subscriptions, you can add them as comma-delimited to the "$subscriptions" property in
the header of the script.

# PowerShell script to retrieve Azure SQL Database automatic tuning recommendations.
#
# Provided "as-is" with no implied warranties or support.
# The script is released to the public domain.
#
# Replace <SUBSCRIPTION_ID_WITH_DATABASES> in the header with your Azure subscription ID.
#
# Microsoft Azure SQL Database team, 2018-01-22.

# Set subscriptions : IMPORTANT – REPLACE <SUBSCRIPTION_ID_WITH_DATABASES> WITH YOUR SUBSCRIPTION ID
$subscriptions = ("<SUBSCRIPTION_ID_WITH_DATABASES>", "<SECOND_SUBSCRIPTION_ID_WITH_DATABASES>", "<THIRD_SUBSCRIPTION_ID_WITH_DATABASES>")

# Get credentials
$Conn = Get-AutomationConnection -Name AzureRunAsConnection
Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint

# Define the resource types
$resourceTypes = ("Microsoft.Sql/servers/databases")
$advisors = ("CreateIndex", "DropIndex");
$results = @()

# Loop through all subscriptions
foreach($subscriptionId in $subscriptions) {
    Select-AzSubscription -SubscriptionId $subscriptionId
    $rgs = Get-AzResourceGroup

    # Loop through all resource groups
    foreach($rg in $rgs) {
        $rgname = $rg.ResourceGroupName;

        # Loop through all resource types
        foreach($resourceType in $resourceTypes) {
            $resources = Get-AzResource -ResourceGroupName $rgname -ResourceType $resourceType

            # Loop through all databases
            # Extract resource groups, servers and databases
            foreach ($resource in $resources) {
                $resourceId = $resource.ResourceId
                if ($resourceId -match ".*RESOURCEGROUPS/(?<content>.*)/PROVIDERS.*") {
                    $ResourceGroupName = $matches['content']
                } else {
                    continue
                }
                if ($resourceId -match ".*SERVERS/(?<content>.*)/DATABASES.*") {
                    $ServerName = $matches['content']
                } else {
                    continue
                }
                if ($resourceId -match ".*/DATABASES/(?<content>.*)") {
                    $DatabaseName = $matches['content']
                } else {
                    continue
                }

                # Skip if master
                if ($DatabaseName -eq "master") {
                    continue
                }

                # Loop through all automatic tuning recommendation types
                foreach ($advisor in $advisors) {
                    $recs = Get-AzSqlDatabaseRecommendedAction -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -AdvisorName $advisor
                    foreach ($r in $recs) {
                        if ($r.State.CurrentValue -eq "Active") {
                            $object = New-Object -TypeName PSObject
                            $object | Add-Member -Name 'SubscriptionId' -MemberType Noteproperty -Value $subscriptionId
                            $object | Add-Member -Name 'ResourceGroupName' -MemberType Noteproperty -Value $r.ResourceGroupName
                            $object | Add-Member -Name 'ServerName' -MemberType Noteproperty -Value $r.ServerName
                            $object | Add-Member -Name 'DatabaseName' -MemberType Noteproperty -Value $r.DatabaseName
                            $object | Add-Member -Name 'Script' -MemberType Noteproperty -Value $r.ImplementationDetails.Script
                            $results += $object
                        }
                    }
                }
            }
        }
    }
}

# Format and output results for the email
$table = $results | Format-List
Write-Output $table

Click the "Save " button in the upper right corner to save the script. When you are satisfied with the script, click
the "Publish " button to publish this runbook.
At the main runbook pane, you can choose to click on the "Star t " button to test the script. Click on the
"Output " to view results of the script executed. This output is going to be the content of your email. The sample
output from the script can be seen in the following screenshot.

Ensure to adjust the content by customizing the PowerShell script to your needs.
With the above steps, the PowerShell script to retrieve automatic tuning recommendations is loaded in Azure
Automation. The next step is to automate and schedule the email delivery job.

Automate the email jobs with Microsoft Power Automate


To complete the solution, as the final step, create an automation flow in Microsoft Power Automate consisting of
three actions (jobs):
"Azure Automation - Create job " – used to execute the PowerShell script to retrieve automatic tuning
recommendations inside the Azure Automation runbook.
"Azure Automation - Get job output " – used to retrieve output from the executed PowerShell script.
"Office 365 Outlook – Send an email " – used to send out email. E-mails are sent out using the work or
school account of the individual creating the flow.
To learn more about Microsoft Power Automate capabilities, see Getting started with Microsoft Power Automate.
Prerequisite for this step is to sign up for a Microsoft Power Automate account and to log in. Once inside the
solution, follow these steps to set up a new flow :
1. Access "My flows " menu item.
2. Inside My flows, select the "+Create from blank " link at the top of the page.
3. Click on the link "Search for hundreds of connectors and triggers " at the bottom of the page.
4. In the search field type "recurrence ", and select "Schedule - Recurrence " from the search results to
schedule the email delivery job to run.
5. In the Recurrence pane in the Frequency field, select the scheduling frequency for this flow to execute, such
as send automated email each Minute, Hour, Day, Week, etc.
The next step is to add three jobs (create, get output and send email) to the newly created recurring flow. To
accomplish adding the required jobs to the flow, follow these steps:
1. Create action to execute PowerShell script to retrieve tuning recommendations
Select "+New step ", followed by "Add an action " inside the Recurrence flow pane.
In the search field type "automation " and select "Azure Automation – Create job " from the search
results.
In the Create job pane, configure the job properties. For this configuration, you will need details of
your Azure subscription ID, Resource Group and Automation Account previously recorded at the
Automation Account pane . To learn more about options available in this section, see Azure
Automation - Create Job.
Complete creating this action by clicking on "Save flow ".
2. Create an action to retrieve output from the executed PowerShell script
Select "+New step ", followed by "Add an action " inside the Recurrence flow pane
In the search field type "automation " and select "Azure Automation – Get job output " from the
search results. To learn more about options available in this section, see Azure Automation – Get job
output.
Populate fields required (similar to creating the previous job) - populate your Azure subscription ID,
Resource Group, and Automation Account (as entered in the Automation Account pane).
Click inside the field "Job ID " for the "Dynamic content " menu to show up. From within this menu,
select the option "Job ID ".
Complete creating this action by clicking on "Save flow ".
3. Create an action to send out email using Office 365 integration
Select "+New step ", followed by "Add an action " inside the Recurrence flow pane.
In the search field type "send an email " and select "Office 365 Outlook – Send an email " from
the search results.
In the "To " field type in the email address to which you need to send the notification email.
In the "Subject " field type in the subject of your email, for example "Automatic tuning
recommendations email notification".
Click inside the field "Body " for the "Dynamic content " menu to show up. From within this menu,
under "Get job output ", select "Content ".
Complete creating this action by clicking on "Save flow ".

TIP
To send automated emails to different recipients, create separate flows. In these additional flows, change the recipient
email address in the "To" field, and the email subject line in the "Subject" field. Creating new runbooks in Azure
Automation with customized PowerShell scripts (such as with change of Azure subscription ID) enables further
customization of automated scenarios, such as for example emailing separate recipients on Automated tuning
recommendations for separate subscriptions.

The above concludes steps required to configure the email delivery job workflow. The entire flow consisting of
three actions built is shown in the following image.
To test the flow, click on "Run Now " in the upper right corner inside the flow pane.
Statistics of running the automated jobs, showing success of email notifications sent out, can be seen from the
Flow analytics pane.

The Flow analytics pane is helpful for monitoring the success of job executions, and if required for
troubleshooting. In the case of troubleshooting, you also might want to examine the PowerShell script execution
log accessible through the Azure Automation app.
The final output of the automated email looks similar to the following email received after building and running
this solution:
By adjusting the PowerShell script, you can adjust the output and formatting of the automated email to your
needs.
You might further customize the solution to build email notifications based on a specific tuning event, and to
multiple recipients, for multiple subscriptions or databases, depending on your custom scenarios.

Next steps
To learn more about how automatic tuning can help you improve database performance, see Automatic tuning in
Azure SQL Database.
To enable automatic tuning in Azure SQL Database to manage your workload, see Enable automatic tuning.
To manually review and apply automatic tuning recommendations, see Find and apply performance
recommendations.
Find and apply performance recommendations
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


You can use the Azure portal to find performance recommendations that can optimize performance of your
database in Azure SQL Database or to correct some issue identified in your workload. The Performance
recommendation page in the Azure portal enables you to find the top recommendations based on their
potential impact.

Viewing recommendations
To view and apply performance recommendations, you need the correct Azure role-based access control (Azure
RBAC) permissions in Azure. Reader or SQL DB Contributor permissions are required to view
recommendations, and Owner or SQL DB Contributor permissions are required to execute any actions: create
or drop indexes and cancel index creation.
Use the following steps to find performance recommendations on the Azure portal:
1. Sign in to the Azure portal.
2. Go to All services > SQL databases, and select your database.
3. Navigate to Performance recommendation to view available recommendations for the selected database.
Performance recommendations are shown in the table similar to the one shown on the following figure:

Recommendations are sorted by their potential impact on performance into the following categories:

| Impact | Description |
| --- | --- |
| High | High impact recommendations should provide the most significant performance impact. |
| Medium | Medium impact recommendations should improve performance, but not substantially. |
| Low | Low impact recommendations should provide better performance than without, but improvements might not be significant. |
NOTE
Azure SQL Database needs to monitor activities for at least a day in order to identify some recommendations. Azure
SQL Database can more easily optimize for consistent query patterns than it can for random spotty bursts of activity. If
recommendations are not currently available, the Performance recommendation page provides a message explaining
why.

You can also view the status of the historical operations. Select a recommendation or status to see more
information.
Here is an example of the "Create index" recommendation in the Azure portal.

Applying recommendations
Azure SQL Database gives you full control over how recommendations are enabled using any of the following
three options:
Apply individual recommendations one at a time.
Enable the Automatic tuning to automatically apply recommendations.
To implement a recommendation manually, run the recommended T-SQL script against your database.
Select any recommendation to view its details and then click View script to review the exact details of how the
recommendation is created.
The database remains online while the recommendation is applied; using performance recommendations or
automatic tuning never takes a database offline.
Apply an individual recommendation
You can review and accept recommendations one at a time.
1. On the Recommendations page, select a recommendation.
2. On the Details page, click the Apply button.

Selected recommendations are applied on the database.


Removing recommendations from the list
If your list of recommendations contains items that you want to remove from the list, you can discard the
recommendation:
1. Select a recommendation in the list of Recommendations to open the details.
2. Click Discard on the Details page.
If desired, you can add discarded items back to the Recommendations list:
1. On the Recommendations page, click View discarded .
2. Select a discarded item from the list to view its details.
3. Optionally, click Undo Discard to add the index back to the main list of Recommendations .

NOTE
If SQL Database automatic tuning is enabled and you have manually discarded a recommendation
from the list, that recommendation will never be applied automatically. Discarding a recommendation is a handy way to
keep automatic tuning enabled while ensuring that a specific recommendation isn't applied. You
can revert this behavior by adding discarded recommendations back to the Recommendations list by selecting the Undo
Discard option.

Enable automatic tuning


You can set your database to implement recommendations automatically. As recommendations become
available, they are automatically applied. As with all recommendations managed by the service, if the
performance impact is negative, the recommendation is reverted.
1. On the Recommendations page, click Automate :
2. Select actions to automate:

NOTE
The DROP_INDEX option is currently not compatible with applications using partition switching and index
hints.

Once you have selected your desired configuration, click Apply.


Manually apply recommendations through T -SQL
Select any recommendation and then click View script . Run this script against your database to manually apply
the recommendation.
Indexes that are applied manually are not monitored and validated for performance impact by the service, so it
is suggested that you monitor these indexes after creation to verify that they provide performance gains, and adjust
or delete them if necessary. For details about creating indexes, see CREATE INDEX (Transact-SQL). In addition,
manually applied recommendations remain active and shown in the list of recommendations for 24-48 hours
before the system automatically withdraws them. If you would like to remove a recommendation sooner, you
can manually discard it.
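A hypothetical example of what a generated "Create index" script might look like; the actual index, table, and column names come from the recommendation details in the portal:

-- Hypothetical generated script; use the exact script from the recommendation's View script pane.
CREATE NONCLUSTERED INDEX [nci_wi_SalesOrder_CustomerId]
ON [dbo].[SalesOrder] ([CustomerId])
INCLUDE ([OrderDate], [TotalDue])
WITH (ONLINE = ON);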
Canceling recommendations
Recommendations that are in a Pending , Validating , or Success status can be canceled. Recommendations
with a status of Executing cannot be canceled.
1. Select a recommendation in the Tuning History area to open the recommendations details page.
2. Click Cancel to abort the process of applying the recommendation.

Monitoring operations
Applying a recommendation might not happen instantaneously. The portal provides details regarding the status
of recommendation. The following are possible states that an index can be in:
| Status | Description |
| --- | --- |
| Pending | Apply recommendation command has been received and is scheduled for execution. |
| Executing | The recommendation is being applied. |
| Validating | Recommendation was successfully applied and the service is measuring the benefits. |
| Success | Recommendation was successfully applied and benefits have been measured. |
| Error | An error occurred during the process of applying the recommendation. This can be a transient issue, or possibly a schema change to the table and the script is no longer valid. |
| Reverting | The recommendation was applied, but has been deemed non-performant and is being automatically reverted. |
| Reverted | The recommendation was reverted. |

Click an in-process recommendation from the list to see more information:

Reverting a recommendation
If you used the performance recommendations to apply the recommendation (meaning you did not manually
run the T-SQL script), it automatically reverts the change if it finds the performance impact to be negative. If for
any reason you simply want to revert a recommendation, you can do the following:
1. Select a successfully applied recommendation in the Tuning history area.
2. Click Revert on the recommendation details page.

Monitoring performance impact of index recommendations


After recommendations are successfully implemented (currently, index operations and parameterize queries
recommendations only), you can click Quer y Insights on the recommendation details page to open Query
Performance Insights and see the performance impact of your top queries.
Summary
Azure SQL Database provides recommendations for improving database performance. By providing T-SQL
scripts, you get assistance in optimizing your database and ultimately improving query performance.

Next steps
Monitor your recommendations and continue to apply them to refine performance. Database workloads are
dynamic and change continuously. Azure SQL Database continues to monitor and provide recommendations
that can potentially improve your database's performance.
See Automatic tuning to learn more about the automatic tuning in Azure SQL Database.
See Performance recommendations for an overview of Azure SQL Database performance recommendations.
See Query Performance Insights to learn about viewing the performance impact of your top queries.

Additional resources
Query Store
CREATE INDEX
Azure role-based access control (Azure RBAC)
Create alerts for Azure SQL Database and Azure
Synapse Analytics using the Azure portal
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database Azure Synapse Analytics

Overview
This article shows you how to set up alerts for databases in Azure SQL Database and Azure Synapse Analytics
using the Azure portal. Alerts can send you an email or call a webhook when a metric (for example, database size or CPU usage) reaches a threshold.

NOTE
For Azure SQL Managed Instance specific instructions, see Create alerts for Azure SQL Managed Instance.

You can receive an alert based on monitoring metrics for, or events on, your Azure services.
Metric values - The alert triggers when the value of a specified metric crosses a threshold you assign in
either direction. That is, it triggers both when the condition is first met and then afterwards when that
condition is no longer being met.
Activity log events - An alert can trigger on every event, or, only when a certain number of events occur.
You can configure an alert to do the following when it triggers:
Send email notifications to the service administrator and co-administrators
Send email to additional email addresses that you specify.
Call a webhook
You can configure and get information about alert rules using:
The Azure portal
PowerShell
A command-line interface (CLI)
Azure Monitor REST API

Create an alert rule on a metric with the Azure portal


1. In the portal, locate the resource you are interested in monitoring and select it.
2. Select Alerts in the Monitoring section. The text and icon may vary slightly for different resources.
3. Select the New alert rule button to open the Create rule page.
4. In the Condition section, click Add.
5. In the Configure signal logic page, select a signal.
6. After selecting a signal, such as CPU percentage, the Configure signal logic page appears.
7. On this page, configure the threshold type, operator, aggregation type, threshold value, aggregation granularity, and frequency of evaluation. Then click Done.
8. On the Create rule page, select an existing Action group or create a new group. An action group enables you to define the action to be taken when an alert condition occurs.
9. Define a name for the rule, provide an optional description, choose a severity level for the rule, choose whether to enable the rule upon rule creation, and then click Create alert rule to create the metric alert rule.
Within 10 minutes, the alert is active and triggers as previously described.

Next steps
Learn more about configuring webhooks in alerts.
Database Advisor performance recommendations
for Azure SQL Database
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Database


Azure SQL Database learns and adapts with your application. Azure SQL Database has a number of database
advisors that provide customized recommendations that enable you to maximize performance. These database
advisors continuously assess and analyze the usage history and provide recommendations based on workload
patterns that help improve performance.

Performance overview
Performance overview provides a summary of your database performance, and helps you with performance
tuning and troubleshooting.

The Recommendations tile provides a breakdown of tuning recommendations for your database (top three
recommendations are shown if there are more). Clicking this tile takes you to Performance recommendation options.
The Tuning activity tile provides a summary of the ongoing and completed tuning actions for your
database, giving you a quick view into the history of tuning activity. Clicking this tile takes you to the full
tuning history view for your database.
The Auto-tuning tile shows the auto-tuning configuration for your database (tuning options that are
automatically applied to your database). Clicking this tile opens the automation configuration dialog.
The Database queries tile shows the summary of the query performance for your database (overall DTU usage and top resource consuming queries). Clicking this tile takes you to Query Performance Insight.

Performance recommendation options


Performance recommendation options available in Azure SQL Database are:
PERFORMANCE RECOMMENDATION | SINGLE DATABASE AND POOLED DATABASE SUPPORT | INSTANCE DATABASE SUPPORT
Create index recommendations - Recommends creation of indexes that may improve performance of your workload. | Yes | No
Drop index recommendations - Recommends removal of redundant and duplicate indexes daily, except for unique indexes and indexes that were not used for a long time (>90 days). Note that this option is not compatible with applications using partition switching and index hints. Dropping unused indexes is not supported for Premium and Business Critical service tiers. | Yes | No
Parameterize queries recommendations (preview) - Recommends forced parameterization in cases when you have one or more queries that are constantly being recompiled but end up with the same query execution plan. | Yes | No
Fix schema issues recommendations (preview) - Recommendations for schema correction appear when Azure SQL Database notices an anomaly in the number of schema-related SQL errors that are happening on your database. Microsoft is currently deprecating "Fix schema issue" recommendations. | Yes | No

To apply performance recommendations, see applying recommendations. To view the status of recommendations, see Monitoring operations.
You can also find the complete history of tuning actions that were applied in the past.
Create index recommendations
Azure SQL Database continuously monitors the queries that are running and identifies the indexes that could
improve performance. After there's enough confidence that a certain index is missing, a new Create index
recommendation is created.
Azure SQL Database builds confidence by estimating the performance gain the index would bring through time.
Depending on the estimated performance gain, recommendations are categorized as high, medium, or low.
Indexes that are created by using recommendations are always flagged as auto-created indexes. You can see
which indexes are auto-created by looking at the sys.indexes view. Auto-created indexes don't block
ALTER/RENAME commands.
If you try to drop the column that has an auto-created index over it, the command passes. The auto-created
index is dropped with the command as well. Regular indexes block the ALTER/RENAME command on columns
that are indexed.
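For example, a query along the following lines lists indexes that the service created automatically. This is a minimal sketch that assumes the auto_created column of sys.indexes, which Azure SQL Database exposes for this purpose:

-- List indexes flagged as auto-created by performance recommendations (sketch).
SELECT OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    i.type_desc
FROM sys.indexes AS i
WHERE i.auto_created = 1;
GO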
After the create index recommendation is applied, Azure SQL Database compares the performance of the
queries with the baseline performance. If the new index improved performance, the recommendation is flagged
as successful and the impact report is available. If the index didn't improve performance, it's automatically
reverted. Azure SQL Database uses this process to ensure that recommendations improve database
performance.
Any create index recommendation has a back-off policy that doesn't allow applying the recommendation if the
resource usage of a database or pool is high. The back-off policy takes into account CPU, Data IO, Log IO, and
available storage.
If CPU, data IO, or log IO is higher than 80% in the previous 30 minutes, the create index recommendation is
postponed. If the available storage will be below 10% after the index is created, the recommendation goes into
an error state. If, after a couple of days, automatic tuning still believes that the index would be beneficial, the
process starts again.
This process repeats until there's enough available storage to create an index, or until the index isn't seen as
beneficial anymore.

Drop index recommendations


Besides detecting missing indexes, Azure SQL Database continuously analyzes the performance of existing
indexes. If an index is not used, Azure SQL Database recommends dropping it. Dropping an index is
recommended in two cases:
The index is a duplicate of another index (same indexed and included columns, partition schema, and filters).
The index hasn't been used for a prolonged period (93 days).
Drop index recommendations also go through the verification after implementation. If the performance
improves, the impact report is available. If performance degrades, the recommendation is reverted.
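If you want to review candidate indexes yourself before acting on a drop recommendation, a query along these lines against sys.dm_db_index_usage_stats surfaces nonclustered indexes with no recorded reads. This is a sketch only; usage statistics reset on failover or engine restart, so confirm over a representative period before dropping anything:

-- Nonclustered, non-unique indexes with no seeks, scans, or lookups recorded (sketch).
SELECT OBJECT_NAME(i.object_id) AS table_name,
    i.name AS index_name,
    ISNULL(us.user_updates, 0) AS user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
    ON us.object_id = i.object_id
    AND us.index_id = i.index_id
    AND us.database_id = DB_ID()
WHERE i.type_desc = N'NONCLUSTERED'
    AND i.is_unique = 0
    AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    AND ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0
ORDER BY ISNULL(us.user_updates, 0) DESC;
GO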

Parameterize queries recommendations (preview)


Parameterize queries recommendations appear when you have one or more queries that are constantly being
recompiled but end up with the same query execution plan. This condition creates an opportunity to apply
forced parameterization. Forced parameterization, in turn, allows query plans to be cached and reused in the
future, which improves performance and reduces resource usage.
Every query initially needs to be compiled to generate an execution plan. Each generated plan is added to the
plan cache. Subsequent executions of the same query can reuse this plan from the cache, which eliminates the
need for additional compilation.
Queries with non-parameterized values can lead to performance overhead because the execution plan is
recompiled each time the non-parameterized values are different. In many cases, the same queries with different
parameter values generate the same execution plans. These plans, however, are still separately added to the plan
cache.
The process of recompiling execution plans uses database resources, increases the query duration time, and
overflows the plan cache. These events, in turn, cause plans to be evicted from the cache. This behavior can be
altered by setting the forced parameterization option on the database.
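If you prefer to turn this on yourself rather than through the recommendation, forced parameterization can be set at the database level with Transact-SQL. The following is a sketch only; the setting affects all queries in the database, so test it against your workload first:

-- Enable forced parameterization for the current database (sketch).
ALTER DATABASE CURRENT SET PARAMETERIZATION FORCED;
GO
-- Return to the default behavior if needed.
-- ALTER DATABASE CURRENT SET PARAMETERIZATION SIMPLE;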
To help you estimate the impact of this recommendation, you are provided with a comparison between the
actual CPU usage and the projected CPU usage (as if the recommendation were applied). This recommendation
can help you gain CPU savings. It can also help you decrease query duration and overhead for the plan cache,
which means that more of the plans can stay in the cache and be reused. You can apply this recommendation
quickly by selecting the Apply command.
After you apply this recommendation, it enables forced parameterization within minutes on your database. It
starts the monitoring process, which lasts for approximately 24 hours. After this period, you can see the
validation report. This report shows the CPU usage of your database 24 hours before and after the
recommendation has been applied. Azure SQL Database Advisor has a safety mechanism that automatically
reverts the applied recommendation if performance regression has been detected.

Fix schema issues recommendations (preview)


IMPORTANT
Microsoft is currently deprecating "Fix schema issue" recommendations. We recommend that you use Intelligent Insights
to monitor your database performance issues, including schema issues that the "Fix schema issue" recommendations
previously covered.

Fix schema issues recommendations appear when Azure SQL Database notices an anomaly in the number of
schema-related SQL errors that are happening on your database. This recommendation typically appears when
your database encounters multiple schema-related errors (invalid column name, invalid object name, and so on)
within an hour.
"Schema issues" are a class of syntax errors. They occur when the definition of the SQL query and the definition
of the database schema aren't aligned. For example, one of the columns that's expected by the query might be
missing in the target table or vice-versa.
The "Fix schema issue" recommendation appears when Azure SQL Database notices an anomaly in the number
of schema-related SQL errors that are happening on your database. The following table shows the errors that
are related to schema issues:

SQL ERROR CODE | MESSAGE
201 | Procedure or function '' expects parameter '', which was not supplied.
207 | Invalid column name '*'.
208 | Invalid object name '*'.
213 | Column name or number of supplied values does not match table definition.
2812 | Could not find stored procedure '*'.
8144 | Procedure or function * has too many arguments specified.

Custom applications
Developers might consider developing custom applications using performance recommendations for Azure SQL
Database. All recommendations listed in the portal for a database can be accessed through the Get-AzSqlDatabaseRecommendedAction API.

Next steps
For more information about automatic tuning of database indexes and query execution plans, see Azure SQL
Database automatic tuning.
For more information about automatically monitoring database performance with automated diagnostics
and root cause analysis of performance issues, see Azure SQL Intelligent Insights.
See Query Performance Insights to learn about and view the performance impact of your top queries.
Stream data into Azure SQL Database using Azure
Stream Analytics integration (preview)
7/12/2022 • 6 minutes to read

Users can now ingest, process, view, and analyze real-time streaming data into a table directly from a database
in Azure SQL Database. They do so in the Azure portal using Azure Stream Analytics. This experience enables a
wide variety of scenarios such as connected car, remote monitoring, fraud detection, and many more. In the
Azure portal, you can select an events source (Event Hub/IoT Hub), view incoming real-time events, and select a
table to store events. You can also write Azure Stream Analytics Query Language queries in the portal to
transform incoming events and store them in the selected table. This new entry point is in addition to the
creation and configuration experiences that already exist in Stream Analytics. This experience starts from the
context of your database, enabling you to quickly set up a Stream Analytics job and navigate seamlessly
between the database in Azure SQL Database and Stream Analytics experiences.

Key benefits
Minimum context switching: You can start from a database in Azure SQL Database in the portal and start
ingesting real-time data into a table without switching to any other service.
Reduced number of steps: The context of your database and table is used to pre-configure a Stream Analytics
job.
Additional ease of use with preview data: Preview incoming data from the events source (Event Hub/IoT Hub)
in the context of the selected table.

IMPORTANT
An Azure Stream Analytics job can output to Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse
Analytics. For more information, see Outputs.

Prerequisites
To complete the steps in this article, you need the following resources:
An Azure subscription. If you don't have an Azure subscription, create a free account.
A database in Azure SQL Database. For details, see Create a single database in Azure SQL Database.
A firewall rule allowing your computer to connect to the server. For details, see Create a server-level firewall
rule.

Configure Stream analytics integration


1. Sign in to the Azure portal.
2. Navigate to the database where you want to ingest your streaming data. Select Stream analytics
(preview) .

3. To start ingesting your streaming data into this database, select Create and give a name to your
streaming job, and then select Next: Input .

4. Enter your events source details, and then select Next: Output .
Input type : Event Hub/IoT Hub
Input alias : Enter a name to identify your events source
Subscription : Same as Azure SQL Database subscription
Event Hub namespace : Name for namespace
Event Hub name : Name of event hub within selected namespace
Event Hub policy name (Default to create new): Give a policy name
Event Hub consumer group (Default to create new): Give a consumer group name
We recommend that you create a consumer group and a policy for each new Azure Stream
Analytics job that you create from here. Consumer groups allow only five concurrent readers, so
providing a dedicated consumer group for each job will avoid any errors that might arise from
exceeding that limit. A dedicated policy allows you to rotate your key or revoke permissions
without impacting other resources.

5. Select which table you want to ingest your streaming data into. Once done, select Create .
Username , Password : Enter your credentials for SQL server authentication. Select Validate .
Table : Select Create new or Use existing . In this flow, let’s select Create . This will create a new
table when you start the stream Analytics job.

6. A query page opens with following details:


Your Input (input events source) from which you'll ingest data
Your Output (output table) which will store transformed data
Sample SAQL query with SELECT statement.
Input preview : Shows snapshot of latest incoming data from input events source.
The serialization type in your data is automatically detected (JSON/CSV). You can manually
change it as well to JSON/CSV/AVRO.
You can preview incoming data in the Table format or Raw format.
If your data shown isn't current, select Refresh to see the latest events.
Select Select time range to test your query against a specific time range of incoming events.
Select Upload sample input to test your query by uploading a sample JSON/CSV file. For
more information about testing a SAQL query, see Test an Azure Stream Analytics job with
sample data.

Test results: Select Test query and you can see the results of your streaming query.

Test results schema : Shows the schema of the results of your streaming query after testing.
Make sure the test results schema matches with your output schema.

Output schema : This contains schema of the table you selected in step 5 (new or existing).
Create new: If you selected this option in step 5, you won’t see the schema yet until you start
the streaming job. When creating a new table, select the appropriate table index. For more
information about table indexing, see Clustered and Nonclustered Indexes Described.
Use existing: If you selected this option in step 5, you'll see the schema of selected table.
7. After you're done authoring and testing the query, select Save query. Select Start Stream Analytics job to start ingesting transformed data into the SQL table. Once you finalize the following fields, start the job.
Output start time: This defines the time of the first output of the job.
Now: The job will start now and process new incoming data.
Custom: The job will start now but will process data from a specific point in time (that can be in
the past or the future). For more information, see How to start an Azure Stream Analytics job.
Streaming units : Azure Stream Analytics is priced by the number of streaming units required to
process the data into the service. For more information, see Azure Stream Analytics pricing.
Output data error handling :
Retry: When an error occurs, Azure Stream Analytics retries writing the event indefinitely until
the write succeeds. There's no timeout for retries. Eventually all subsequent events are blocked
from processing by the event that is retrying. This option is the default output error handling
policy.
Drop: Azure Stream Analytics will drop any output event that results in a data conversion error.
The dropped events can't be recovered for reprocessing later. All transient errors (for example,
network errors) are retried regardless of the output error handling policy configuration.
SQL Database output settings : An option for inheriting the partitioning scheme of your
previous query step, to enable fully parallel topology with multiple writers to the table. For more
information, see Azure Stream Analytics output to Azure SQL Database.
Max batch count : The recommended upper limit on the number of records sent with every bulk
insert transaction.
For more information about output error handling, see Output error policies in Azure Stream
Analytics.

8. Once you start the job, you'll see the Running job in the list, and you can take the following actions:
Start/stop the job: If the job is running, you can stop the job. If the job is stopped, you can start the job.
Edit job: You can edit the query. If you want to make further changes to the job, for example, add more inputs or outputs, open the job in Stream Analytics. The Edit option is disabled when the job is running.
Preview output table: You can preview the table in the SQL query editor.
Open in Stream Analytics: Open the job in Stream Analytics to view monitoring and debugging details of the job.

Next steps
Azure Stream Analytics documentation
Azure Stream Analytics solution patterns
Diagnose and troubleshoot high CPU on Azure SQL
Database
7/12/2022 • 18 minutes to read

APPLIES TO: Azure SQL Database


Azure SQL Database provides built-in tools to identify the causes of high CPU usage and to optimize workload
performance. You can use these tools to troubleshoot high CPU usage while it's occurring, or reactively after the
incident has completed. You can also enable automatic tuning to proactively reduce CPU usage over time for
your database. This article teaches you to diagnose and troubleshoot high CPU with built-in tools in Azure SQL
Database and explains when to add CPU resources.

Understand vCore count


It's helpful to understand the number of virtual cores (vCores) available to your database when diagnosing a
high CPU incident. A vCore is equivalent to a logical CPU. The number of vCores helps you understand the CPU
resources available to your database.
Identify vCore count in the Azure portal
You can quickly identify the vCore count for a database in the Azure portal if you're using a vCore-based service
tier with the provisioned compute tier. In this case, the pricing tier listed for the database on its Overview
page will contain the vCore count. For example, a database's pricing tier might be 'General Purpose: Gen5, 16
vCores'.
For databases in the serverless compute tier, vCore count will always be equivalent to the max vCore setting for the database. The vCore count shows in the pricing tier listed for the database on its Overview page. For
example, a database's pricing tier might be 'General Purpose: Serverless, Gen5, 16 vCores'.
If you're using a database under the DTU-based purchasing model, you will need to use Transact-SQL to query
the database's vCore count.
Identify vCore count with Transact-SQL
You can identify the current vCore count for any database with Transact-SQL. You can run Transact-SQL against
Azure SQL Database with SQL Server Management Studio (SSMS), Azure Data Studio, or the Azure portal's
query editor (preview).
Connect to your database and run the following query:

SELECT
COUNT(*) as vCores
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';
GO

NOTE
For databases using Gen4 hardware, the number of visible online schedulers in sys.dm_os_schedulers may be double
the number of vCores specified at database creation and shown in Azure portal.
Identify the causes of high CPU
You can measure and analyze CPU utilization using the Azure portal, Query Store interactive tools in SSMS, and
Transact-SQL queries in SSMS and Azure Data Studio.
The Azure portal and Query Store show execution statistics, such as CPU metrics, for completed queries. If you
are experiencing a current high CPU incident that may be caused by one or more ongoing long-running queries,
identify currently running queries with Transact-SQL.
Common causes of new and unusual high CPU utilization are:
New queries in the workload that use a large amount of CPU.
An increase in the frequency of regularly running queries.
Query plan regression, including regression due to parameter sensitive plan (PSP) problems, resulting in one
or more queries consuming more CPU.
A significant increase in compilation or recompilation of query plans.
Databases where queries use excessive parallelism.
To understand what is causing your high CPU incident, identify when high CPU utilization is occurring against
your database and the top queries using CPU at that time.
Examine:
Are new queries using significant CPU appearing in the workload, or are you seeing an increase in frequency
of regularly running queries? Use any of the following methods to investigate. Look for queries with limited
history (new queries), and at the frequency of execution for queries with longer history.
Review CPU metrics and related top queries in the Azure portal
Query the top recent 15 queries by CPU usage with Transact-SQL.
Use interactive Query Store tools in SSMS to identify top queries by CPU time
Are some queries in the workload using more CPU per execution than they did in the past? If so, has the
query execution plan changed? These queries may have parameter sensitive plan (PSP) problems. Use either
of the following techniques to investigate. Look for queries with multiple query execution plans with
significant variation in CPU usage:
Query the top recent 15 queries by CPU usage with Transact-SQL.
Use interactive Query Store tools in SSMS to identify top queries by CPU time
Is there evidence of a large amount of compilation or recompilation occurring? Query the most frequently
compiled queries by query hash and review how frequently they compile.
Are queries using excessive parallelism? Query your MAXDOP database scoped configuration and review
your vCore count. Excessive parallelism often occurs in databases where MAXDOP is set to 0 with a core
count higher than eight.

NOTE
Azure SQL Database requires compute resources to implement core service features such as high availability and disaster
recovery, database backup and restore, monitoring, Query Store, automatic tuning, etc. Use of these compute resources
may be particularly noticeable on databases with low vCore counts or databases in dense elastic pools. Learn more in
Resource management in Azure SQL Database.

Review CPU usage metrics and related top queries in the Azure portal
Use the Azure portal to track various CPU metrics, including the percentage of available CPU used by your
database over time. The Azure portal combines CPU metrics with information from your database's Query Store,
which allows you to identify which queries consumed CPU in your database at a given time.
Follow these steps to find CPU percentage metrics.
1. Navigate to the database in the Azure portal.
2. Under Intelligent Performance in the left menu, select Query Performance Insight.
The default view of Query Performance Insight shows 24 hours of data. CPU usage is shown as a percentage of
total available CPU used for the database.
The top five queries running in that period are displayed in vertical bars above the CPU usage graph. Select a
band of time on the chart or use the Customize menu to explore specific time periods. You may also increase
the number of queries shown.

Select each query ID exhibiting high CPU to open details for the query. Details include query text along with
performance history for the query. Examine if CPU has increased for the query recently.
Take note of the query ID to further investigate the query plan using Query Store in the following section.
Review query plans for top queries identified in the Azure portal
Follow these steps to use a query ID in SSMS's interactive Query Store tools to examine the query's execution
plan over time.
1. Open SSMS.
2. Connect to your Azure SQL Database in Object Explorer.
3. Expand the database node in Object Explorer
4. Expand the Query Store folder.
5. Open the Tracked Queries pane.
6. Enter the query ID in the Tracking query box at the top left of the screen and press Enter.
7. If necessary, select Configure to adjust the time interval to match the time when high CPU utilization was
occurring.
The page will show the execution plan(s) and related metrics for the query over the most recent 24 hours.
Identify currently running queries with Transact-SQL
Transact-SQL allows you to identify currently running queries with CPU time they have used so far. You can also
use Transact-SQL to query recent CPU usage in your database, top queries by CPU, and queries that compiled
the most often.
You can query CPU metrics with SQL Server Management Studio (SSMS), Azure Data Studio, or the Azure
portal's query editor (preview). When using SSMS or Azure Data Studio, open a new query window and connect
it to your database (not the master database).
Find currently running queries with CPU usage and execution plans by executing the following query. CPU time
is returned in milliseconds.

SELECT
req.session_id,
req.status,
req.start_time,
req.cpu_time AS 'cpu_time_ms',
req.logical_reads,
req.dop,
s.login_name,
s.host_name,
s.program_name,
object_name(st.objectid,st.dbid) 'ObjectName',
REPLACE (REPLACE (SUBSTRING (st.text,(req.statement_start_offset/2) + 1,
((CASE req.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
ELSE req.statement_end_offset END - req.statement_start_offset)/2) + 1),
CHAR(10), ' '), CHAR(13), ' ') AS statement_text,
qp.query_plan,
qsx.query_plan as query_plan_with_in_flight_statistics
FROM sys.dm_exec_requests as req
JOIN sys.dm_exec_sessions as s on req.session_id=s.session_id
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) as st
OUTER APPLY sys.dm_exec_query_plan(req.plan_handle) as qp
OUTER APPLY sys.dm_exec_query_statistics_xml(req.session_id) as qsx
ORDER BY req.cpu_time desc;
GO

This query returns two copies of the execution plan. The column query_plan contains the execution plan from
sys.dm_exec_query_plan(). This version of the query plan contains only estimates of row counts and does not
contain any execution statistics.
If the column query_plan_with_in_flight_statistics returns an execution plan, this plan provides more
information. The query_plan_with_in_flight_statistics column returns data from
sys.dm_exec_query_statistics_xml(), which includes "in flight" execution statistics such as the actual number of
rows returned so far by a currently running query.
Review CPU usage metrics for the last hour
The following query against sys.dm_db_resource_stats returns the average CPU usage over 15-second intervals
for approximately the last hour.

SELECT
end_time,
avg_cpu_percent,
avg_instance_cpu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
GO

It is important to not focus only on the avg_cpu_percent column. The avg_instance_cpu_percent column
includes CPU used by both user and internal workloads. If avg_instance_cpu_percent is close to 100%, CPU
resources are saturated. In this case, you should troubleshoot high CPU if app throughput is insufficient or
query latency is high.
Learn more in Resource management in Azure SQL Database.
Review the examples in sys.dm_db_resource_stats for more queries.
Query the top recent 15 queries by CPU usage
Query Store tracks execution statistics, including CPU usage, for queries. The following query returns the top 15
queries that have run in the last 2 hours, sorted by CPU usage. CPU time is returned in milliseconds.

WITH AggregatedCPU AS
(SELECT
q.query_hash,
SUM(count_executions * avg_cpu_time / 1000.0) AS total_cpu_ms,
SUM(count_executions * avg_cpu_time / 1000.0)/ SUM(count_executions) AS avg_cpu_ms,
MAX(rs.max_cpu_time / 1000.00) AS max_cpu_ms,
MAX(max_logical_io_reads) max_logical_reads,
COUNT(DISTINCT p.plan_id) AS number_of_distinct_plans,
COUNT(DISTINCT p.query_id) AS number_of_distinct_query_ids,
SUM(CASE WHEN rs.execution_type_desc='Aborted' THEN count_executions ELSE 0 END) AS
aborted_execution_count,
SUM(CASE WHEN rs.execution_type_desc='Regular' THEN count_executions ELSE 0 END) AS
regular_execution_count,
SUM(CASE WHEN rs.execution_type_desc='Exception' THEN count_executions ELSE 0 END) AS
exception_execution_count,
SUM(count_executions) AS total_executions,
MIN(qt.query_sql_text) AS sampled_query_text
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id=q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id=p.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id=p.plan_id
JOIN sys.query_store_runtime_stats_interval AS rsi ON
rsi.runtime_stats_interval_id=rs.runtime_stats_interval_id
WHERE
rs.execution_type_desc IN ('Regular', 'Aborted', 'Exception') AND
rsi.start_time>=DATEADD(HOUR, -2, GETUTCDATE())
GROUP BY q.query_hash),
OrderedCPU AS
(SELECT *,
ROW_NUMBER() OVER (ORDER BY total_cpu_ms DESC, query_hash ASC) AS RN
FROM AggregatedCPU)
SELECT *
FROM OrderedCPU AS OD
WHERE OD.RN<=15
ORDER BY total_cpu_ms DESC;
GO

This query groups by a hashed value of the query. If you find a high value in the number_of_distinct_query_ids
column, investigate whether a frequently run query isn't properly parameterized. Non-parameterized queries may be compiled on each execution, which consumes significant CPU and affects the performance of Query Store.
To learn more about an individual query, note the query hash and use it to Identify the CPU usage and query
plan for a given query hash.
Query the most frequently compiled queries by query hash
Compiling a query plan is a CPU-intensive process. Azure SQL Database caches plans in memory for reuse. Some
queries may be frequently compiled if they are not parameterized or if RECOMPILE hints force recompilation.
Query Store tracks the number of times queries are compiled. Run the following query to identify the top 20
queries in Query Store by compilation count, along with the average number of compilations per minute:
SELECT TOP (20)
query_hash,
MIN(initial_compile_start_time) as initial_compile_start_time,
MAX(last_compile_start_time) as last_compile_start_time,
CASE WHEN DATEDIFF(mi,MIN(initial_compile_start_time), MAX(last_compile_start_time)) > 0
THEN 1.* SUM(count_compiles) / DATEDIFF(mi,MIN(initial_compile_start_time),
MAX(last_compile_start_time))
ELSE 0
END as avg_compiles_minute,
SUM(count_compiles) as count_compiles
FROM sys.query_store_query AS q
GROUP BY query_hash
ORDER BY count_compiles DESC;
GO

To learn more about an individual query, note the query hash and use it to Identify the CPU usage and query
plan for a given query hash.
Identify the CPU usage and query plan for a given query hash
Run the following query to find the individual query ID, query text, and query execution plans for a given
query_hash . CPU time is returned in milliseconds.

Replace the value for the @query_hash variable with a valid query_hash for your workload.

declare @query_hash binary(8);

SET @query_hash = 0x6557BE7936AA2E91;

with query_ids as (
SELECT
q.query_hash,
q.query_id,
p.query_plan_hash,
SUM(qrs.count_executions) * AVG(qrs.avg_cpu_time)/1000. as total_cpu_time_ms,
SUM(qrs.count_executions) AS sum_executions,
AVG(qrs.avg_cpu_time)/1000. AS avg_cpu_time_ms
FROM sys.query_store_query q
JOIN sys.query_store_plan p on q.query_id=p.query_id
JOIN sys.query_store_runtime_stats qrs on p.plan_id = qrs.plan_id
WHERE q.query_hash = @query_hash
GROUP BY q.query_id, q.query_hash, p.query_plan_hash)
SELECT qid.*,
qt.query_sql_text,
p.count_compiles,
TRY_CAST(p.query_plan as XML) as query_plan
FROM query_ids as qid
JOIN sys.query_store_query AS q ON qid.query_id=q.query_id
JOIN sys.query_store_query_text AS qt on q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON qid.query_id=p.query_id and qid.query_plan_hash=p.query_plan_hash
ORDER BY total_cpu_time_ms DESC;
GO

This query returns one row for each variation of an execution plan for the query_hash across the entire history
of your Query Store. The results are sorted by total CPU time.
Use interactive Query Store tools to track historic CPU utilization
If you prefer to use graphic tools, follow these steps to use the interactive Query Store tools in SSMS.
1. Open SSMS and connect to your database in Object Explorer.
2. Expand the database node in Object Explorer
3. Expand the Query Store folder.
4. Open the Overall Resource Consumption pane.
Total CPU time for your database over the last month in milliseconds is shown in the bottom-left portion of the
pane. In the default view, CPU time is aggregated by day.

Select Configure in the top right of the pane to select a different time period. You can also change the unit of
aggregation. For example, you can choose to see data for a specific date range and aggregate the data by hour.
Use interactive Query Store tools to identify top queries by CPU time
Select a bar in the chart to drill in and see queries running in a specific time period. The Top Resource
Consuming Queries pane will open. Alternately, you can open Top Resource Consuming Queries from the
Query Store node under your database in Object Explorer directly.

In the default view, the Top Resource Consuming Queries pane shows queries by Duration (ms) . Duration
may sometimes be lower than CPU time: queries using parallelism may use much more CPU time than their
overall duration. Duration may also be higher than CPU time if waits were significant. To see queries by CPU
time, select the Metric drop-down at the top left of the pane and select CPU Time(ms) .
Each bar in the top-left quadrant represents a query. Select a bar to see details for that query. The top-right
quadrant of the screen shows how many execution plans are in Query Store for that query and maps them
according to when they were executed and how much of your selected metric was used. Select each Plan ID to
control which query execution plan is displayed in the bottom half of the screen.

NOTE
For a guide to interpreting Query Store views and the shapes which appear in the Top Resource Consumers view, see Best
practices with Query Store

Reduce CPU usage


Part of your troubleshooting should include learning more about the queries identified in the previous section.
You can reduce CPU usage by tuning indexes, modifying your application patterns, tuning queries, and adjusting
CPU-related settings for your database.
If you found new queries using significant CPU appearing in the workload, validate that indexes have been
optimized for those queries. You can tune indexes manually or reduce CPU usage with automatic index
tuning. Evaluate if your max degree of parallelism setting is correct for your increased workload.
If you found that the overall execution count of queries is higher than it used to be, tune indexes for your
highest CPU consuming queries and consider automatic index tuning. Evaluate if your max degree of
parallelism setting is correct for your increased workload.
If you found queries in the workload with parameter sensitive plan (PSP) problems, consider automatic plan
correction (force plan). You can also manually force a plan in Query Store or tune the Transact-SQL for the
query to result in a consistently high-performing query plan.
If you found evidence that a large amount of compilation or recompilation is occurring, tune the queries so
that they are properly parameterized or do not require recompile hints.
If you found that queries are using excessive parallelism, tune the max degree of parallelism.
Consider the following strategies in this section.
Reduce CPU usage with automatic index tuning
Effective index tuning reduces CPU usage for many queries. Optimized indexes reduce the logical and physical
reads for a query, which often results in the query needing to do less work.
Azure SQL Database offers automatic index management for workloads on primary replicas. Automatic index
management uses machine learning to monitor your workload and optimize rowstore disk-based nonclustered
indexes for your database.
Review performance recommendations, including index recommendations, in the Azure portal. You can apply
these recommendations manually or enable the CREATE INDEX automatic tuning option to create and verify the
performance of new indexes in your database.
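Automatic index management can also be enabled with Transact-SQL. The following statement is a sketch; CREATE_INDEX and DROP_INDEX are the automatic tuning option names assumed here, so confirm them for your environment before relying on them:

-- Enable automatic index management for the current database (sketch).
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON);
GO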
Reduce CPU usage with automatic plan correction (force plan)
Another common cause of high CPU incidents is execution plan choice regression. Azure SQL Database offers
the force plan automatic tuning option to identify regressions in query execution plans in workloads on primary
replicas. With this automatic tuning feature enabled, Azure SQL Database will test if forcing a query execution
plan results in reliable improved performance for queries with execution plan regression.
If your database was created after March 2020, the force plan automatic tuning option was automatically
enabled. If your database was created prior to this time, you may wish to enable the force plan automatic tuning
option.
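If you need to enable the option yourself, a minimal Transact-SQL sketch is:

-- Enable automatic plan correction (force last good plan) for the current database (sketch).
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);
GO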
Tune indexes manually
Use the methods described in Identify the causes of high CPU to identify query plans for your top CPU
consuming queries. These execution plans will aid you in identifying and adding nonclustered indexes to speed
up your queries.
Each disk based nonclustered index in your database requires storage space and must be maintained by the SQL
engine. Modify existing indexes instead of adding new indexes when possible and ensure that new indexes
successfully reduce CPU usage. For an overview of nonclustered indexes, see Nonclustered Index Design
Guidelines.
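As an illustration only, the following statement creates a hypothetical covering nonclustered index for a query that filters on CustomerID and OrderDate and returns TotalDue. The table and column names are placeholders, not objects referenced elsewhere in this article:

-- Hypothetical covering index for a frequent lookup pattern (sketch).
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (TotalDue)
WITH (ONLINE = ON);
GO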
For some workloads, columnstore indexes may be the best choice to reduce CPU of frequent read queries. See
Columnstore indexes - Design guidance for high-level recommendations on scenarios when columnstore
indexes may be appropriate.
Tune your application, queries, and database settings
In examining your top queries, you may find application characteristics to tune such as "chatty" behavior,
workloads that would benefit from sharding, and suboptimal database access design. For read-heavy workloads,
consider read-only replicas to offload read-only query workloads and application-tier caching as long-term
strategies to scale out frequently read data.
You may also choose to manually tune the top CPU using queries identified in your workload. Manual tuning
options include rewriting Transact-SQL statements, forcing plans in Query Store, and applying query hints.
If you identify cases where queries sometimes use an execution plan that is not optimal for performance, review the solutions for queries that have parameter sensitive plan (PSP) problems.
If you identify non-parameterized queries with a high number of plans, consider parameterizing these queries,
making sure to fully declare parameter data types, including length and precision. This may be done by
modifying the queries, creating a plan guide to force parameterization of a specific query, or by enabling forced
parameterization at the database level.
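As a hypothetical illustration of fully declared parameters, the call below uses sp_executesql with explicit data types, length, and precision. The table and values are placeholders:

-- Parameterized execution with explicit parameter data types (sketch).
EXEC sp_executesql
    N'SELECT OrderID, TotalDue FROM dbo.Orders WHERE CustomerName = @name AND TotalDue > @minTotal;',
    N'@name nvarchar(100), @minTotal decimal(18,2)',
    @name = N'Contoso', @minTotal = 250.00;
GO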
If you identify queries with high compilation rates, identify what causes the frequent compilation. The most
common cause of frequent compilation is RECOMPILE hints. Whenever possible, identify when the RECOMPILE
hint was added and what problem it was meant to solve. Investigate whether an alternate performance tuning
solution can be implemented to provide consistent performance for frequently running queries without a
RECOMPILE hint.

Reduce CPU usage by tuning the max degree of parallelism


The max degree of parallelism (MAXDOP) setting controls intra-query parallelism in the database engine. Higher
MAXDOP values generally result in more parallel threads per query, and faster query execution.
In some cases, a large number of parallel queries running concurrently can slow down a workload and cause
high CPU usage. Excessive parallelism is most likely to occur in databases with a large number of vCores where
MAXDOP is set to a high number or to zero. When MAXDOP is set to zero, the database engine sets the number
of schedulers to be used by parallel threads to the total number of logical cores or 64, whichever is smaller.
You can identify the max degree of parallelism setting for your database with Transact-SQL. Connect to your
database with SSMS or Azure Data Studio and run the following query:
SELECT
name,
value,
value_for_secondary,
is_value_default
FROM sys.database_scoped_configurations
WHERE name=N'MAXDOP';
GO

Consider experimenting with small changes in the MAXDOP configuration at the database level, or modifying
individual problematic queries to use a non-default MAXDOP using a query hint. For more information, see the
examples in configure max degree of parallelism.
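Both approaches are sketched below: a database scoped configuration change that applies to all queries, and a per-query hint. The table in the hint example is a placeholder:

-- Cap parallelism for the whole database (sketch).
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
GO

-- Or cap parallelism for a single query with a hint (dbo.Orders is a placeholder).
SELECT CustomerID, SUM(TotalDue) AS total_due
FROM dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 4);
GO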

When to add CPU resources


You may find that your workload's queries and indexes are properly tuned, or that performance tuning requires
changes that you cannot make in the short term due to internal processes or other reasons. Adding more CPU
resources may be beneficial for these databases. You can scale database resources with minimal downtime.
You can add more CPU resources to your Azure SQL Database by configuring the vCore count or the hardware
configuration for databases using the vCore purchasing model.
Under the DTU-based purchasing model, you can raise your service tier and increase the number of database
transaction units (DTUs). A DTU represents a blended measure of CPU, memory, reads, and writes. One benefit
of the vCore purchasing model is that it allows more granular control over the hardware in use and the number
of vCores. You can migrate Azure SQL Database from the DTU-based model to the vCore-based model to
transition between purchasing models.

Next steps
Learn more about monitoring and performance tuning Azure SQL Database in the following articles:
Monitoring Azure SQL Database and Azure SQL Managed Instance performance using dynamic
management views
SQL Server index architecture and design guide
Enable automatic tuning to monitor queries and improve workload performance
Query processing architecture guide
Best practices with Query Store
Detectable types of query performance bottlenecks in Azure SQL Database
Analyze and prevent deadlocks in Azure SQL Database
Understand and resolve Azure SQL Database
blocking problems
7/12/2022 • 27 minutes to read

APPLIES TO: Azure SQL Database

Objective
This article describes blocking in databases in Azure SQL Database and demonstrates how to troubleshoot and resolve blocking.
In this article, the term connection refers to a single logged-on session of the database. Each connection appears
as a session ID (SPID) or session_id in many DMVs. Each of these SPIDs is often referred to as a process,
although it is not a separate process context in the usual sense. Rather, each SPID consists of the server
resources and data structures necessary to service the requests of a single connection from a given client. A
single client application may have one or more connections. From the perspective of Azure SQL Database, there
is no difference between multiple connections from a single client application on a single client computer and
multiple connections from multiple client applications or multiple client computers; they are atomic. One
connection can block another connection, regardless of the source client.
For information on troubleshooting deadlocks, see Analyze and prevent deadlocks in Azure SQL Database.

NOTE
This content is focused on Azure SQL Database. Azure SQL Database is based on the latest stable version of the
Microsoft SQL Server database engine, so much of the content is similar though troubleshooting options and tools may
differ. For more on blocking in SQL Server, see Understand and resolve SQL Server blocking problems.

Understand blocking
Blocking is an unavoidable and by-design characteristic of any relational database management system
(RDBMS) with lock-based concurrency. Blocking in a database in Azure SQL Database occurs when one session
holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type on the same
resource. Typically, the time frame for which the first SPID locks the resource is small. When the owning session
releases the lock, the second connection is then free to acquire its own lock on the resource and continue
processing. This is normal behavior and may happen many times throughout the course of a day with no
noticeable effect on system performance.
Each new database in Azure SQL Database has the read committed snapshot (RCSI) database setting enabled by
default. Blocking between sessions reading data and sessions writing data is minimized under RCSI, which uses
row versioning to increase concurrency. However, blocking and deadlocks may still occur in databases in Azure
SQL Database because:
Queries that modify data may block one another.
Queries may run under isolation levels that increase blocking. Isolation levels may be specified in application
connection strings, query hints, or SET statements in Transact-SQL.
RCSI may be disabled, causing the database to use shared (S) locks to protect SELECT statements run under
the read committed isolation level. This may increase blocking and deadlocks.
Snapshot isolation level is also enabled by default for new databases in Azure SQL Database. Snapshot isolation
is an additional row-based isolation level that provides transaction-level consistency for data and which uses
row versions to select rows to update. To use snapshot isolation, queries or connections must explicitly set their
transaction isolation level to SNAPSHOT . This may only be done when snapshot isolation is enabled for the
database.
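A minimal sketch of opting a session into snapshot isolation follows; the table name is a placeholder, and the reads only succeed when snapshot isolation is enabled on the database:

-- Run a read under snapshot isolation (sketch; dbo.Orders is a placeholder table).
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    SELECT OrderID, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = 42;
COMMIT TRANSACTION;
GO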
You can identify if RCSI and/or snapshot isolation are enabled with Transact-SQL. Connect to your database in
Azure SQL Database and run the following query:

SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = DB_NAME();
GO

If RCSI is enabled, the is_read_committed_snapshot_on column will return the value 1 . If snapshot isolation is
enabled, the snapshot_isolation_state_desc column will return the value ON .
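If RCSI has been disabled and you want to restore the default behavior, it can be re-enabled with Transact-SQL. This is a sketch only; the change needs exclusive access to the database, so run it at a quiet time or include a termination option such as WITH ROLLBACK IMMEDIATE:

-- Re-enable read committed snapshot isolation for the current database (sketch).
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
GO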
The duration and transaction context of a query determine how long its locks are held and, thereby, their effect
on other queries. SELECT statements run under RCSI do not acquire shared (S) locks on the data being read, and
therefore do not block transactions that are modifying data. For INSERT, UPDATE, and DELETE statements, the
locks are held during the query, both for data consistency and to allow the query to be rolled back if necessary.
For queries executed within an explicit transaction, the type of locks and duration for which the locks are held
are determined by the type of query, the transaction isolation level, and whether lock hints are used in the query.
For a description of locking, lock hints, and transaction isolation levels, see the following articles:
Locking in the Database Engine
Customizing Locking and Row Versioning
Lock Modes
Lock Compatibility
Transactions
When locking and blocking persists to the point where there is a detrimental effect on system performance, it is
due to one of the following reasons:
A SPID holds locks on a set of resources for an extended period of time before releasing them. This type
of blocking resolves itself over time but can cause performance degradation.
A SPID holds locks on a set of resources and never releases them. This type of blocking does not resolve
itself and prevents access to the affected resources indefinitely.
In the first scenario, the situation can be very fluid as different SPIDs cause blocking on different resources over
time, creating a moving target. These situations are difficult to troubleshoot using SQL Server Management
Studio to narrow down the issue to individual queries. In contrast, the second situation results in a consistent
state that can be easier to diagnose.

Applications and blocking


There may be a tendency to focus on server-side tuning and platform issues when facing a blocking problem.
However, attention paid only to the database may not lead to a resolution, and can absorb time and energy
better directed at examining the client application and the queries it submits. No matter what level of visibility
the application exposes regarding the database calls being made, a blocking problem nonetheless frequently
requires both the inspection of the exact SQL statements submitted by the application and the application's exact
behavior regarding query cancellation, connection management, fetching all result rows, and so on. If the
development tool does not allow explicit control over connection management, query cancellation, query time-out, result fetching, and so on, blocking problems may not be resolvable. This potential should be closely
examined before selecting an application development tool for Azure SQL Database, especially for performance
sensitive OLTP environments.
Pay attention to database performance during the design and construction phase of the database and
application. In particular, the resource consumption, isolation level, and transaction path length should be
evaluated for each query. Each query and transaction should be as lightweight as possible. Good connection management discipline must be exercised; without it, the application may appear to have acceptable performance at low numbers of users, but performance may degrade significantly as the number of users scales upward.
With proper application and query design, Azure SQL Database is capable of supporting many thousands of
simultaneous users on a single server, with little blocking.

NOTE
For more application development guidance, see Troubleshooting connectivity issues and other errors with Azure SQL
Database and Azure SQL Managed Instance and Transient Fault Handling.

Troubleshoot blocking
Regardless of which blocking situation you are in, the methodology for troubleshooting locking is the same. These logical separations dictate the structure of the rest of this article. The concept is to find the head blocker and identify what that query is doing and why it is blocking. Once the problematic query is identified (that is, what is holding locks for the prolonged period), the next step is to analyze and determine why the blocking is happening. After you understand the why, you can then make changes by redesigning the query and the transaction.
Steps in troubleshooting:
1. Identify the main blocking session (head blocker)
2. Find the query and transaction that is causing the blocking (what is holding locks for a prolonged period)
3. Analyze/understand why the prolonged blocking occurs
4. Resolve blocking issue by redesigning query and transaction
Now let's dive in to discuss how to pinpoint the main blocking session with an appropriate data capture.

Gather blocking information


To counteract the difficulty of troubleshooting blocking problems, a database administrator can use SQL scripts
that constantly monitor the state of locking and blocking in the database in Azure SQL Database. To gather this
data, there are essentially two methods.
The first is to query dynamic management objects (DMOs) and store the results for comparison over time.
Some objects referenced in this article are dynamic management views (DMVs) and some are dynamic
management functions (DMFs). The second method is to use XEvents to capture what is executing.

Gather information from DMVs


Referencing DMVs to troubleshoot blocking has the goal of identifying the SPID (session ID) at the head of the
blocking chain and the SQL Statement. Look for victim SPIDs that are being blocked. If any SPID is being blocked
by another SPID, then investigate the SPID owning the resource (the blocking SPID). Is that owner SPID being
blocked as well? You can walk the chain to find the head blocker then investigate why it is maintaining its lock.
Remember to run each of these scripts in the target database in Azure SQL Database.
The sp_who and sp_who2 commands are older commands to show all current sessions. The DMV
sys.dm_exec_sessions returns more data in a result set that is easier to query and filter. You will find
sys.dm_exec_sessions at the core of other queries.
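As a starting point, a basic session listing might look like the following sketch:

-- List user sessions with a few key attributes (sketch).
SELECT session_id, login_name, host_name, program_name, status,
    cpu_time, memory_usage, last_request_start_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY session_id;
GO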

If you already have a particular session identified, you can use DBCC INPUTBUFFER(<session_id>) to find the
last statement that was submitted by a session. Similar results can be returned with the
sys.dm_exec_input_buffer dynamic management function (DMF), in a result set that is easier to query
and filter, providing the session_id and the request_id. For example, to return the most recent query
submitted by session_id 66 and request_id 0:

SELECT * FROM sys.dm_exec_input_buffer (66,0);

Refer to the blocking_session_id column in sys.dm_exec_requests. When blocking_session_id = 0, a session is not being blocked. While sys.dm_exec_requests lists only requests currently executing, any connection (active or not) will be listed in sys.dm_exec_sessions. Build on this common join between sys.dm_exec_requests and sys.dm_exec_sessions in the next query.

Run this sample query to find the actively executing queries and their current SQL batch text or input
buffer text, using the sys.dm_exec_sql_text or sys.dm_exec_input_buffer DMVs. If the data returned by the
text field of sys.dm_exec_sql_text is NULL, the query is not currently executing. In that case, the
event_info field of sys.dm_exec_input_buffer will contain the last command string passed to the SQL
engine. This query can also be used to identify sessions blocking other sessions, including a list of
session_ids blocked per session_id.

WITH cteBL (session_id, blocking_these) AS
(SELECT s.session_id, blocking_these = x.blocking_these FROM sys.dm_exec_sessions s
CROSS APPLY (SELECT isnull(convert(varchar(6), er.session_id),'') + ', '
FROM sys.dm_exec_requests as er
WHERE er.blocking_session_id = isnull(s.session_id ,0)
AND er.blocking_session_id <> 0
FOR XML PATH('') ) AS x (blocking_these)
)
SELECT s.session_id, blocked_by = r.blocking_session_id, bl.blocking_these
, batch_text = t.text, input_buffer = ib.event_info, *
FROM sys.dm_exec_sessions s
LEFT OUTER JOIN sys.dm_exec_requests r on r.session_id = s.session_id
INNER JOIN cteBL as bl on s.session_id = bl.session_id
OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) t
OUTER APPLY sys.dm_exec_input_buffer(s.session_id, NULL) AS ib
WHERE blocking_these is not null or r.blocking_session_id > 0
ORDER BY len(bl.blocking_these) desc, r.blocking_session_id desc, r.session_id;

Run this more elaborate sample query, provided by Microsoft Support, to identify the head of a multiple
session blocking chain, including the query text of the sessions involved in a blocking chain.
WITH cteHead ( session_id,request_id,wait_type,wait_resource,last_wait_type,is_user_process,request_cpu_time
,request_logical_reads,request_reads,request_writes,wait_time,blocking_session_id,memory_usage
,session_cpu_time,session_reads,session_writes,session_logical_reads
,percent_complete,est_completion_time,request_start_time,request_status,command
,plan_handle,sql_handle,statement_start_offset,statement_end_offset,most_recent_sql_handle
,session_status,group_id,query_hash,query_plan_hash)
AS ( SELECT sess.session_id, req.request_id, LEFT (ISNULL (req.wait_type, ''), 50) AS 'wait_type'
, LEFT (ISNULL (req.wait_resource, ''), 40) AS 'wait_resource', LEFT (req.last_wait_type, 50) AS
'last_wait_type'
, sess.is_user_process, req.cpu_time AS 'request_cpu_time', req.logical_reads AS 'request_logical_reads'
, req.reads AS 'request_reads', req.writes AS 'request_writes', req.wait_time,
req.blocking_session_id,sess.memory_usage
, sess.cpu_time AS 'session_cpu_time', sess.reads AS 'session_reads', sess.writes AS 'session_writes',
sess.logical_reads AS 'session_logical_reads'
, CONVERT (decimal(5,2), req.percent_complete) AS 'percent_complete', req.estimated_completion_time AS
'est_completion_time'
, req.start_time AS 'request_start_time', LEFT (req.status, 15) AS 'request_status', req.command
, req.plan_handle, req.[sql_handle], req.statement_start_offset, req.statement_end_offset,
conn.most_recent_sql_handle
, LEFT (sess.status, 15) AS 'session_status', sess.group_id, req.query_hash, req.query_plan_hash
FROM sys.dm_exec_sessions AS sess
LEFT OUTER JOIN sys.dm_exec_requests AS req ON sess.session_id = req.session_id
LEFT OUTER JOIN sys.dm_exec_connections AS conn on conn.session_id = sess.session_id
)
, cteBlockingHierarchy (head_blocker_session_id, session_id, blocking_session_id, wait_type,
wait_duration_ms,
wait_resource, statement_start_offset, statement_end_offset, plan_handle, sql_handle,
most_recent_sql_handle, [Level])
AS ( SELECT head.session_id AS head_blocker_session_id, head.session_id AS session_id,
head.blocking_session_id
, head.wait_type, head.wait_time, head.wait_resource, head.statement_start_offset,
head.statement_end_offset
, head.plan_handle, head.sql_handle, head.most_recent_sql_handle, 0 AS [Level]
FROM cteHead AS head
WHERE (head.blocking_session_id IS NULL OR head.blocking_session_id = 0)
AND head.session_id IN (SELECT DISTINCT blocking_session_id FROM cteHead WHERE blocking_session_id != 0)
UNION ALL
SELECT h.head_blocker_session_id, blocked.session_id, blocked.blocking_session_id, blocked.wait_type,
blocked.wait_time, blocked.wait_resource, h.statement_start_offset, h.statement_end_offset,
h.plan_handle, h.sql_handle, h.most_recent_sql_handle, [Level] + 1
FROM cteHead AS blocked
INNER JOIN cteBlockingHierarchy AS h ON h.session_id = blocked.blocking_session_id and
h.session_id!=blocked.session_id --avoid infinite recursion for latch type of blocking
WHERE h.wait_type COLLATE Latin1_General_BIN NOT IN ('EXCHANGE', 'CXPACKET') or h.wait_type is null
)
SELECT bh.*, txt.text AS blocker_query_or_most_recent_query
FROM cteBlockingHierarchy AS bh
OUTER APPLY sys.dm_exec_sql_text (ISNULL ([sql_handle], most_recent_sql_handle)) AS txt;

To catch long-running or uncommitted transactions, use another set of DMVs for viewing current open
transactions, including sys.dm_tran_database_transactions, sys.dm_tran_session_transactions,
sys.dm_exec_connections, and sys.dm_exec_sql_text. There are several other DMVs associated with tracking
transactions; see the transaction-related dynamic management views for more.

SELECT [s_tst].[session_id],
[database_name] = DB_NAME (s_tdt.database_id),
[s_tdt].[database_transaction_begin_time],
[sql_text] = [s_est].[text]
FROM sys.dm_tran_database_transactions [s_tdt]
INNER JOIN sys.dm_tran_session_transactions [s_tst] ON [s_tst].[transaction_id] = [s_tdt].[transaction_id]
INNER JOIN sys.dm_exec_connections [s_ec] ON [s_ec].[session_id] = [s_tst].[session_id]
CROSS APPLY sys.dm_exec_sql_text ([s_ec].[most_recent_sql_handle]) AS [s_est];

Reference sys.dm_os_waiting_tasks , which operates at the thread/task layer of the database engine. It returns
information about the SQL wait type the request is currently experiencing. Like sys.dm_exec_requests , only
active requests are returned by sys.dm_os_waiting_tasks .
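For example, this short query (a sketch) returns only waiting tasks that are blocked by another session:

SELECT wt.session_id, wt.wait_duration_ms, wt.wait_type,
       wt.blocking_session_id, wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.blocking_session_id IS NOT NULL;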

NOTE
For much more on wait types including aggregated wait stats over time, see the DMV sys.dm_db_wait_stats. This DMV
returns aggregate wait stats for the current database only.
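For example, a simple query such as the following returns the top waits accumulated in the current database since the wait statistics were last reset:

SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;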

Use the sys.dm_tran_locks DMV for more granular information on what locks have been placed by queries.
This DMV can return large amounts of data on a production database, and is useful for diagnosing what
locks are currently held.
Due to the INNER JOIN on sys.dm_os_waiting_tasks , the following query restricts the output from
sys.dm_tran_locks only to currently blocked requests, their wait status, and their locks:

SELECT table_name = schema_name(o.schema_id) + '.' + o.name
, wt.wait_duration_ms, wt.wait_type, wt.blocking_session_id, wt.resource_description
, tm.resource_type, tm.request_status, tm.request_mode, tm.request_session_id
FROM sys.dm_tran_locks AS tm
INNER JOIN sys.dm_os_waiting_tasks as wt ON tm.lock_owner_address = wt.resource_address
LEFT OUTER JOIN sys.partitions AS p on p.hobt_id = tm.resource_associated_entity_id
LEFT OUTER JOIN sys.objects o on o.object_id = p.object_id or tm.resource_associated_entity_id = o.object_id
WHERE resource_database_id = DB_ID()
AND object_name(p.object_id) = '<table_name>';

With DMVs, storing the query results over time provides data points that allow you to review blocking
over a specified time interval and identify persistent blocking or trends.
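The following is a minimal sketch of this approach. The table name dbo.BlockingSnapshot is illustrative; schedule the INSERT from whatever scheduling mechanism your environment uses.

CREATE TABLE dbo.BlockingSnapshot (
    capture_time datetime2 NOT NULL DEFAULT sysdatetime(),
    session_id int,
    blocking_session_id int,
    wait_type nvarchar(60),
    wait_time_ms int,
    batch_text nvarchar(max)
);
GO

-- Run periodically to record a point-in-time view of blocked requests
INSERT INTO dbo.BlockingSnapshot (session_id, blocking_session_id, wait_type, wait_time_ms, batch_text)
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id > 0;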

Gather information from Extended Events


In addition to the previous information, it is often necessary to capture a trace of the activities on the server to
thoroughly investigate a blocking problem on Azure SQL Database. For example, if a session executes multiple
statements within a transaction, only the last statement that was submitted will be represented. However, one of
the earlier statements may be the reason locks are still being held. A trace will enable you to see all the
commands executed by a session within the current transaction.
There are two ways to capture traces in SQL Server: Extended Events (XEvents) and Profiler traces. However, SQL
Server Profiler is a deprecated trace technology that is not supported for Azure SQL Database. Extended Events is the
newer tracing technology that offers more versatility and less impact on the observed system, and its interface
is integrated into SQL Server Management Studio (SSMS).
Refer to the document that explains how to use the Extended Events New Session Wizard in SSMS. For Azure
SQL Database, however, SSMS provides an Extended Events subfolder under each database in Object Explorer.
Use the Extended Events session wizard to capture these useful events (a minimal Transact-SQL sketch of an
equivalent session follows the list):
Category Errors:
Attention
Error_reported
Execution_warning
Category Warnings:
Missing_join_predicate
Category Execution:
Rpc_completed
Rpc_starting
Sql_batch_completed
Sql_batch_starting
Category deadlock_monitor
database_xml_deadlock_report
Category session
Existing_connection
Login
Logout
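
As an alternative to the wizard, the following Transact-SQL sketch creates a database-scoped session that captures a subset of these events to the ring buffer target. The session name blocking_trace is illustrative; adjust the event list and target to your needs, and verify event availability in sys.dm_xe_objects.

CREATE EVENT SESSION [blocking_trace] ON DATABASE
ADD EVENT sqlserver.attention,
ADD EVENT sqlserver.error_reported,
ADD EVENT sqlserver.rpc_completed,
ADD EVENT sqlserver.sql_batch_completed,
ADD EVENT sqlserver.database_xml_deadlock_report
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [blocking_trace] ON DATABASE
STATE = START;
GO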

NOTE
For detailed information on deadlocks, see Analyze and prevent deadlocks in Azure SQL Database.

Identify and resolve common blocking scenarios


By examining the previous information, you can determine the cause of most blocking problems. The rest of this
article is a discussion of how to use this information to identify and resolve some common blocking scenarios.
This discussion assumes you have used the blocking scripts (referenced earlier) to capture information on the
blocking SPIDs and have captured application activity using an XEvent session.

Analyze blocking data


Examine the output of the DMVs sys.dm_exec_requests and sys.dm_exec_sessions to determine the heads
of the blocking chains, using blocking_these and session_id . This will most clearly identify which
requests are blocked and which are blocking. Look further into the sessions that are blocked and
blocking. Is there a common root to the blocking chain? They likely share a common table, and one or
more of the sessions involved in the blocking chain is performing a write operation.
Examine the output of the DMVs sys.dm_exec_requests and sys.dm_exec_sessions for information on the
SPIDs at the head of the blocking chain. Look for the following fields:
sys.dm_exec_requests.status
This column shows the status of a particular request. Typically, a sleeping status indicates that the SPID
has completed execution and is waiting for the application to submit another query or batch.
A runnable or running status indicates that the SPID is currently processing a query. The following table
gives brief explanations of the various status values.

STATUS       MEANING
Background   The SPID is running a background task, such as deadlock detection, log writer, or checkpoint.
Sleeping     The SPID is not currently executing. This usually indicates that the SPID is awaiting a command from the application.
Running      The SPID is currently running on a scheduler.
Runnable     The SPID is in the runnable queue of a scheduler and waiting to get scheduler time.
Suspended    The SPID is waiting for a resource, such as a lock or a latch.

sys.dm_exec_sessions.open_transaction_count
This field tells you the number of open transactions in this session. If this value is greater than 0,
the SPID is within an open transaction and may be holding locks acquired by any statement within
the transaction.
sys.dm_exec_requests.open_transaction_count
Similarly, this field tells you the number of open transactions in this request. If this value is greater
than 0, the SPID is within an open transaction and may be holding locks acquired by any statement
within the transaction.
sys.dm_exec_requests.wait_type , wait_time , and last_wait_type
If the sys.dm_exec_requests.wait_type is NULL, the request is not currently waiting for anything and
the last_wait_type value indicates the last wait_type that the request encountered. For more
information about sys.dm_os_wait_stats and a description of the most common wait types, see
sys.dm_os_wait_stats. The wait_time value can be used to determine if the request is making
progress. When a query against the sys.dm_exec_requests table returns a value in the wait_time
column that is less than the wait_time value from a previous query of sys.dm_exec_requests , this
indicates that the prior lock was acquired and released and is now waiting on a new lock
(assuming non-zero wait_time ). This can be verified by comparing the wait_resource between
sys.dm_exec_requests output, which displays the resource for which the request is waiting.

sys.dm_exec_requests.wait_resource
This field indicates the resource that a blocked request is waiting on. The following table lists
common wait_resource formats and their meaning:

Table
  Format: DatabaseID:ObjectID:IndexID
  Example: TAB: 5:261575970:1
  Explanation: In this case, database ID 5 is the pubs sample database, object ID 261575970 is the titles table, and 1 is the clustered index.

Page
  Format: DatabaseID:FileID:PageID
  Example: PAGE: 5:1:104
  Explanation: In this case, database ID 5 is pubs, file ID 1 is the primary data file, and page 104 is a page belonging to the titles table. To identify the object_id the page belongs to, use the dynamic management function sys.dm_db_page_info, passing in the DatabaseID, FileId, and PageId from the wait_resource .

Key
  Format: DatabaseID:Hobt_id (Hash value for index key)
  Example: KEY: 5:72057594044284928 (3300a4f361aa)
  Explanation: In this case, database ID 5 is pubs, and Hobt_ID 72057594044284928 corresponds to index_id 2 for object_id 261575970 (the titles table). Use the sys.partitions catalog view to associate the hobt_id to a particular index_id and object_id . There is no way to unhash the index key hash to a specific key value.

Row
  Format: DatabaseID:FileID:PageID:Slot(row)
  Example: RID: 5:1:104:3
  Explanation: In this case, database ID 5 is pubs, file ID 1 is the primary data file, page 104 is a page belonging to the titles table, and slot 3 indicates the row's position on the page.

Compile
  Format: DatabaseID:FileID:PageID:Slot(row)
  Example: RID: 5:1:104:3
  Explanation: In this case, database ID 5 is pubs, file ID 1 is the primary data file, page 104 is a page belonging to the titles table, and slot 3 indicates the row's position on the page.

sys.dm_tran_active_transactions
The sys.dm_tran_active_transactions DMV contains data about open transactions that can be joined
to other DMVs for a complete picture of transactions awaiting commit
or rollback. Use the following query to return information on open transactions, joined to other DMVs
including sys.dm_tran_session_transactions. Consider a transaction's current state,
transaction_begin_time , and other situational data to evaluate whether it could be a source of
blocking.
SELECT tst.session_id, [database_name] = db_name(s.database_id)
, tat.transaction_begin_time
, transaction_duration_s = datediff(s, tat.transaction_begin_time, sysdatetime())
, transaction_type = CASE tat.transaction_type WHEN 1 THEN 'Read/write transaction'
WHEN 2 THEN 'Read-only transaction'
WHEN 3 THEN 'System transaction'
WHEN 4 THEN 'Distributed transaction' END
, input_buffer = ib.event_info, tat.transaction_uow
, transaction_state = CASE tat.transaction_state
WHEN 0 THEN 'The transaction has not been completely initialized yet.'
WHEN 1 THEN 'The transaction has been initialized but has not started.'
WHEN 2 THEN 'The transaction is active - has not been committed or rolled back.'
WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
WHEN 4 THEN 'The commit process has been initiated on the distributed transaction.'
WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
WHEN 6 THEN 'The transaction has been committed.'
WHEN 7 THEN 'The transaction is being rolled back.'
WHEN 8 THEN 'The transaction has been rolled back.' END
, transaction_name = tat.name, request_status = r.status
, azure_dtc_state = CASE tat.dtc_state
WHEN 1 THEN 'ACTIVE'
WHEN 2 THEN 'PREPARED'
WHEN 3 THEN 'COMMITTED'
WHEN 4 THEN 'ABORTED'
WHEN 5 THEN 'RECOVERED' END
, tst.is_user_transaction, tst.is_local
, session_open_transaction_count = tst.open_transaction_count
, s.host_name, s.program_name, s.client_interface_name, s.login_name, s.is_user_process
FROM sys.dm_tran_active_transactions tat
INNER JOIN sys.dm_tran_session_transactions tst on tat.transaction_id = tst.transaction_id
INNER JOIN sys.dm_exec_sessions s on s.session_id = tst.session_id
LEFT OUTER JOIN sys.dm_exec_requests r on r.session_id = s.session_id
CROSS APPLY sys.dm_exec_input_buffer(s.session_id, null) AS ib;

Other columns
The remaining columns in sys.dm_exec_sessions and sys.dm_exec_requests can provide insight into
the root of a problem as well. Their usefulness varies depending on the circumstances of the
problem. For example, you can determine whether the problem happens only from certain clients
(hostname), on certain network libraries (net_library), when the last batch was submitted by a SPID
( last_request_start_time in sys.dm_exec_sessions ), how long a request has been running
( start_time in sys.dm_exec_requests ), and so on.
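For example, the following sketch returns client-related columns for sessions that are currently blocking other sessions:

SELECT s.session_id, s.host_name, s.program_name, s.client_interface_name,
       s.login_name, s.last_request_start_time, r.start_time, r.status
FROM sys.dm_exec_sessions AS s
LEFT OUTER JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
WHERE s.session_id IN (
    SELECT blocking_session_id FROM sys.dm_exec_requests WHERE blocking_session_id <> 0
);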

Common blocking scenarios


The table below maps common symptoms to their probable causes.
The Wait type, Open_Tran, and Status columns refer to information returned by sys.dm_exec_requests; other
columns may be returned by sys.dm_exec_sessions. The "Resolves?" column indicates whether or not the
blocking will resolve on its own, or whether the session should be killed via the KILL command. For more
information, see KILL (Transact-SQL).

Scenario 1
  Wait type: NOT NULL
  Open_Tran: >= 0
  Status: runnable
  Resolves? Yes, when the query finishes.
  Other symptoms: In sys.dm_exec_sessions, the reads, cpu_time, and/or memory_usage columns will increase over time. Duration for the query will be high when completed.

Scenario 2
  Wait type: NULL
  Open_Tran: > 0
  Status: sleeping
  Resolves? No, but the SPID can be killed.
  Other symptoms: An attention signal may be seen in the Extended Event session for this SPID, indicating a query time-out or cancel has occurred.

Scenario 3
  Wait type: NULL
  Open_Tran: >= 0
  Status: runnable
  Resolves? No. Will not resolve until the client fetches all rows or closes the connection. The SPID can be killed, but it may take up to 30 seconds.
  Other symptoms: If open_transaction_count = 0, and the SPID holds locks while the transaction isolation level is default (READ COMMITTED), this is a likely cause.

Scenario 4
  Wait type: Varies
  Open_Tran: >= 0
  Status: runnable
  Resolves? No. Will not resolve until the client cancels queries or closes connections. SPIDs can be killed, but it may take up to 30 seconds.
  Other symptoms: The hostname column in sys.dm_exec_sessions for the SPID at the head of a blocking chain will be the same as one of the SPIDs it is blocking.

Scenario 5
  Wait type: NULL
  Open_Tran: > 0
  Status: rollback
  Resolves? Yes.
  Other symptoms: An attention signal may be seen in the Extended Events session for this SPID, indicating a query time-out or cancel has occurred, or simply a rollback statement has been issued.

Scenario 6
  Wait type: NULL
  Open_Tran: > 0
  Status: sleeping
  Resolves? Eventually. When Windows NT determines the session is no longer active, the Azure SQL Database connection will be broken.
  Other symptoms: The last_request_start_time value in sys.dm_exec_sessions is much earlier than the current time.

Detailed blocking scenarios


1. Blocking caused by a normally running query with a long execution time
Resolution : The solution to this type of blocking problem is to look for ways to optimize the query.
This class of blocking problem may simply be a performance problem, and may require you to pursue it
as such. For information on troubleshooting a specific slow-running query, see How to troubleshoot
slow-running queries on SQL Server. For more information, see Monitor and Tune for Performance.
Reports from the Query Store in SSMS are also a highly recommended and valuable tool for identifying
the most costly queries and suboptimal execution plans. Also review the Intelligent Performance section of
the Azure portal for the Azure SQL database, including Query Performance Insight.
If the query performs only SELECT operations, consider running the statement under snapshot isolation if
it is enabled in your database, especially if RCSI has been disabled. Just as when RCSI is enabled, queries
reading data do not require shared (S) locks under the snapshot isolation level. Additionally, snapshot
isolation provides transaction-level consistency for all statements in an explicit multi-statement
transaction. Snapshot isolation may already be enabled in your database. Snapshot isolation may also be
used with queries performing modifications, but you must handle update conflicts.
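For example, assuming snapshot isolation is enabled for the database, a read-only batch can opt in with SET TRANSACTION ISOLATION LEVEL; the table name dbo.SalesOrderHistory in this sketch is illustrative.

-- Verify that snapshot isolation is enabled for the current database
SELECT name, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Run the read-only query under snapshot isolation so it takes no shared (S) locks
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT COUNT(*) FROM dbo.SalesOrderHistory WHERE OrderDate >= '20220101';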
If you have a long-running query that is blocking other users and cannot be optimized, consider moving
it from an OLTP environment to a dedicated reporting system, a synchronous read-only replica of the
database.
2. Blocking caused by a sleeping SPID that has an uncommitted transaction
This type of blocking can often be identified by a SPID that is sleeping or awaiting a command, yet whose
transaction nesting level ( @@TRANCOUNT , open_transaction_count from sys.dm_exec_requests ) is greater
than zero. This can occur if the application experiences a query time-out, or issues a cancel without also
issuing the required number of ROLLBACK and/or COMMIT statements. When a SPID receives a query
time-out or a cancel, it will terminate the current query and batch, but does not automatically roll back or
commit the transaction. The application is responsible for this, as Azure SQL Database cannot assume
that an entire transaction must be rolled back due to a single query being canceled. The query time-out
or cancel will appear as an ATTENTION signal event for the SPID in the Extended Event session.
To demonstrate an uncommitted explicit transaction, issue the following query:

CREATE TABLE #test (col1 INT);
INSERT INTO #test SELECT 1;
BEGIN TRAN
UPDATE #test SET col1 = 2 where col1 = 1;

Then, execute this query in the same window:


SELECT @@TRANCOUNT;
ROLLBACK TRAN
DROP TABLE #test;

The output of the second query indicates that the transaction nesting level is one. All the locks acquired in
the transaction are still held until the transaction is committed or rolled back. If applications
explicitly open and commit transactions, a communication or other error could leave the session and its
transaction in an open state.
Use the script earlier in this article based on sys.dm_tran_active_transactions to identify currently
uncommitted transactions across the instance.
Resolutions :
Additionally, this class of blocking problem may also be a performance problem, and may require you
to pursue it as such. If the query execution time can be diminished, the query time-out or cancel
would not occur. It is important that the application is able to handle the time-out or cancel
scenarios should they arise, but you may also benefit from examining the performance of the
query.
Applications must properly manage transaction nesting levels, or they may cause a blocking
problem following the cancellation of the query in this manner. Consider the following:
In the error handler of the client application, execute IF @@TRANCOUNT > 0 ROLLBACK TRAN
following any error, even if the client application does not believe a transaction is open.
Checking for open transactions is required, because a stored procedure called during the batch
could have started a transaction without the client application's knowledge. Certain conditions,
such as canceling the query, prevent the procedure from executing past the current statement,
so even if the procedure has logic to check IF @@ERROR <> 0 and abort the transaction, this
rollback code will not be executed in such cases.
If connection pooling is being used in an application that opens the connection and runs a
small number of queries before releasing the connection back to the pool, such as a Web-based
application, temporarily disabling connection pooling may help alleviate the problem until the
client application is modified to handle the errors appropriately. By disabling connection
pooling, releasing the connection will cause a physical disconnect of the Azure SQL Database
connection, resulting in the server rolling back any open transactions.
Use SET XACT_ABORT ON for the connection, or in any stored procedures that begin transactions
and are not cleaning up following an error. In the event of a run-time error, this setting will
abort any open transactions and return control to the client. For more information, review SET
XACT_ABORT (Transact-SQL).

NOTE
The connection is not reset until it is reused from the connection pool, so it is possible that a user could open a
transaction and then release the connection to the connection pool, but it might not be reused for several
seconds, during which time the transaction would remain open. If the connection is not reused, the transaction
will be aborted when the connection times out and is removed from the connection pool. Thus, it is optimal for
the client application to abort transactions in their error handler or use SET XACT_ABORT ON to avoid this
potential delay.

Caution

After SET XACT_ABORT ON is set, any T-SQL statements following a statement that causes an error will not be
executed. This could affect the intended flow of existing code.
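As a sketch of the resolution above, the following illustrative procedure sets XACT_ABORT so that a run-time error or client time-out rolls back the entire transaction instead of leaving it open. The procedure and table names are hypothetical.

CREATE OR ALTER PROCEDURE dbo.usp_TransferExample
AS
BEGIN
    SET XACT_ABORT ON;   -- a run-time error or time-out aborts and rolls back the open transaction
    SET NOCOUNT ON;

    BEGIN TRAN;
        UPDATE dbo.AccountA SET Balance = Balance - 100 WHERE AccountId = 1;
        UPDATE dbo.AccountB SET Balance = Balance + 100 WHERE AccountId = 1;
    COMMIT TRAN;
END;
GO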
3. Blocking caused by a SPID whose corresponding client application did not fetch all result rows to
completion
After sending a query to the server, all applications must immediately fetch all result rows to completion.
If an application does not fetch all result rows, locks can be left on the tables, blocking other users. If you
are using an application that transparently submits SQL statements to the server, the application must
fetch all result rows. If it does not (and if it cannot be configured to do so), you may be unable to resolve
the blocking problem. To avoid the problem, you can restrict poorly behaved applications to a reporting
or a decision-support database, separate from the main OLTP database.
The impact of this scenario is reduced when read committed snapshot is enabled on the database, which
is the default configuration in Azure SQL Database. Learn more in the Understand blocking section of this
article.

NOTE
See guidance for retry logic for applications connecting to Azure SQL Database.

Resolution : The application must be rewritten to fetch all rows of the result to completion. This does not
rule out the use of OFFSET and FETCH in the ORDER BY clause of a query to perform server-side paging.
4. Blocking caused by a session in a rollback state
A data modification query that is KILLed, or canceled outside of a user-defined transaction, will be rolled
back. This can also occur as a side effect of the client network session disconnecting, or when a request is
selected as the deadlock victim. This can often be identified by observing the output of
sys.dm_exec_requests , which may indicate the ROLLBACK command, and the percent_complete column
may show progress.
Thanks to the Accelerated Database Recovery feature introduced in 2019, lengthy rollbacks should be
rare.
Resolution : Wait for the SPID to finish rolling back the changes that were made.
To avoid this situation, do not perform large batch write operations or index creation or maintenance
operations during busy hours on OLTP systems. If possible, perform such operations during periods of
low activity.
5. Blocking caused by an orphaned connection
If the client application traps errors or the client workstation is restarted, the network session to the
server may not be immediately canceled under some conditions. From the Azure SQL Database
perspective, the client still appears to be present, and any locks acquired may still be retained. For more
information, see How to troubleshoot orphaned connections in SQL Server.
Resolution : If the client application has disconnected without appropriately cleaning up its resources,
you can terminate the SPID by using the KILL command. The KILL command takes the SPID value as
input. For example, to kill SPID 99, issue the following command:

KILL 99

See also
Analyze and prevent deadlocks in Azure SQL Database
Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance
Monitoring performance by using the Query Store
Transaction Locking and Row Versioning Guide
SET TRANSACTION ISOLATION LEVEL
Quickstart: Extended events in SQL Server
Intelligent Insights using AI to monitor and troubleshoot database performance

Next steps
Azure SQL Database: Improving Performance Tuning with Automatic Tuning
Deliver consistent performance with Azure SQL
Troubleshooting connectivity issues and other errors with Azure SQL Database and Azure SQL Managed
Instance
Transient Fault Handling
Configure the max degree of parallelism (MAXDOP) in Azure SQL Database
Diagnose and troubleshoot high CPU on Azure SQL Database
Analyze and prevent deadlocks in Azure SQL Database

APPLIES TO: Azure SQL Database


This article teaches you how to identify deadlocks in Azure SQL Database, use deadlock graphs and Query Store
to identify the queries in the deadlock, and plan and test changes to prevent deadlocks from reoccurring.
This article focuses on identifying and analyzing deadlocks due to lock contention. Learn more about other types
of deadlocks in resources that can deadlock.

How deadlocks occur in Azure SQL Database


Each new database in Azure SQL Database has the read committed snapshot (RCSI) database setting enabled by
default. Blocking between sessions reading data and sessions writing data is minimized under RCSI, which uses
row versioning to increase concurrency. However, blocking and deadlocks may still occur in databases in Azure
SQL Database because:
Queries that modify data may block one another.
Queries may run under isolation levels that increase blocking. Isolation levels may be specified via client
library methods, query hints, or SET statements in Transact-SQL.
RCSI may be disabled, causing the database to use shared (S) locks to protect SELECT statements run under
the read committed isolation level. This may increase blocking and deadlocks.
An example deadlock
A deadlock occurs when two or more tasks permanently block one another because each task has a lock on a
resource the other task is trying to lock. A deadlock is also called a cyclic dependency: in the case of a two-task
deadlock, transaction A has a dependency on transaction B, and transaction B closes the circle by having a
dependency on transaction A.
For example:
1. Session A begins an explicit transaction and runs an update statement that acquires an update (U) lock on
one row on table SalesLT.Product that is converted to an exclusive (X) lock.
2. Session B runs an update statement that modifies the SalesLT.ProductDescription table. The update
statement joins to the SalesLT.Product table to find the correct rows to update.
Session B acquires an update (U) lock on 72 rows on the SalesLT.ProductDescription table.
Session B needs a shared lock on rows on the table SalesLT.Product , including the row that is locked
by Session A . Session B is blocked on SalesLT.Product .
3. Session A continues its transaction, and now runs an update against the SalesLT.ProductDescription table.
Session A is blocked by Session B on SalesLT.ProductDescription .
All transactions in a deadlock will wait indefinitely unless one of the participating transactions is rolled back, for
example, because its session was terminated.
The database engine deadlock monitor periodically checks for tasks that are in a deadlock. If the deadlock
monitor detects a cyclic dependency, it chooses one of the tasks as a victim and terminates its transaction with
error 1205, "Transaction (Process ID N) was deadlocked on lock resources with another process and has been
chosen as the deadlock victim. Rerun the transaction." Breaking the deadlock in this way allows the other task or
tasks in the deadlock to complete their transactions.

NOTE
Learn more about the criteria for choosing a deadlock victim in the Deadlock process list section of this article.

The application with the transaction chosen as the deadlock victim should retry the transaction, which usually
completes after the other transaction or transactions involved in the deadlock have finished.
It is a best practice to introduce a short, randomized delay before retry to avoid encountering the same deadlock
again. Learn more about how to design retry logic for transient errors.
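The following Transact-SQL sketch illustrates this pattern for a single statement; the table and column names are hypothetical, and most applications implement equivalent retry logic in client code rather than in Transact-SQL.

DECLARE @retry int = 3, @delay char(12);

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE dbo.SomeTable SET SomeColumn = SomeColumn + 1 WHERE SomeKey = 1;
        COMMIT TRAN;
        SET @retry = 0;  -- success, exit the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN;
        IF ERROR_NUMBER() = 1205 AND @retry > 1
        BEGIN
            SET @retry -= 1;
            -- short, randomized delay (100-999 ms) before retrying
            SET @delay = '00:00:00.' + RIGHT('00' + CAST(CAST(RAND() * 900 + 100 AS int) AS varchar(3)), 3);
            WAITFOR DELAY @delay;
        END
        ELSE
        BEGIN
            SET @retry = 0;
            THROW;  -- rethrow non-deadlock errors, or the final deadlock failure
        END
    END CATCH
END;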
Default isolation level in Azure SQL Database
New databases in Azure SQL Database enable read committed snapshot (RCSI) by default. RCSI changes the
behavior of the read committed isolation level to use row-versioning to provide statement-level consistency
without the use of shared (S) locks for SELECT statements.
With RCSI enabled:
Statements reading data do not block statements modifying data.
Statements modifying data do not block statements reading data.
Snapshot isolation level is also enabled by default for new databases in Azure SQL Database. Snapshot isolation
is an additional row-based isolation level that provides transaction-level consistency for data and which uses
row versions to select rows to update. To use snapshot isolation, queries or connections must explicitly set their
transaction isolation level to SNAPSHOT . This may only be done when snapshot isolation is enabled for the
database.
You can identify if RCSI and/or snapshot isolation are enabled with Transact-SQL. Connect to your database in
Azure SQL Database and run the following query:

SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = DB_NAME();
GO

If RCSI is enabled, the is_read_committed_snapshot_on column will return the value 1 . If snapshot isolation is
enabled, the snapshot_isolation_state_desc column will return the value ON .
If RCSI has been disabled for a database in Azure SQL Database, investigate why RCSI was disabled before re-
enabling it. Application code may have been written expecting that queries reading data will be blocked by
queries writing data, resulting in incorrect results from race conditions when RCSI is enabled.
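Once you have confirmed that the application tolerates row versioning, a minimal sketch for re-enabling RCSI is:

-- Requires exclusive access to the database; WITH ROLLBACK IMMEDIATE
-- terminates other sessions so the change can complete.
ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;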
Interpreting deadlock events
A deadlock event is emitted after the deadlock manager in Azure SQL Database detects a deadlock and selects a
transaction as the victim. In other words, if you set up alerts for deadlocks, the notification fires after an
individual deadlock has been resolved. There is no user action that needs to be taken for that deadlock.
Applications should be written to include retry logic so that they automatically continue after receiving error
1205, "Transaction (Process ID N) was deadlocked on lock resources with another process and has been chosen
as the deadlock victim. Rerun the transaction."
It's useful to set up alerts, however, as deadlocks may reoccur. Deadlock alerts enable you to investigate if a
pattern of repeat deadlocks is happening in your database, in which case you may choose to take action to
prevent deadlocks from reoccurring. Learn more about alerting in the Monitor and alert on deadlocks section of
this article.
Top methods to prevent deadlocks
The lowest risk approach to preventing deadlocks from reoccurring is generally to tune nonclustered indexes to
optimize queries involved in the deadlock.
Risk is low for this approach because tuning nonclustered indexes doesn't require changes to the query code
itself, reducing the risk of a user error when rewriting Transact-SQL that causes incorrect data to be returned
to the user.
Effective nonclustered index tuning helps queries find the data to read and modify more efficiently. By
reducing the amount of data that a query needs to access, the likelihood of blocking is reduced and
deadlocks can often be prevented.
In some cases, creating or tuning a clustered index can reduce blocking and deadlocks. Because the clustered
index is included in all nonclustered index definitions, creating or modifying a clustered index can be an IO
intensive and time consuming operation on larger tables with existing nonclustered indexes. Learn more about
Clustered index design guidelines.
When index tuning isn't successful at preventing deadlocks, other methods are available:
If the deadlock occurs only when a particular plan is chosen for one of the queries involved in the deadlock,
forcing a query plan with Query Store may prevent deadlocks from reoccurring.
Rewriting Transact-SQL for one or more transactions involved in the deadlock can also help prevent
deadlocks. Breaking apart explicit transactions into smaller transactions requires careful coding and testing
to ensure data validity when concurrent modifications occur.
Learn more about each of these approaches in the Prevent a deadlock from reoccurring section of this article.
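As a brief illustration of the plan forcing option mentioned above (a sketch; the query_id and plan_id values are placeholders you would first look up in the sys.query_store_query and sys.query_store_plan views):

EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;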

Monitor and alert on deadlocks


In this article, we will use the AdventureWorksLT sample database to set up alerts for deadlocks, cause an
example deadlock, analyze the deadlock graph for the example deadlock, and test changes to prevent the
deadlock from reoccurring.
We'll use the SQL Server Management Studio (SSMS) client in this article, as it contains functionality to display
deadlock graphs in an interactive visual mode. You can use other clients such as Azure Data Studio to follow
along with the examples, but you may only be able to view deadlock graphs as XML.
Create the AdventureWorksLT database
To follow along with the examples, create a new database in Azure SQL Database and select Sample data as the
Data source .
For detailed instructions on how to create AdventureWorksLT with the Azure portal, Azure CLI, or PowerShell,
select the approach of your choice in Quickstart: Create an Azure SQL Database single database.
Set up deadlock alerts in the Azure portal
To set up alerts for deadlock events, follow the steps in the article Create alerts for Azure SQL Database and
Azure Synapse Analytics using the Azure portal.
Select Deadlocks as the signal name for the alert. Configure the Action group to notify you using the method
of your choice, such as the Email/SMS/Push/Voice action type.

Collect deadlock graphs in Azure SQL Database with Extended Events


Deadlock graphs are a rich source of information regarding the processes and locks involved in a deadlock. To
collect deadlock graphs with Extended Events (XEvents) in Azure SQL Database, capture the
sqlserver.database_xml_deadlock_report event.

You can collect deadlock graphs with XEvents using either the ring buffer target or an event file target.
Considerations for selecting the appropriate target type are summarized in the following table:

Ring buffer target
  Benefits: Simple setup with Transact-SQL only.
  Considerations: Event data is cleared when the XEvents session is stopped for any reason, such as taking the database offline or a database failover. Database resources are used to maintain data in the ring buffer and to query session data.
  Usage scenarios: Collect sample trace data for testing and learning. Create for short-term needs if you cannot set up a session using an event file target immediately. Use as a "landing pad" for trace data, when you have set up an automated process to persist trace data into a table.

Event file target
  Benefits: Persists event data to a blob in Azure Storage so data is available even after the session is stopped. Event files may be downloaded from the Azure portal or Azure Storage Explorer and analyzed locally, which does not require using database resources to query session data.
  Considerations: Setup is more complex and requires configuration of an Azure Storage container and database scoped credential.
  Usage scenarios: General use when you want event data to persist even after the event session stops. You want to run a trace that generates larger amounts of event data than you would like to persist in memory.

Select the target type you would like to use:

Ring buffer target


Event file target

The ring buffer target is convenient and easy to set up, but has a limited capacity, which can cause older events
to be lost. The ring buffer does not persist events to storage and the ring buffer target is cleared when the
XEvents session is stopped. This means that any XEvents collected will not be available when the database
engine restarts for any reason, such as a failover. The ring buffer target is best suited to learning and short-term
needs if you do not have the ability to set up an XEvents session to an event file target immediately.
This sample code creates an XEvents session that captures deadlock graphs in memory using the ring buffer
target. The maximum memory allowed for the ring buffer target is 4 MB, and the session will automatically run
when the database comes online, such as after a failover.
To create and then start a XEvents session for the sqlserver.database_xml_deadlock_report event that writes to
the ring buffer target, connect to your database and run the following Transact-SQL:
CREATE EVENT SESSION [deadlocks] ON DATABASE
ADD EVENT sqlserver.database_xml_deadlock_report
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE=ON, MAX_MEMORY=4 MB)
GO

ALTER EVENT SESSION [deadlocks] ON DATABASE
STATE = START;
GO

Cause a deadlock in AdventureWorksLT


NOTE
This example works in the AdventureWorksLT database with the default schema and data when RCSI has been enabled.
See Create the AdventureWorksLT database for instructions to create the database.

To cause a deadlock, you will need to connect two sessions to the AdventureWorksLT database. We'll refer to
these sessions as Session A and Session B .
In Session A , run the following Transact-SQL. This code begins an explicit transaction and runs a single
statement that updates the SalesLT.Product table. To do this, the transaction acquires an update (U) lock on one
row on table SalesLT.Product which is converted to an exclusive (X) lock. We leave the transaction open.

BEGIN TRAN

UPDATE SalesLT.Product SET SellEndDate = SellEndDate + 1
WHERE Color = 'Red';

Now, in Session B , run the following Transact-SQL. This code doesn't explicitly begin a transaction. Instead, it
operates in autocommit transaction mode. This statement updates the SalesLT.ProductDescription table. The
update will take out an update (U) lock on 72 rows on the SalesLT.ProductDescription table. The query joins to
other tables, including the SalesLT.Product table.

UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
pm.ProductModelID=p.ProductModelID
WHERE p.Color = 'Silver';

To complete this update, Session B needs a shared (S) lock on rows on the table SalesLT.Product , including the
row that is locked by Session A . Session B will be blocked on SalesLT.Product .
Return to Session A . Run the following Transact-SQL statement. This runs a second UPDATE statement as part
of the open transaction.
UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
pm.ProductModelID=p.ProductModelID
WHERE p.Color = 'Red';

The second update statement in Session A will be blocked by Session B on the SalesLT.ProductDescription table.
Session A and Session B are now mutually blocking one another. Neither transaction can proceed, as they
each need a resource that is locked by the other.
After a few seconds, the deadlock monitor will identify that the transactions in Session A and Session B are
mutually blocking one another, and that neither can make progress. You should see a deadlock occur, with
Session A chosen as the deadlock victim. An error message will appear in Session A with text similar to the
following:

Msg 1205, Level 13, State 51, Line 7 Transaction (Process ID 91) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the transaction.

Session B will complete successfully.


If you set up deadlock alerts in the Azure portal, you should receive a notification shortly after the deadlock
occurs.

View deadlock graphs from an XEvents session


If you have set up an XEvents session to collect deadlocks and a deadlock has occurred after the session was
started, you can view an interactive graphic display of the deadlock graph as well as the XML for the deadlock
graph.
Different methods are available to obtain deadlock information for the ring buffer target and event file targets.
Select the target you used for your XEvents session:

Ring buffer target


Event file target

If you set up an XEvents session writing to the ring buffer, you can query deadlock information with the
following Transact-SQL. Before running the query, replace the value of @tracename with the name of your
xEvents session.
DECLARE @tracename sysname = N'deadlocks';

WITH ring_buffer AS (
SELECT CAST(target_data AS XML) as rb
FROM sys.dm_xe_database_sessions AS s
JOIN sys.dm_xe_database_session_targets AS t
ON CAST(t.event_session_address AS BINARY(8)) = CAST(s.address AS BINARY(8))
WHERE s.name = @tracename and
t.target_name = N'ring_buffer'
), dx AS (
SELECT
dxdr.evtdata.query('.') as deadlock_xml_deadlock_report
FROM ring_buffer
CROSS APPLY rb.nodes('/RingBufferTarget/event[@name=''database_xml_deadlock_report'']') AS dxdr(evtdata)
)
SELECT
d.query('/event/data[@name=''deadlock_cycle_id'']/value').value('(/value)[1]', 'int') AS
[deadlock_cycle_id],
d.value('(/event/@timestamp)[1]', 'DateTime2') AS [deadlock_timestamp],
d.query('/event/data[@name=''database_name'']/value').value('(/value)[1]', 'nvarchar(256)') AS
[database_name],
d.query('/event/data[@name=''xml_report'']/value/deadlock') AS deadlock_xml,
LTRIM(RTRIM(REPLACE(REPLACE(d.value('.', 'nvarchar(2000)'),CHAR(10),' '),CHAR(13),' '))) as query_text
FROM dx
CROSS APPLY deadlock_xml_deadlock_report.nodes('(/event/data/value/deadlock/process-list/process/inputbuf)')
AS ib(d)
ORDER BY [deadlock_timestamp] DESC;
GO

View and save a deadlock graph in XML


Viewing a deadlock graph in XML format allows you to copy the inputbuffer of Transact-SQL statements
involved in the deadlock. You may also prefer to analyze deadlocks in a text-based format.
If you have used a Transact-SQL query to return deadlock graph information, to view the deadlock graph XML,
select the value in the deadlock_xml column from any row to open the deadlock graph's XML in a new window
in SSMS.
The XML for this example deadlock graph is:

<deadlock>
<victim-list>
<victimProcess id="process24756e75088" />
</victim-list>
<process-list>
<process id="process24756e75088" taskpriority="0" logused="6528" waitresource="KEY: 8:72057594045202432
(98ec012aa510)" waittime="192" ownerId="1011123" transactionname="user_transaction" lasttranstarted="2022-
03-08T15:44:43.490" XDES="0x2475c980428" lockMode="U" schedulerid="3" kpid="30192" status="suspended"
spid="89" sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2022-03-08T15:44:49.250"
lastbatchcompleted="2022-03-08T15:44:49.210" lastattention="1900-01-01T00:00:00.210" clientapp="Microsoft
SQL Server Management Studio - Query" hostname="LAPTOP-CHRISQ" hostpid="16716" loginname="chrisqpublic"
isolationlevel="read committed (2)" xactid="1011123" currentdb="8" currentdbname="AdventureWorksLT"
lockTimeout="4294967295" clientoption1="671096864" clientoption2="128056">
<executionStack>
<frame procname="unknown" queryhash="0xef52b103e8b9b8ca" queryplanhash="0x02b0f58d7730f798" line="1"
stmtstart="2" stmtend="792"
sqlhandle="0x02000000c58b8f1e24e8f104a930776e21254b1771f92a520000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
pm.ProductModelID=p.ProductModelID
WHERE p.Color = 'Red' </inputbuf>
</process>
<process id="process2476d07d088" taskpriority="0" logused="11360" waitresource="KEY: 8:72057594045267968
(39e18040972e)" waittime="2641" ownerId="1013536" transactionname="UPDATE" lasttranstarted="2022-03-
08T15:44:46.807" XDES="0x2475ca80428" lockMode="S" schedulerid="2" kpid="94040" status="suspended" spid="95"
sbid="0" ecid="0" priority="0" trancount="2" lastbatchstarted="2022-03-08T15:44:46.807"
lastbatchcompleted="2022-03-08T15:44:46.760" lastattention="1900-01-01T00:00:00.760" clientapp="Microsoft
SQL Server Management Studio - Query" hostname="LAPTOP-CHRISQ" hostpid="16716" loginname="chrisqpublic"
isolationlevel="read committed (2)" xactid="1013536" currentdb="8" currentdbname="AdventureWorksLT"
lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
<executionStack>
<frame procname="unknown" queryhash="0xef52b103e8b9b8ca" queryplanhash="0x02b0f58d7730f798" line="1"
stmtstart="2" stmtend="798"
sqlhandle="0x020000002c85bb06327c0852c0be840fc1e30efce2b7c8090000000000000000000000000000000000000000">
unknown </frame>
</executionStack>
<inputbuf>
UPDATE SalesLT.ProductDescription SET Description = Description
FROM SalesLT.ProductDescription as pd
JOIN SalesLT.ProductModelProductDescription as pmpd on
pd.ProductDescriptionID = pmpd.ProductDescriptionID
JOIN SalesLT.ProductModel as pm on
pmpd.ProductModelID = pm.ProductModelID
JOIN SalesLT.Product as p on
pm.ProductModelID=p.ProductModelID
WHERE p.Color = 'Silver'; </inputbuf>
</process>
</process-list>
<resource-list>
<keylock hobtid="72057594045202432" dbid="8" objectname="9e011567-2446-4213-9617-
bad2624ccc30.SalesLT.ProductDescription" indexname="PK_ProductDescription_ProductDescriptionID"
id="lock2474df12080" mode="U" associatedObjectId="72057594045202432">
<owner-list>
<owner id="process2476d07d088" mode="U" />
</owner-list>
<waiter-list>
<waiter id="process24756e75088" mode="U" requestType="wait" />
</waiter-list>
</keylock>
<keylock hobtid="72057594045267968" dbid="8" objectname="9e011567-2446-4213-9617-
bad2624ccc30.SalesLT.Product" indexname="PK_Product_ProductID" id="lock2474b588580" mode="X"
associatedObjectId="72057594045267968">
<owner-list>
<owner id="process24756e75088" mode="X" />
</owner-list>
<waiter-list>
<waiter id="process2476d07d088" mode="S" requestType="wait" />
</waiter-list>
</keylock>
</resource-list>
</deadlock>

To save the deadlock graph as an XML file:


1. Select File and Save As....
2. Leave the Save as type value as the default XML Files (*.xml)
3. Set the File name to the name of your choice.
4. Select Save .
Save a deadlock graph as an XDL file that can be displayed interactively in SSMS
Viewing an interactive representation of a deadlock graph can be useful to get a quick overview of the processes
and resources involved in a deadlock, and quickly identifying the deadlock victim.
To save a deadlock graph as a file that can be graphically displayed by SSMS:
1. Select the value in the deadlock_xml column from any row to open the deadlock graph's XML in a new
window in SSMS.
2. Select File and Save As....
3. Set Save as type to All Files .
4. Set the File name to the name of your choice, with the extension set to .xdl .
5. Select Save .

6. Close the file by selecting the X on the tab at the top of the window, or by selecting File , then Close .
7. Reopen the file in SSMS by selecting File , then Open , then File . Select the file you saved with the .xdl
extension.
The deadlock graph will now display in SSMS with a visual representation of the processes and resources
involved in the deadlock.

Analyze a deadlock for Azure SQL Database


A deadlock graph typically has three nodes:
Victim-list . The deadlock victim process identifier.
Process-list . Information on all the processes involved in the deadlock. Deadlock graphs use the term
'process' to represent a session running a transaction.
Resource-list . Information about the resources involved in the deadlock.
When analyzing a deadlock, it is useful to step through these nodes.
Deadlock victim list
The deadlock victim list shows the process that was chosen as the deadlock victim. In the visual representation
of a deadlock graph, processes are represented by ovals. The deadlock victim process has an "X" drawn over the
oval.

In the XML view of a deadlock graph, the victim-list node gives an ID for the process that was the victim of
the deadlock.
In our example deadlock, the victim process ID is process24756e75088 . We can use this ID when examining
the process-list and resource-list nodes to learn more about the victim process and the resources it was locking
or requesting to lock.
Deadlock process list
The deadlock process list is a rich source of information about the transactions involved in the deadlock.
The graphic representation of the deadlock graph shows only a subset of information contained in the deadlock
graph XML. The ovals in the deadlock graph represent the process, and show information including the:
Server process ID, also known as the session ID or SPID.
Deadlock priority of the session. If two sessions have different deadlock priorities, the session with the
lower priority is chosen as the deadlock victim. In this example, both sessions have the same deadlock
priority.
The amount of transaction log used by the session in bytes. If both sessions have the same deadlock
priority, the deadlock monitor chooses the session that is less expensive to roll back as the deadlock
victim. The cost is determined by comparing the number of log bytes written to that point in each
transaction.
In our example deadlock, session_id 89 had used a lower amount of transaction log, and was selected as
the deadlock victim.
Additionally, you can view the input buffer for the last statement run in each session prior to the deadlock by
hovering the mouse over each process. The input buffer will appear in a tooltip.
Additional information is available for processes in the XML view of the deadlock graph, including:
Identifying information for the session, such as the client name, host name, and login name.
The query plan hash for the last statement run by each session prior to the deadlock. The query plan hash is
useful for retrieving more information about the query from Query Store.
In our example deadlock:
We can see that both sessions were run using the SSMS client under the chrisqpublic login.
The query plan hash of the last statement run prior to the deadlock by our deadlock victim is
0x02b0f58d7730f798 . We can see the text of this statement in the input buffer.
The query plan hash of the last statement run by the other session in our deadlock is also
0x02b0f58d7730f798 . We can see the text of this statement in the input buffer. In this case, both queries
have the same query plan hash because the queries are identical, except for a literal value used as an equality
predicate.
We'll use these values later in this article to find additional information in Query Store.
Limitations of the input buffer in the deadlock process list
There are some limitations to be aware of regarding input buffer information in the deadlock process list.
Query text may be truncated in the input buffer. The input buffer is limited to the first 4,000 characters of the
statement being executed.
Additionally, some statements involved in the deadlock may not be included in the deadlock graph. In our
example, Session A ran two update statements within a single transaction. Only the second update statement,
the update that caused the deadlock, is included in the deadlock graph. The first update statement run by
Session A played a part in the deadlock by blocking Session B . The input buffer, query_hash , and related
information for the first statement run by Session A is not included in the deadlock graph.
To identify the full Transact-SQL run in a multi-statement transaction involved in a deadlock, you will need to
either find the relevant information in the stored procedure or application code that ran the query, or run a trace
using Extended Events to capture full statements run by sessions involved in a deadlock while it occurs. If a
statement involved in the deadlock has been truncated and only partial Transact-SQL appears in the input buffer,
you can find the Transact-SQL for the statement in Query Store with the Execution Plan.
Deadlock resource list
The deadlock resource list shows which lock resources are owned and waited on by the processes in the
deadlock.
Resources are represented by rectangles in the visual representation of the deadlock:

NOTE
You may notice that database names are represented as uniqueidentifiers in deadlock graphs for databases in Azure SQL
Database. This is the physical_database_name for the database listed in the sys.databases and
sys.dm_user_db_resource_governance dynamic management views.
In this example deadlock:
The deadlock victim, which we have referred to as Session A :
Owns an exclusive (X) lock on a key on the PK_Product_ProductID index on the SalesLT.Product table.
Requests an update (U) lock on a key on the PK_ProductDescription_ProductDescriptionID index on the
SalesLT.ProductDescription table.
The other process, which we have referred to as Session B :
Owns an update (U) lock on a key on the PK_ProductDescription_ProductDescriptionID index on the
SalesLT.ProductDescription table.
Requests a shared (S) lock on a key on the PK_Product_ProductID index on the
SalesLT.Product table.

We can see the same information in the XML of the deadlock graph in the resource-list node.
Find query execution plans in Query Store
It is often useful to examine the query execution plans for statements involved in the deadlock. These execution
plans can often be found in Query Store using the query plan hash from the XML view of the deadlock graph's
process list.
This Transact-SQL query looks for query plans matching the query plan hash we found for our example
deadlock. Connect to the user database in Azure SQL Database to run the query.

DECLARE @query_plan_hash binary(8) = 0x02b0f58d7730f798

SELECT
qrsi.end_time as interval_end_time,
qs.query_id,
qp.plan_id,
qt.query_sql_text,
TRY_CAST(qp.query_plan as XML) as query_plan,
qrs.count_executions
FROM sys.query_store_query as qs
JOIN sys.query_store_query_text as qt on qs.query_text_id=qt.query_text_id
JOIN sys.query_store_plan as qp on qs.query_id=qp.query_id
JOIN sys.query_store_runtime_stats qrs on qp.plan_id = qrs.plan_id
JOIN sys.query_store_runtime_stats_interval qrsi on
qrs.runtime_stats_interval_id=qrsi.runtime_stats_interval_id
WHERE query_plan_hash = @query_plan_hash
ORDER BY interval_end_time, query_id;
GO

You may not be able to obtain a query execution plan from Query Store, depending on your Query Store
CLEANUP_POLICY or QUERY_CAPTURE_MODE settings. In this case, you can often get needed information by
displaying the estimated execution plan for the query.
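If you are unsure how Query Store is configured for the database, a quick way to check (a sketch you can run against the user database) is to query sys.database_query_store_options:

SELECT actual_state_desc,
    query_capture_mode_desc,
    size_based_cleanup_mode_desc,
    stale_query_threshold_days,
    current_storage_size_mb,
    max_storage_size_mb
FROM sys.database_query_store_options;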
Look for patterns that increase blocking
When examining query execution plans involved in deadlocks, look out for patterns that may contribute to
blocking and deadlocks.
Table or index scans . When queries modifying data are run under read committed snapshot isolation (RCSI), the
selection of rows to update is done using a blocking scan where an update (U) lock is taken on the data row as
data values are read. If
the data row does not meet the update criteria, the update lock is released and the next row is locked and
scanned.
Tuning indexes to help modification queries find rows more efficiently reduces the number of update
locks issued. This reduces the chances of blocking and deadlocks.
Indexed views referencing more than one table . When you modify a table that is referenced in an
indexed view, the database engine must also maintain the indexed view. This requires taking out more
locks and can lead to increased blocking and deadlocks. Indexed views may also cause update operations
to internally execute under the read committed isolation level.
Modifications to columns referenced in foreign key constraints . When you modify columns in a
table that are referenced in a FOREIGN KEY constraint, the database engine must look for related rows in
the referencing table. Row versions cannot be used for these reads. In cases where cascading updates or
deletes are enabled, the isolation level may be escalated to serializable for the duration of the statement
to protect against phantom inserts.
Lock hints . Look for table hints that specify isolation levels requiring more locks. These hints include
HOLDLOCK (which is equivalent to serializable), SERIALIZABLE , READCOMMITTEDLOCK (which disables RCSI),
and REPEATABLEREAD . Additionally, hints such as PAGLOCK , TABLOCK , UPDLOCK , and XLOCK can increase the
risks of blocking and deadlocks.
If these hints are in place, research why the hints were implemented. These hints may prevent race
conditions and ensure data validity. It may be possible to leave these hints in place and prevent future
deadlocks using an alternate method in the Prevent a deadlock from reoccurring section of this article if
necessary.

NOTE
Learn more about behavior when modifying data using row versioning in the Transaction locking and row
versioning guide.

When examining the full code for a transaction, either in an execution plan or in application query code, look for
additional problematic patterns:
User interaction in transactions . User interaction inside an explicit multi-statement transaction
significantly increases the duration of transactions. This makes it more likely for these transactions to
overlap and for blocking and deadlocks to occur.
Similarly, holding an open transaction and querying an unrelated database or system mid-transaction
significantly increases the chances of blocking and deadlocks.
Transactions accessing objects in different orders . Deadlocks are less likely to occur when
concurrent explicit multi-statement transactions follow the same patterns and access objects in the same
order.

Prevent a deadlock from reoccurring


There are multiple techniques available to prevent deadlocks from reoccurring, including index tuning, forcing
plans with Query Store, and modifying Transact-SQL queries.
Review the table's clustered index . Most tables benefit from clustered indexes, but often, tables are
implemented as heaps by accident.
One way to check for a clustered index is by using the sp_helpindex system stored procedure. For
example, we can view a summary of the indexes on the SalesLT.Product table by executing the following
statement:

exec sp_helpindex 'SalesLT.Product';
GO
Review the index_description column. A table can have only one clustered index. If a clustered index has
been implemented for the table, the index_description will contain the word 'clustered'.
If no clustered index is present, the table is a heap. In this case, review if the table was intentionally
created as a heap to solve a specific performance problem. Consider implementing a clustered index
based on the clustered index design guidelines.
In some cases, creating or tuning a clustered index may reduce or eliminate blocking in deadlocks. In
other cases, you may need to employ an additional technique such as the others in this list.
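As an alternative to running sp_helpindex table by table, the following query is a sketch that lists every table in the current database stored as a heap (index type 0 in sys.indexes), which can help you spot tables missing a clustered index:

SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
    t.name AS table_name
FROM sys.tables AS t
JOIN sys.indexes AS i ON t.object_id = i.object_id
WHERE i.type = 0;  -- type 0 = heap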
Create or modify nonclustered indexes. Tuning nonclustered indexes can help your modification
queries find the data to update more quickly, which reduces the number of update locks required.
In our example deadlock, the query execution plan found in Query Store contains a clustered index scan
against the PK_Product_ProductID index. The deadlock graph indicates that a shared (S) lock wait on this
index is a component in the deadlock.

This index scan is being performed because our update query needs to modify an indexed view named
vProductAndDescription . As mentioned in the Look for patterns that increase blocking section of this
article, indexed views referencing multiple tables may increase blocking and the likelihood of deadlocks.
If we create the following nonclustered index in the AdventureWorksLT database that "covers" the columns
from SalesLT.Product referenced by the indexed view, this helps the query find rows much more
efficiently:

CREATE INDEX ix_Product_ProductID_Name_ProductModelID on SalesLT.Product (ProductID, Name, ProductModelID);
GO

After creating this index, the deadlock no longer reoccurs.


When deadlocks involve modifications to columns referenced in foreign key constraints, ensure that
indexes on the referencing table of the FOREIGN KEY support efficiently finding related rows.
While indexes can dramatically improve query performance in some cases, indexes also have overhead
and management costs. Review general index design guidelines to help assess the benefit of indexes
before creating indexes, especially wide indexes and indexes on large tables.
Assess the value of indexed views . Another option to prevent our example deadlock from
reoccurring is to drop the SalesLT.vProductAndDescription indexed view. If that indexed view is not being
used, this will reduce the overhead of maintaining the indexed view over time.
Use Snapshot isolation . In some cases, setting the transaction isolation level to snapshot for one or
more of the transactions involved in a deadlock may prevent blocking and deadlocks from reoccurring.
This technique is most likely to be successful when used on SELECT statements when read committed
snapshot is disabled in a database. When read committed snapshot is disabled, SELECT queries using the
read committed isolation level require shared (S) locks. Using snapshot isolation on these transactions
removes the need for shared locks, which can prevent blocking and deadlocks.
In databases where read committed snapshot isolation has been enabled, SELECT queries do not require
shared (S) locks, so deadlocks are more likely to occur between transactions that are modifying data. In
cases where deadlocks occur between multiple transactions modifying data, snapshot isolation may
result in an update conflict instead of a deadlock. This similarly requires one of the transactions to retry
its operation.
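As a sketch of this technique, the following transaction runs a SELECT under snapshot isolation, which is enabled by default for databases in Azure SQL Database. The ProductID value is illustrative.

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

BEGIN TRANSACTION;
    -- Reads row versions instead of taking shared (S) locks.
    SELECT Name, ListPrice
    FROM SalesLT.Product
    WHERE ProductID = 680;
COMMIT TRANSACTION;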
Force a plan with Query Store . You may find that one of the queries in the deadlock has multiple
execution plans, and the deadlock only occurs when a specific plan is used. You can prevent the deadlock
from reoccurring by forcing a plan in Query Store.
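A minimal sketch of forcing a plan: take the query_id and plan_id returned by the Query Store query earlier in this article (the values below are placeholders) and pass them to sp_query_store_force_plan.

-- Replace the placeholder values with the query_id and plan_id from Query Store.
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;

To remove the forced plan later, run sp_query_store_unforce_plan with the same values.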
Modify the Transact-SQL . You may need to modify Transact-SQL to prevent the deadlock from
reoccurring. Modifying Transact-SQL should be done carefully and changes should be rigorously tested to
ensure that data is correct when modifications run concurrently. When rewriting Transact-SQL, consider:
Ordering statements in transactions so that they access objects in the same order.
Breaking apart transactions into smaller transactions when possible.
Using query hints, if necessary, to optimize performance. You can apply hints without changing
application code using Query Store.
Find more ways to minimize deadlocks in the Transaction locking and row versioning guide.

NOTE
In some cases, you may wish to adjust the deadlock priority of one or more sessions involved in a deadlock if it is
important for one of the sessions to complete successfully without retrying, or when one of the queries involved in the
deadlock is not critical and should be always chosen as the victim. While this does not prevent the deadlock from
reoccurring, it may reduce the impact of future deadlocks.
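For example, a session whose work can safely be retried could declare itself the preferred deadlock victim before starting its transaction:

-- Run in the session that should be chosen as the victim if a deadlock occurs.
SET DEADLOCK_PRIORITY LOW;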

Drop an XEvents session


You may wish to leave an XEvents session collecting deadlock information running on critical databases for long
periods. Be aware that if you use an event file target, this may result in large files if multiple deadlocks occur. You
may delete blob files from Azure Storage for an active trace, except for the file that is currently being written to.
When you wish to remove an XEvents session, the Transact-SQL to drop the session is the same, regardless of the
target type selected.
To remove an XEvents session, run the following Transact-SQL. Before running the code, replace the name of the
session with the appropriate value.
ALTER EVENT SESSION [deadlocks] ON DATABASE
STATE = STOP;
GO

DROP EVENT SESSION [deadlocks] ON DATABASE;
GO

Use Azure Storage Explorer


[Azure Storage Explorer](/azure/vs-azure-tools-storage-manage-with-storage-explorer) is a standalone
application that simplifies working with event file targets stored in blobs in Azure Storage. You can use Storage
Explorer to:
Create a blob container to hold XEvent session data.
Get the shared access signature (SAS) for a blob container.
As mentioned in Collect deadlock graphs in Azure SQL Database with Extended Events, the read, write,
and list permissions are required.
Remove any leading ? character from the Query string to use the value as the secret when creating
a database scoped credential.
View and download extended event files from a blob container.
Download Azure Storage Explorer.

Next steps
Learn more about performance in Azure SQL Database:
Understand and resolve Azure SQL Database blocking problems
Transaction Locking and Row Versioning Guide
SET TRANSACTION ISOLATION LEVEL
Azure SQL Database: Improving Performance Tuning with Automatic Tuning
Deliver consistent performance with Azure SQL
Retry logic for transient errors.
Configure the max degree of parallelism (MAXDOP)
in Azure SQL Database
7/12/2022 • 8 minutes to read

APPLIES TO: Azure SQL Database


This article describes the max degree of parallelism (MAXDOP) configuration setting in Azure SQL
Database.

NOTE
This content is focused on Azure SQL Database. Azure SQL Database is based on the latest stable version of the
Microsoft SQL Server database engine, so much of the content is similar though troubleshooting and configuration
options differ. For more on MAXDOP in SQL Server, see Configure the max degree of parallelism Server Configuration
Option.

Overview
MAXDOP controls intra-query parallelism in the database engine. Higher MAXDOP values generally result in
more parallel threads per query, and faster query execution.
In Azure SQL Database, the default MAXDOP setting for each new single database and elastic pool database is 8.
This default prevents unnecessary resource utilization, while still allowing the database engine to execute
queries faster using parallel threads. It is not typically necessary to further configure MAXDOP in Azure SQL
Database workloads, though it may provide benefits as an advanced performance tuning exercise.

NOTE
In September 2020, based on years of telemetry in the Azure SQL Database service, MAXDOP 8 was made the default for
new databases, as the optimal value for the widest variety of customer workloads. This default helped prevent
performance problems due to excessive parallelism. Prior to that, the default setting for new databases was MAXDOP 0.
MAXDOP was not automatically changed for existing databases created prior to September 2020.

In general, if the database engine chooses to execute a query using parallelism, execution time is faster. However,
excess parallelism can consume additional processor resources without improving query performance. At scale,
excess parallelism can negatively affect query performance for all queries executing on the same database
engine instance. Traditionally, setting an upper bound for parallelism has been a common performance tuning
exercise in SQL Server workloads.
The following table describes database engine behavior when executing queries with different MAXDOP values:

MAXDOP    BEHAVIOR

=1        The database engine uses a single serial thread to execute queries. Parallel threads are not used.

>1        The database engine sets the number of additional schedulers to be used by parallel threads to the
          MAXDOP value, or the total number of logical processors, whichever is smaller.

=0        The database engine sets the number of additional schedulers to be used by parallel threads to the
          total number of logical processors or 64, whichever is smaller.

NOTE
Each query executes with at least one scheduler, and one worker thread on that scheduler.
A query executing with parallelism uses additional schedulers, and additional parallel threads. Because multiple parallel
threads may execute on the same scheduler, the total number of threads used to execute a query may be higher than
the specified MAXDOP value or the total number of logical processors. For more information, see Scheduling parallel tasks.

Considerations
In Azure SQL Database, you can change the default MAXDOP value:
At the query level, using the MAXDOP query hint.
At the database level, using the MAXDOP database scoped configuration.
Long-standing SQL Server MAXDOP considerations and recommendations are applicable to Azure SQL
Database.
Index operations that create or rebuild an index, or that drop a clustered index, can be resource intensive.
You can override the database MAXDOP value for index operations by specifying the MAXDOP index
option in the CREATE INDEX or ALTER INDEX statement. The MAXDOP value is applied to the statement at
execution time and is not stored in the index metadata. For more information, see Configure Parallel
Index Operations.
In addition to queries and index operations, the database scoped configuration option for MAXDOP also
controls parallelism of other statements that may use parallel execution, such as DBCC CHECKTABLE,
DBCC CHECKDB, and DBCC CHECKFILEGROUP.

Recommendations
Changing MAXDOP for the database can have major impact on query performance and resource utilization,
both positive and negative. However, there is no single MAXDOP value that is optimal for all workloads. The
recommendations for setting MAXDOP are nuanced, and depend on many factors.
Some peak concurrent workloads may operate better with a different MAXDOP than others. A properly
configured MAXDOP should reduce the risk of performance and availability incidents, and in some cases may
reduce costs by being able to avoid unnecessary resource utilization, and thus scale down to a lower service
objective.
Excessive parallelism
A higher MAXDOP often reduces duration for CPU-intensive queries. However, excessive parallelism can worsen
other concurrent workload performance by starving other queries of CPU and worker thread resources. In
extreme cases, excessive parallelism can consume all database or elastic pool resources, causing query timeouts,
errors, and application outages.
TIP
We recommend that customers avoid setting MAXDOP to 0 even if it does not appear to cause problems currently.

Excessive parallelism becomes most problematic when there are more concurrent requests than can be
supported by the CPU and worker thread resources provided by the service objective. Avoid MAXDOP 0 to
reduce the risk of potential future problems due to excessive parallelism if a database is scaled up, or if future
hardware configurations in Azure SQL Database provide more cores for the same database service objective.
Modifying MAXDOP
If you determine that a MAXDOP setting different from the default is optimal for your Azure SQL Database
workload, you can use the ALTER DATABASE SCOPED CONFIGURATION T-SQL statement. For examples, see the
Examples using Transact-SQL section below. To change MAXDOP to a non-default value for each new database
you create, add this step to your database deployment process.
If non-default MAXDOP benefits only a small subset of queries in the workload, you can override MAXDOP at
the query level by adding the OPTION (MAXDOP) hint. For examples, see the Examples using Transact-SQL
section below.
Thoroughly test your MAXDOP configuration changes with load testing involving realistic concurrent query
loads.
MAXDOP for the primary and secondary replicas can be configured independently if different MAXDOP settings
are optimal for your read-write and read-only workloads. This applies to Azure SQL Database read scale-out,
geo-replication, and Hyperscale secondary replicas. By default, all secondary replicas inherit the MAXDOP
configuration of the primary replica.

Security
Permissions
The ALTER DATABASE SCOPED CONFIGURATION statement must be executed as the server admin, as a member of the
database role db_owner , or a user that has been granted the ALTER ANY DATABASE SCOPED CONFIGURATION
permission.
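For example, to allow a deployment account that is not a member of db_owner to change this setting, you could grant the permission directly (the user name is hypothetical):

GRANT ALTER ANY DATABASE SCOPED CONFIGURATION TO [deployment_user];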

Examples
These examples use the AdventureWorksLT sample database, which is installed when you choose the SAMPLE
option while creating a new single database in Azure SQL Database.
PowerShell
MAXDOP database scoped configuration
This example shows how to use the ALTER DATABASE SCOPED CONFIGURATION statement to set the MAXDOP
configuration to 8 . The setting takes effect immediately for new queries. The PowerShell cmdlet Invoke-SqlCmd
executes the T-SQL queries that set and then return the MAXDOP database scoped configuration.
$dbName = "sample"
$serverName = "<server name here>"
$serveradminLogin = "<login here>"
$serveradminPassword = "<password here>"
$desiredMAXDOP = 8

$params = @{
'database' = $dbName
'serverInstance' = $serverName
'username' = $serveradminLogin
'password' = $serveradminPassword
'outputSqlErrors' = $true
'query' = 'ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = ' + $desiredMAXDOP + ';
SELECT [value] FROM sys.database_scoped_configurations WHERE [name] = ''MAXDOP'';'
}
Invoke-SqlCmd @params

This example is for use with Azure SQL Databases with read scale-out replicas enabled, geo-replication, and
Azure SQL Database Hyperscale secondary replicas. As an example, the primary replica is set to a different
default MAXDOP than the secondary replica, anticipating that there may be differences between a read-write and a
read-only workload.

$dbName = "sample"
$serverName = "<server name here>"
$serveradminLogin = "<login here>"
$serveradminPassword = "<password here>"
$desiredMAXDOP_primary = 8
$desiredMAXDOP_secondary_readonly = 1

$params = @{
'database' = $dbName
'serverInstance' = $serverName
'username' = $serveradminLogin
'password' = $serveradminPassword
'outputSqlErrors' = $true
'query' = 'ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = ' + $desiredMAXDOP_primary + ';
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = ' + $desiredMAXDOP_secondary_readonly +
';
SELECT [value], value_for_secondary FROM sys.database_scoped_configurations WHERE [name] = ''MAXDOP'';'
}
Invoke-SqlCmd @params

Transact-SQL
You can use the Azure portal query editor, SQL Server Management Studio (SSMS), or Azure Data Studio to
execute T-SQL queries against your Azure SQL Database.
1. Open a new query window.
2. Connect to the database where you want to change MAXDOP. You cannot change database scoped
configurations in the master database.
3. Copy and paste the following example into the query window and select Execute .
MAXDOP database scoped configuration
This example shows how to determine the current database MAXDOP database scoped configuration using the
sys.database_scoped_configurations system catalog view.

SELECT [value] FROM sys.database_scoped_configurations WHERE [name] = 'MAXDOP';

This example shows how to use ALTER DATABASE SCOPED CONFIGURATION statement to set the MAXDOP
configuration to 8 . The setting takes effect immediately.

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;

This example is for use with Azure SQL Databases with read scale-out replicas enabled, geo-replication, and
Hyperscale secondary replicas. As an example, the primary replica is set to a different MAXDOP than the
secondary replica, anticipating that there may be differences between the read-write and read-only workloads.
All statements are executed on the primary replica. The value_for_secondary column of the
sys.database_scoped_configurations contains settings for the secondary replica.

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 1;
SELECT [value], value_for_secondary FROM sys.database_scoped_configurations WHERE [name] = 'MAXDOP';

MAXDOP query hint


This example shows how to execute a query using the query hint to force the max degree of parallelism to 2 .

SELECT ProductID, OrderQty, SUM(LineTotal) AS Total
FROM SalesLT.SalesOrderDetail
WHERE UnitPrice < 5
GROUP BY ProductID, OrderQty
ORDER BY ProductID, OrderQty
OPTION (MAXDOP 2);
GO

MAXDOP index option


This example shows how to rebuild an index using the index option to force the max degree of parallelism to
12 .

ALTER INDEX ALL ON SalesLT.SalesOrderDetail
REBUILD WITH
( MAXDOP = 12
, SORT_IN_TEMPDB = ON
, ONLINE = ON);

See also
ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL)
sys.database_scoped_configurations (Transact-SQL)
Configure Parallel Index Operations
Query Hints (Transact-SQL)
Set Index Options
Understand and resolve Azure SQL Database blocking problems

Next steps
Monitor and Tune for Performance
SQL Server database migration to Azure SQL
Database
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Database


In this article, you learn about the primary methods for migrating a SQL Server 2005 or later database to Azure
SQL Database. For information on migrating to Azure SQL Managed Instance, see Migrate a SQL Server
instance to Azure SQL Managed Instance. For guidance on choosing migration options and tools to migrate to
Azure SQL, see Migrate to Azure SQL.

Migrate to a single database or a pooled database


There are two primary methods for migrating a SQL Server 2005 or later database to Azure SQL Database. The
first method is simpler but requires some, possibly substantial, downtime during the migration. The second
method is more complex, but substantially eliminates downtime during the migration.
In both cases, you need to ensure that the source database is compatible with Azure SQL Database using the
Data Migration Assistant (DMA). SQL Database is approaching feature parity with SQL Server, other than issues
related to server-level and cross-database operations. Databases and applications that rely on partially
supported or unsupported functions need some re-engineering to fix these incompatibilities before the SQL
Server database can be migrated.

NOTE
To migrate a non-SQL Server database, including Microsoft Access, Sybase, MySQL, Oracle, and DB2, to Azure SQL
Database, see SQL Server Migration Assistant.

Method 1: Migration with downtime during the migration


Use this method to migrate to a single or a pooled database if you can afford some downtime or you're
performing a test migration of a production database for later migration. For a tutorial, see Migrate a SQL
Server database.
The following list contains the general workflow for a SQL Server database migration of a single or a pooled
database using this method. For migration to SQL Managed Instance, see SQL Server to Azure SQL Managed
Instance Guide.
1. Assess the database for compatibility by using the latest version of the Data Migration Assistant (DMA).
2. Prepare any necessary fixes as Transact-SQL scripts.
3. Make a transactionally consistent copy of the source database being migrated or halt new transactions from
occurring in the source database while migration is occurring. Methods to accomplish this latter option
include disabling client connectivity or creating a database snapshot. After migration, you may be able to use
transactional replication to update the migrated databases with changes that occur after the cutoff point for
the migration. See Migrate using Transactional Migration.
4. Deploy the Transact-SQL scripts to apply the fixes to the database copy.
5. Migrate the database copy to a new database in Azure SQL Database by using the Data Migration Assistant.

NOTE
Rather than using DMA, you can also use a BACPAC file. See Import a BACPAC file to a new database in Azure SQL
Database.

Optimizing data transfer performance during migration


The following list contains recommendations for best performance during the import process.
Choose the highest service tier and compute size that your budget allows to maximize the transfer
performance. You can scale down after the migration completes to save money.
Minimize the distance between your BACPAC file and the destination data center.
Disable autostatistics during migration
Partition tables and indexes
Drop indexed views, and recreate them once finished
Remove rarely queried historical data to another database and migrate this historical data to a separate
database in Azure SQL Database. You can then query this historical data using elastic queries.
Optimize performance after the migration completes
Update statistics with full scan after the migration is completed.
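The following Transact-SQL is a sketch of the statistics-related recommendations above, run against the target database in Azure SQL Database; the table name is illustrative.

-- Before or during the import: reduce statistics maintenance overhead.
ALTER DATABASE CURRENT SET AUTO_CREATE_STATISTICS OFF;
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS OFF;

-- After the migration completes: re-enable automatic statistics and refresh
-- statistics with a full scan, table by table.
ALTER DATABASE CURRENT SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS ON;
UPDATE STATISTICS SalesLT.Product WITH FULLSCAN;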

Method 2: Use Transactional Replication


When you can't afford to remove your SQL Server database from production while the migration is occurring,
you can use SQL Server transactional replication as your migration solution. To use this method, the source
database must meet the requirements for transactional replication and be compatible for Azure SQL Database.
For information about SQL replication with Always On, see Configure Replication for Always On Availability
Groups (SQL Server).
To use this solution, you configure your database in Azure SQL Database as a subscriber to the SQL Server
instance that you wish to migrate. The transactional replication distributor synchronizes data from the database
to be synchronized (the publisher) while new transactions continue to occur.
With transactional replication, all changes to your data or schema show up in your database in Azure SQL
Database. Once the synchronization is complete and you're ready to migrate, change the connection string of
your applications to point them to your database. Once transactional replication drains any changes left on your
source database and all your applications point to Azure DB, you can uninstall transactional replication. Your
database in Azure SQL Database is now your production system.

TIP
You can also use transactional replication to migrate a subset of your source database. The publication that you replicate
to Azure SQL Database can be limited to a subset of the tables in the database being replicated. For each table being
replicated, you can limit the data to a subset of the rows and/or a subset of the columns.

Migration to SQL Database using Transaction Replication workflow


IMPORTANT
Use the latest version of SQL Server Management Studio to remain synchronized with updates to Azure and SQL
Database. Older versions of SQL Server Management Studio cannot set up SQL Database as a subscriber. Update SQL
Server Management Studio.

1. Set up Distribution
Using SQL Server Management Studio (SSMS)
Using Transact-SQL
2. Create Publication
Using SQL Server Management Studio (SSMS)
Using Transact-SQL
3. Create Subscription
Using SQL Server Management Studio (SSMS)
Using Transact-SQL
Some tips and differences for migrating to SQL Database
Use a local distributor
Doing so causes a performance impact on the server.
If the performance impact is unacceptable, you can use another server but it adds complexity in
management and administration.
When selecting a snapshot folder, make sure the folder you select is large enough to hold a BCP of every
table you want to replicate.
Snapshot creation locks the associated tables until it's complete, so schedule your snapshot appropriately.
Only push subscriptions are supported in Azure SQL Database. You can only add subscribers from the source
database.

Resolving database migration compatibility issues


There are a wide variety of compatibility issues that you might encounter, depending both on the version of SQL
Server in the source database and the complexity of the database you're migrating. Older versions of SQL
Server have more compatibility issues. Use the following resources, in addition to a targeted Internet search
using your search engine of choice:
SQL Server database features not supported in Azure SQL Database
Discontinued Database Engine Functionality in SQL Server 2016
Discontinued Database Engine Functionality in SQL Server 2014
Discontinued Database Engine Functionality in SQL Server 2012
Discontinued Database Engine Functionality in SQL Server 2008 R2
Discontinued Database Engine Functionality in SQL Server 2005
In addition to searching the Internet and using these resources, use the Microsoft Q&A question page for Azure
SQL Database or StackOverflow.

IMPORTANT
Azure SQL Managed Instance enables you to migrate an existing SQL Server instance and its databases with minimal to
no compatibility issues. See What is a managed instance.
Next steps
Use the script on the Azure SQL EMEA Engineers blog to Monitor tempdb usage during migration.
Use the script on the Azure SQL EMEA Engineers blog to Monitor the transaction log space of your database
while migration is occurring.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For information about working with UTC time after migration, see Modifying the default time zone for your
local time zone.
For information about changing the default language of a database after migration, see How to change the
default language of Azure SQL Database.
New DBA in the cloud – Managing Azure SQL
Database after migration
7/12/2022 • 28 minutes to read

APPLIES TO: Azure SQL Database


Moving from the traditional self-managed, self-controlled environment to a PaaS environment can seem a bit
overwhelming at first. As an app developer or a DBA, you would want to know the core capabilities of the
platform that would help you keep your application available, performant, secure and resilient - always. This
article aims to do exactly that. The article succinctly organizes resources and gives you some guidance on how
to best use the key capabilities of Azure SQL Database with single and pooled databases to manage and keep
your application running efficiently and achieve optimal results in the cloud. Typical audience for this article
would be those who:
Are evaluating migration of their application(s) to Azure SQL Database – Modernizing your application(s).
Are in the process of migrating their application(s) – On-going migration scenario.
Have recently completed the migration to Azure SQL Database – New DBA in the cloud.
This article discusses some of the core characteristics of Azure SQL Database as a platform that you can readily
leverage when working with single databases and pooled databases in elastic pools. They are the following:
Monitor databases using the Azure portal
Business continuity and disaster recovery (BCDR)
Security and compliance
Intelligent database monitoring and maintenance
Data movement

Monitor databases using the Azure portal


In the Azure portal, you can monitor an individual database's utilization by selecting your database and clicking
the Monitoring chart. This brings up a Metric window that you can change by clicking the Edit chart button.
Add the following metrics:
CPU percentage
DTU percentage
Data IO percentage
Database size percentage
Once you've added these metrics, you can continue to view them in the Monitoring chart with more
information on the Metric window. All four metrics show the average utilization percentage relative to the DTU
of your database. See the DTU-based purchasing model and vCore-based purchasing model articles for more
information about service tiers.
You can also configure alerts on the performance metrics. Click the Add alert button in the Metric window.
Follow the wizard to configure your alert. You have the option to alert if the metrics exceed a certain threshold
or if the metric falls below a certain threshold.
For example, if you expect the workload on your database to grow, you can choose to configure an email alert
whenever your database reaches 80% on any of the performance metrics. You can use this as an early warning
to figure out when you might have to switch to the next highest compute size.
The performance metrics can also help you determine if you are able to downgrade to a lower compute size.
Assume you are using a Standard S2 database and all performance metrics show that the database on average
does not use more than 10% at any given time. It is likely that the database will work well in Standard S1.
However, be aware of workloads that spike or fluctuate before making the decision to move to a lower compute
size.

Business continuity and disaster recovery (BCDR)


Business continuity and disaster recovery abilities enable you to continue your business, as usual, in case of a
disaster. The disaster could be a database level event (for example, someone mistakenly drops a crucial table) or
a data-center level event (regional catastrophe, for example a tsunami).
How do I create and manage backups on SQL Database
You don’t create backups on Azure SQL Database and that is because you don’t have to. SQL Database
automatically backs up databases for you, so you no longer must worry about scheduling, taking and managing
backups. The platform takes a full backup every week, differential backup every few hours and a log backup
every 5 minutes to ensure the disaster recovery is efficient, and the data loss minimal. The first full backup
happens as soon as you create a database. These backups are available to you for a certain period called the
“Retention Period”, which varies according to the service tier you choose. SQL Database provides you the ability to
restore to any point in time within this retention period using Point in Time Recovery (PITR).

SERVICE TIER    RETENTION PERIOD IN DAYS

Basic           7

Standard        35

Premium         35

In addition, the Long-Term Retention (LTR) feature allows you to hold onto your backup files for a much longer
period specifically, for up to 10 years, and restore data from these backups at any point within that period.
Furthermore, the database backups are kept in geo-replicated storage to ensure resilience from regional
catastrophe. You can also restore these backups in any Azure region at any point of time within the retention
period. See Business continuity overview.
How do I ensure business continuity in the event of a datacenter-level disaster or regional catastrophe
Because your database backups are stored in geo-replicated storage, in case of a regional disaster
you can restore the backup to another Azure region. This is called geo-restore. The RPO (Recovery Point
Objective) for this is generally < 1 hour and the ERT (Estimated Recovery Time – few minutes to hours).
For mission-critical databases, Azure SQL Database offers, active geo-replication. What this essentially does is
that it creates a geo-replicated secondary copy of your original database in another region. For example, if your
database is initially hosted in the Azure West US region and you want regional disaster resilience, you’d create an
active geo-replica of the database from West US in, say, East US. When calamity strikes in West US, you can fail
over to the East US region. Configuring them in an auto-failover group is even better because this ensures that
the database automatically fails over to the secondary in East US in case of a disaster. The RPO for this is < 5
seconds and the ERT < 30 seconds.
If an auto-failover group is not configured, then your application needs to actively monitor for a disaster and
initiate a failover to the secondary. You can create up to 4 such active geo-replicas in different Azure regions. It
gets even better. You can also access these secondary active geo-replicas for read-only access. This comes in
very handy to reduce latency for a geo-distributed application scenario.
How does my disaster recovery plan change from on-premises to SQL Database
In summary, SQL Server setup requires you to actively manage your Availability by using features such as
Failover Clustering, Database Mirroring, Transaction Replication, or Log Shipping and maintain and manage
backups to ensure Business Continuity. With SQL Database, the platform manages these for you, so you can
focus on developing and optimizing your database application and not worry about disaster management as
much. You can have backup and disaster recovery plans configured and working with just a few clicks on the
Azure portal (or a few commands using the PowerShell APIs).
To learn more about Disaster recovery, see: Azure SQL Database Disaster Recovery 101

Security and compliance


SQL Database takes Security and Privacy very seriously. Security within SQL Database is available at the
database level and at the platform level and is best understood when categorized into several layers. At each
layer you get to control and provide optimal security for your application. The layers are:
Identity & authentication (SQL authentication and Azure Active Directory [Azure AD] authentication).
Monitoring activity (Auditing and threat detection).
Protecting actual data (Transparent Data Encryption [TDE] and Always Encrypted [AE]).
Controlling Access to sensitive and privileged data (Row Level security and Dynamic Data Masking).
Microsoft Defender for Cloud offers centralized security management across workloads running in Azure, on-
premises, and in other clouds. You can view whether essential SQL Database protection such as Auditing and
Transparent Data Encryption [TDE] are configured on all resources, and create policies based on your own
requirements.
What user authentication methods are offered in SQL Database
There are two authentication methods offered in SQL Database:
Azure Active Directory Authentication
SQL authentication
Traditional Windows authentication is not supported. Azure Active Directory (Azure AD) is a centralized identity
and access management service. With this you can very conveniently provide single sign-on (SSO) access to the
personnel in your organization. What this means is that the credentials are shared across Azure services for
simpler authentication.
Azure AD supports Azure AD Multi-Factor Authentication and with a few clicks Azure AD can be integrated with
Windows Server Active Directory. SQL Authentication works exactly like you’ve been using it in the past. You
provide a username/password and you can authenticate users to any database on a given server. This also
allows SQL Database and Azure Synapse Analytics to offer Multi-Factor Authentication and guest user accounts
within an Azure AD domain. If you already have an Active Directory on-premises, you can federate the directory
with Azure Active Directory to extend your directory to Azure.

IF YOU...                                          SQL DATABASE / AZURE SYNAPSE ANALYTICS

Prefer not to use Azure Active Directory           Use SQL authentication.
(Azure AD) in Azure

Used AD on SQL Server on-premises                  Federate AD with Azure AD, and use Azure AD
                                                   authentication. With this, you can use Single Sign-On.

Need to enforce Multi-Factor Authentication        Require Multi-Factor Authentication as a policy through
                                                   Microsoft Conditional Access, and use Azure AD Universal
                                                   authentication with Multi-Factor Authentication support.

Have guest accounts from Microsoft accounts        Use Azure AD Universal authentication in SQL
(live.com, outlook.com) or other domains           Database/Data Warehouse, which leverages Azure AD B2B
(gmail.com)                                        Collaboration.

Are logged in to Windows using your Azure AD       Use Azure AD integrated authentication.
credentials from a federated domain

Are logged in to Windows using credentials from    Use Azure AD integrated authentication.
a domain not federated with Azure

Have middle-tier services which need to connect    Use Azure AD integrated authentication.
to SQL Database or Azure Synapse Analytics

How do I limit or control connectivity access to my database


There are multiple techniques at your disposal that you could use to attain optimal connectivity organization for
your application.
Firewall Rules
VNet Service Endpoints
Reserved IPs
Firewall
A firewall prevents access to your server from an external entity by allowing only specific entities access to your
server. By default, all connections to databases inside the server are disallowed, except (optionally) connections
coming in from other Azure Services. With a firewall rule you can open access to your server only to entities (for
example, a developer machine) that you approve of, by allowing that computer’s IP address through the firewall.
It also allows you to specify a range of IPs that you would want to allow access to the server. For example,
developer machine IP addresses in your organization can be added at once by specifying a range in the Firewall
settings page.
You can create firewall rules at the server level or at the database level. Server level IP firewall rules can either be
created using the Azure portal or with SSMS. For learning more about how to set a server-level and database-
level firewall rule, see: Create IP firewall rules in SQL Database.
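For example, a server-level rule can also be created with Transact-SQL by running sp_set_firewall_rule in the master database of your logical server; the rule name and IP addresses below are placeholders.

-- Run in the master database of the logical server.
EXECUTE sp_set_firewall_rule
    @name = N'DevMachineRule',
    @start_ip_address = '203.0.113.5',
    @end_ip_address = '203.0.113.5';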
Service endpoints
By default, your database is configured to “Allow Azure services to access server” – which means any Virtual
Machine in Azure may attempt to connect to your database. These attempts still do have to get authenticated.
However, if you would not like your database to be accessible by any Azure IPs, you can disable “Allow Azure
services to access server”. Additionally, you can configure VNet Service Endpoints.
Service endpoints (SE) allow you to expose your critical Azure resources only to your own private virtual
network in Azure. By doing so, you essentially eliminate public access to your resources. The traffic between
your virtual network to Azure stays on the Azure backbone network. Without SE you get forced-tunneling
packet routing. Your virtual network forces the internet traffic to your organization and the Azure Service traffic
to go over the same route. With Service Endpoints, you can optimize this since the packets flow straight from
your virtual network to the service on Azure backbone network.

Reserved IPs
Another option is to provision reserved IPs for your VMs, and add those specific VM IP addresses in the server
firewall settings. By assigning reserved IPs, you save the trouble of having to update the firewall rules with
changing IP addresses.
What port do I connect to SQL Database on
Port 1433. SQL Database communicates over this port. To connect from within a corporate network, you have to
add an outbound rule in the firewall settings of your organization. As a guideline, avoid exposing port 1433
outside the Azure boundary.
How can I monitor and regulate activity on my server and database in SQL Database
SQL Database Auditing
With SQL Database, you can turn ON Auditing to track database events. SQL Database Auditing records
database events and writes them into an audit log file in your Azure Storage Account. Auditing is especially
useful if you intend to gain insight into potential security and policy violations, maintain regulatory compliance
etc. It allows you to define and configure certain categories of events that you think need auditing and based on
that you can get preconfigured reports and a dashboard to get an overview of events occurring on your
database. You can apply these auditing policies either at the database level or at the server level. A guide on how
to turn on auditing for your server/database, see: Enable SQL Database Auditing.
Threat detection
With threat detection, you get the ability to act upon security or policy violations discovered by Auditing very
easily. You don’t need to be a security expert to address potential threats or violations in your system. Threat
detection also has some built-in capabilities like SQL Injection detection. SQL Injection is an attempt to alter or
compromise the data and a quite common way of attacking a database application in general. Threat detection
runs multiple sets of algorithms which detect potential vulnerabilities and SQL injection attacks, as well as
anomalous database access patterns (such as access from an unusual location or by an unfamiliar principal).
Security officers or other designated administrators receive an email notification if a threat is detected on the
database. Each notification provides details of the suspicious activity and recommendations on how to further
investigate and mitigate the threat. To learn how to turn on Threat detection, see: Enable threat detection.
How do I protect my data in general on SQL Database
Encryption provides a strong mechanism to protect and secure your sensitive data from intruders. Your
encrypted data is of no use to the intruder without the decryption key. Thus, it adds an extra layer of protection
on top of the existing layers of security built in SQL Database. There are two aspects to protecting your data in
SQL Database:
Your data that is at-rest in the data and log files
Your data that is in-flight
In SQL Database, by default, your data at rest in the data and log files on the storage subsystem is completely
and always encrypted via Transparent Data Encryption [TDE]. Your backups are also encrypted. With TDE there
are no changes required on your application side that is accessing this data. The encryption and decryption
happen transparently; hence the name. For protecting your sensitive data in-flight and at rest, SQL Database
provides a feature called Always Encrypted (AE). AE is a form of client-side encryption which encrypts sensitive
columns in your database (so they are in ciphertext to database administrators and unauthorized users). The
server receives the encrypted data to begin with. The key for Always Encrypted is also stored on the client side,
so only authorized clients can decrypt the sensitive columns. The server and data administrators cannot see the
sensitive data since the encryption keys are stored on the client. AE encrypts sensitive columns in the table end
to end, from unauthorized clients to the physical disk. AE supports equality comparisons today, so DBAs can
continue to query encrypted columns as part of their SQL commands. Always Encrypted can be used with a
variety of key store options, such as Azure Key Vault, Windows certificate store, and local hardware security
modules.

CHARACTERISTICS                            ALWAYS ENCRYPTED         TRANSPARENT DATA ENCRYPTION

Encryption span                            End-to-end               At-rest data

Server can access sensitive data           No                       Yes, since encryption is for the data at rest

Allowed T-SQL operations                   Equality comparison      All T-SQL surface area is available

App changes required to use the feature    Minimal                  Very minimal

Encryption granularity                     Column level             Database level

How can I limit access to sensitive data in my database


Every application has a certain bit of sensitive data in the database that needs to be protected from being visible
to everyone. Certain personnel within the organization need to view this data, however others shouldn’t be able
to view this data. One example is employee wages. A manager would need access to the wage information for
their direct reports however, the individual team members shouldn’t have access to the wage information of
their peers. Another scenario is data developers who might be interacting with sensitive data during
development stages or testing, for example, SSNs of customers. This information again doesn’t need to be
exposed to the developer. In such cases, your sensitive data either needs to be masked or not be exposed at all.
SQL Database offers two such approaches to prevent unauthorized users from being able to view sensitive data:
Dynamic Data Masking is a data masking feature that enables you to limit sensitive data exposure by masking it
to non-privileged users on the application layer. You define a masking rule that can create a masking pattern (for
example, to only show last four digits of a national ID SSN: XXX-XX-0000 and mark most of it as Xs) and identify
which users are to be excluded from the masking rule. The masking happens on-the-fly and there are various
masking functions available for various data categories. Dynamic data masking allows you to automatically
detect sensitive data in your database and apply masking to it.
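As a sketch, the following statements mask a sensitive column and then exempt a privileged user from the mask; the table, column, and user names are hypothetical.

-- Show only the last four digits of the value to non-privileged users.
ALTER TABLE dbo.Employee
    ALTER COLUMN NationalIDNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XX-",4)');

-- Allow a specific user to see the unmasked data.
GRANT UNMASK TO [PayrollManager];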
Row Level security enables you to control access at the row level. Meaning, certain rows in a database table
based on the user executing the query (group membership or execution context) are hidden. The access
restriction is done on the database tier instead of in an application tier, to simplify your app logic. You start by
creating a filter predicate that filters out the rows that are not to be exposed, and then a security policy that defines
who has access to these rows. Finally, the end user runs their query and, depending on the user’s privilege, they
either view those restricted rows or are unable to see them at all.
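The following is a minimal sketch of that pattern; the schema, table, column, and user names are hypothetical.

-- Hypothetical schema to hold the predicate function.
CREATE SCHEMA Security;
GO

-- Filter predicate: a user sees only rows assigned to them, while a manager sees all rows.
CREATE FUNCTION Security.fn_orders_predicate (@SalesRep AS nvarchar(128))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
    WHERE @SalesRep = USER_NAME() OR USER_NAME() = N'SalesManager';
GO

-- Security policy: apply the predicate to the hypothetical Orders table.
CREATE SECURITY POLICY Security.OrdersFilter
    ADD FILTER PREDICATE Security.fn_orders_predicate(SalesRep)
    ON dbo.Orders
    WITH (STATE = ON);
GO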
How do I manage encryption keys in the cloud
There are key management options for both Always Encrypted (client-side encryption) and Transparent Data
Encryption (encryption at rest). It’s recommended that you regularly rotate encryption keys. The rotation
frequency should align with both your internal organization regulations and compliance requirements.
Transparent Data Encryption (TDE)
There is a two-key hierarchy in TDE – the data in each user database is encrypted by a symmetric AES-256
database-unique database encryption key (DEK), which in turn is encrypted by a server-unique asymmetric RSA
2048 master key. The master key can be managed either:
Automatically by the platform - SQL Database.
Or by you using Azure Key Vault as the key store.
By default, the master key for Transparent Data Encryption is managed by the SQL Database service for
convenience. If your organization would like control over the master key, there is an option to use Azure Key
Vault as the key store. By using Azure Key Vault, your
organization assumes control over key provisioning, rotation, and permission controls. Rotation or switching the
type of a TDE master key is fast, as it only re-encrypts the DEK. For organizations with separation of roles
between security and data management, a security admin could provision the key material for the TDE master
key in Azure Key Vault and provide an Azure Key Vault key identifier to the database administrator to use for
encryption at rest on a server. The Key Vault is designed such that Microsoft does not see or extract any
encryption keys. You also get a centralized management of keys for your organization.
Always Encrypted
There is also a two-key hierarchy in Always Encrypted - a column of sensitive data is encrypted by an AES 256-
column encryption key (CEK), which in turn is encrypted by a column master key (CMK). The client drivers
provided for Always Encrypted have no limitations on the length of CMKs. The encrypted value of the CEK is
stored on the database, and the CMK is stored in a trusted key store, such as Windows Certificate Store, Azure
Key Vault, or a hardware security module.
Both the CEK and CMK can be rotated.
CEK rotation is a size of data operation and can be time-intensive depending on the size of the tables
containing the encrypted columns. Hence it is prudent to plan CEK rotations accordingly.
CMK rotation, however, does not interfere with database performance, and can be done with separated roles.
The following diagram shows the key store options for the column master keys in Always Encrypted

How can I optimize and secure the traffic between my organization and SQL Database
The network traffic between your organization and SQL Database would generally get routed over the public
network. However, if you choose to optimize this path and make it more secure, you can look into Azure
ExpressRoute. ExpressRoute essentially lets you extend your corporate network into the Azure platform over a
private connection. By doing so, you do not go over the public Internet. You also get higher security, reliability,
and routing optimization that translates to lower network latencies and much faster speeds than you would
normally experience going over the public internet. If you are planning on transferring a significant chunk of
data between your organization and Azure, using ExpressRoute can yield cost benefits. You can choose from
three different connectivity models for the connection from your organization to Azure:
Cloud Exchange Co-location
Any-to-any
Point-to-Point
ExpressRoute also allows you to burst up to 2x the bandwidth limit you purchase for no additional charge. It is
also possible to configure cross region connectivity using ExpressRoute. To see a list of ExpressRoute
connectivity providers, see: ExpressRoute Partners and Peering Locations. The following articles describe Express
Route in more detail:
Introduction on Express Route
Prerequisites
Workflows
Is SQL Database compliant with any regulatory requirements, and how does that help with my own
organization's compliance
SQL Database is compliant with a range of regulatory compliancies. To view the latest set of compliancies that
have been met by SQL Database, visit the Microsoft Trust Center and drill down on the compliancies that are
important to your organization to see if SQL Database is included under the compliant Azure services. It is
important to note that although SQL Database may be certified as a compliant service, it aids in the compliance
of your organization’s service but does not automatically guarantee it.
Intelligent database monitoring and maintenance after migration
Once you’ve migrated your database to SQL Database, you are going to want to monitor your database (for
example, check how the resource utilization is like or DBCC checks) and perform regular maintenance (for
example, rebuild or reorganize indexes, statistics etc.). Fortunately, SQL Database is Intelligent in the sense that it
uses the historical trends and recorded metrics and statistics to proactively help you monitor and maintain your
database, so that your application runs optimally always. In some cases, Azure SQL Database can automatically
perform maintenance tasks depending on your configuration setup. There are three facets to monitoring your
database in SQL Database:
Performance monitoring and optimization.
Security optimization.
Cost optimization.
Performance monitoring and optimization
With Query Performance Insight, you can get tailored recommendations for your database workload so that your applications can keep running at an optimal level - always. You can also set it up so that these recommendations are applied automatically and you do not have to bother performing maintenance tasks. With SQL Database Advisor, you can automatically implement index recommendations based on your workload - this is called automatic tuning. The recommendations evolve as your application workload changes to provide you with the most relevant suggestions. You also get the option to manually review these recommendations and apply them at your discretion.
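As a rough illustration, the following T-SQL (run against the user database) shows how you might inspect the automatic tuning configuration and enable automatic index creation; adjust the options to your workload.

-- Inspect the current automatic tuning configuration for this database.
SELECT name, desired_state_desc, actual_state_desc
FROM sys.database_automatic_tuning_options;

-- Example: let the service create recommended indexes automatically.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON);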
Security optimization
SQL Database provides actionable security recommendations to help you secure your data, and threat detection for identifying and investigating suspicious database activities that may pose a potential threat to the database.
Vulnerability assessment is a database scanning and reporting service that allows you to monitor the security
state of your databases at scale and identify security risks and drift from a security baseline defined by you. After
every scan, a customized list of actionable steps and remediation scripts is provided, as well as an assessment
report that can be used to help meet compliance requirements.
With Microsoft Defender for Cloud, you identify the security recommendations across the board and apply them
with a single click.
Cost optimization
The Azure SQL platform analyzes the utilization history across the databases in a server to evaluate and recommend cost-optimization options for you. This analysis usually takes about two weeks to build up actionable recommendations. Elastic pool is one such option. The recommendation appears on the portal as a banner:
You can also view this analysis under the “Advisor” section:

How do I monitor the performance and resource utilization in SQL Database


In SQL Database you can leverage the intelligent insights of the platform to monitor the performance and tune
accordingly. You can monitor performance and resource utilization in SQL Database using the following
methods:
Azure portal
The Azure portal shows a database’s utilization by selecting the database and clicking the chart in the Overview
pane. You can modify the chart to show multiple metrics, including CPU percentage, DTU percentage, Data IO
percentage, Sessions percentage, and Database size percentage.
From this chart, you can also configure alerts by resource. These alerts allow you to respond to resource
conditions with an email, write to an HTTPS/HTTP endpoint or perform an action. For more information, see
Create alerts.
Dynamic management views
You can query the sys.dm_db_resource_stats dynamic management view to return resource consumption
statistics history from the last hour and the sys.resource_stats system catalog view to return history for the last
14 days.
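For example, the following queries are one way to pull that history; the database name is a placeholder.

-- Roughly the last hour of resource consumption for the current database (15-second granularity).
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent, avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

-- About 14 days of history (5-minute granularity); run against the master database.
SELECT start_time, end_time, avg_cpu_percent, avg_data_io_percent
FROM sys.resource_stats
WHERE database_name = 'MyDatabase' -- placeholder
ORDER BY start_time DESC;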
Query Performance Insight
Query Performance Insight allows you to see a history of the top resource-consuming queries and long-running
queries for a specific database. You can quickly identify TOP queries by resource utilization, duration, and
frequency of execution. You can track queries and detect regression. This feature requires Query Store to be
enabled and active for the database.
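As a quick check, the following T-SQL shows whether Query Store is active for the current database and turns it on if needed (it is on by default for new databases in Azure SQL Database).

-- Check the Query Store state for the current database.
SELECT actual_state_desc, desired_state_desc, readonly_reason
FROM sys.database_query_store_options;

-- Enable Query Store if it is not already active.
ALTER DATABASE CURRENT SET QUERY_STORE = ON;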
Azure SQL Analytics (Preview) in Azure Monitor logs
Azure Monitor logs allows you to collect and visualize key Azure SQL Database performance metrics, supporting up to 150,000 databases and 5,000 SQL elastic pools per workspace. You can use it to monitor and receive notifications. You can monitor SQL Database and elastic pool metrics across multiple Azure subscriptions and elastic pools, and the data can be used to identify issues at each layer of an application stack.
I am noticing performance issues: How does my SQL Database troubleshooting methodology differ from
SQL Server
A major portion of the troubleshooting techniques you would use for diagnosing query and database performance issues remains the same. After all, the same database engine powers the cloud. However, the Azure SQL Database platform has built-in intelligence. It can help you troubleshoot and diagnose performance issues even more easily. It can also perform some of these corrective actions on your behalf and, in some cases, proactively fix them - automatically.
Your approach to troubleshooting performance issues can benefit significantly from using intelligent features such as Query Performance Insight (QPI) and Database Advisor together, and that is where the methodology differs: you no longer need to do the manual work of grinding out the essential details that might help you troubleshoot the issue at hand. The platform does the hard work for you. One example of that is QPI. With QPI, you can drill all the way down to the query level, look at the historical trends, and figure out exactly when the query regressed. Database Advisor gives you recommendations on things that might help improve your overall performance, like adding missing indexes, dropping unused indexes, and parameterizing your queries.
With performance troubleshooting, it is important to identify whether it is just the application or the database
backing it, that’s impacting your application performance. Often the performance problem lies in the application
layer. It could be the architecture or the data access pattern. For example, consider you have a chatty application
that is sensitive to network latency. In this case, your application suffers because there would be many short
requests going back and forth ("chatty") between the application and the server and on a congested network,
these roundtrips add up fast. To improve performance in this case, you can use batch queries. Using batches helps tremendously because your requests are processed together, cutting down on round-trip latency and improving your application performance.
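As a minimal sketch of the idea (the table and column names are hypothetical), the batch below inserts several rows in one round trip instead of issuing a separate INSERT per row; client drivers offer richer batching options such as table-valued parameters.

-- One round trip for several rows, instead of one round trip per row.
INSERT INTO dbo.Orders (OrderId, Amount)
VALUES (1, 10.00),
       (2, 12.50),
       (3, 7.25);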
Additionally, if you notice a degradation in the overall performance of your database, you can monitor the
sys.dm_db_resource_stats and sys.resource_stats dynamic management views in order to understand CPU, IO,
and memory consumption. Your performance may be impacted because your database is starved of resources. It
could be that you may need to change the compute size and/or service tier based on the growing and shrinking
workload demands.
For a comprehensive set of recommendations for tuning performance issues, see: Tune your database.
How do I ensure I am using the appropriate service tier and compute size
SQL Database offers the Basic, Standard, and Premium service tiers. With each service tier, you get guaranteed, predictable performance tied to that tier. Depending on your workload, you may have bursts of activity
where your resource utilization might hit the ceiling of the current compute size that you are in. In such cases, it
is useful to first start by evaluating whether any tuning can help (for example, adding or altering an index etc.). If
you still encounter limit issues, consider moving to a higher service tier or compute size.

SERVICE TIER | COMMON USE CASE SCENARIOS
Basic | Applications with a handful of users and a database that doesn’t have high concurrency, scale, and performance requirements.
Standard | Applications with considerable concurrency, scale, and performance requirements coupled with low to medium IO demands.
Premium | Applications with lots of concurrent users, high CPU/memory, and high IO demands. High-concurrency, high-throughput, and latency-sensitive apps can leverage the Premium tier.

To make sure you’re on the right compute size, you can monitor your query and database resource consumption through one of the above-mentioned ways in “How do I monitor the performance and resource utilization in SQL Database”. Should you find that your queries/databases are consistently running hot on CPU/memory, consider scaling up to a higher compute size. Similarly, if you note that even during your peak hours you don’t seem to use the resources as much, consider scaling down from the current compute size.
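For reference, a quick way to check the current tier and to scale with T-SQL is sketched below; the database name and the 'S3' objective are placeholders.

-- Check the current service tier and compute size of this database.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS Edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS ServiceObjective;

-- Scale to a different compute size (placeholder values).
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'S3');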
If you have a SaaS app pattern or a database consolidation scenario, consider using an Elastic pool for cost
optimization. Elastic pool is a great way to achieve database consolidation and cost-optimization. To read more
about managing multiple databases using elastic pool, see: Manage pools and databases.
How often do I need to run database integrity checks for my database
SQL Database uses some smart techniques that allow it to handle certain classes of data corruption automatically and without any data loss. These techniques are built into the service and are used when the need arises. On a regular basis, database backups across the service are tested by restoring them and running DBCC CHECKDB on them. If there are issues, SQL Database proactively addresses them. Automatic page repair is used to fix pages that are corrupt or have data integrity issues. Database pages are always verified with the default CHECKSUM setting, which verifies the integrity of the page. SQL Database proactively monitors and reviews the data integrity of your database and, if issues arise, addresses them with the highest priority. In addition, you can optionally run your own integrity checks. For more information, see Data Integrity in SQL Database.
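If you do want to run your own check, a minimal example is shown below; omit WITH PHYSICAL_ONLY for a full logical and physical check.

-- Physical-only check keeps the run shorter and cheaper on a large database.
DBCC CHECKDB WITH PHYSICAL_ONLY;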
Data movement after migration
How do I export and import data as BACPAC files from SQL Database using the Azure portal
Export: You can export your database in Azure SQL Database as a BACPAC file from the Azure portal.

Import: You can also import data as a BACPAC file into your database in Azure SQL Database using the Azure portal.

How do I synchronize data between SQL Database and SQL Server


You have several ways to achieve this:
Data Sync – This feature helps you synchronize data bi-directionally between multiple SQL Server databases and SQL Database. To sync with SQL Server databases, you need to install and configure a sync agent on a local computer or a virtual machine and open outbound TCP port 1433.
Transactional Replication – With transactional replication, you can synchronize your data from a SQL Server database to Azure SQL Database, with the SQL Server instance as the publisher and the Azure SQL Database as the subscriber. For now, only this setup is supported. For more information on how to migrate your data from a SQL Server database to Azure SQL with minimal downtime, see Use Transactional Replication.

Next steps
Learn about SQL Database.
Import or export an Azure SQL Database without
allowing Azure services to access the server
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


This article shows you how to import or export an Azure SQL Database when Allow Azure Services is set to OFF
on the server. The workflow uses an Azure virtual machine to run SqlPackage to perform the import or export
operation.

Sign in to the Azure portal


Sign in to the Azure portal.

Create the Azure virtual machine


Create an Azure virtual machine by selecting the Deploy to Azure button.
This template allows you to deploy a simple Windows virtual machine using a few different options for the
Windows version, using the latest patched version. This will deploy an A2-size VM in the resource group location
and return the fully qualified domain name of the VM.

For more information, see Very simple deployment of a Windows VM.

Connect to the virtual machine


The following steps show you how to connect to your virtual machine using a remote desktop connection.
1. After deployment completes, go to the virtual machine resource.
2. Select Connect .
A Remote Desktop Protocol file (.rdp file) form appears with the public IP address and port number for
the virtual machine.

3. Select Download RDP File .

NOTE
You can also use SSH to connect to your VM.
4. Close the Connect to virtual machine form.
5. To connect to your VM, open the downloaded RDP file.
6. When prompted, select Connect . On a Mac, you need an RDP client such as this Remote Desktop Client
from the Mac App Store.
7. Enter the username and password you specified when creating the virtual machine, then choose OK .
8. You might receive a certificate warning during the sign-in process. Choose Yes or Continue to proceed
with the connection.

Install SqlPackage
Download and install the latest version of SqlPackage.
For additional information, see SqlPackage.exe.

Create a firewall rule to allow the VM access to the database


Add the virtual machine's public IP address to the server's firewall.
The following steps create a server-level IP firewall rule for your virtual machine's public IP address and enable connectivity from the virtual machine.
1. Select SQL databases from the left-hand menu and then select your database on the SQL databases
page. The overview page for your database opens, showing you the fully qualified server name (such as
servername.database.windows.net) and provides options for further configuration.
2. Copy this fully qualified server name to use when connecting to your server and its databases.

3. Select Set server firewall on the toolbar. The Firewall settings page for the server opens.
4. Choose Add client IP on the toolbar to add your virtual machine's public IP address to a new server-
level IP firewall rule. A server-level IP firewall rule can open port 1433 for a single IP address or a range
of IP addresses.
5. Select Save . A server-level IP firewall rule is created for your virtual machine's public IP address opening
port 1433 on the server.
6. Close the Firewall settings page.
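If you prefer T-SQL over the portal, a server-level rule can also be created from the master database; the rule name and IP address below are placeholders.

-- Run in the master database of the logical server.
EXECUTE sp_set_firewall_rule
    @name = N'ImportExportVmRule',
    @start_ip_address = '203.0.113.5',
    @end_ip_address = '203.0.113.5';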

Export a database using SqlPackage


To export an Azure SQL Database using the SqlPackage command-line utility, see Export parameters and
properties. The SqlPackage utility ships with the latest versions of SQL Server Management Studio and SQL
Server Data Tools, or you can download the latest version of SqlPackage.
We recommend the use of the SqlPackage utility for scale and performance in most production environments.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from SQL
Server to Azure SQL Database using BACPAC Files.
This example shows how to export a database using SqlPackage.exe with Active Directory Universal Authentication. Replace the placeholder values with values that are specific to your environment.

SqlPackage.exe /a:Export /tf:testExport.bacpac /scs:"Data Source=<servername>.database.windows.net;Initial Catalog=MyDB;" /ua:True /tid:"apptest.onmicrosoft.com"

Import a database using SqlPackage


To import a SQL Server database using the SqlPackage command-line utility, see import parameters and
properties. SqlPackage ships with the latest versions of SQL Server Management Studio and SQL Server Data Tools. You can also
download the latest version of SqlPackage.
For scale and performance, we recommend using SqlPackage in most production environments rather than
using the Azure portal. For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see
migrating from SQL Server to Azure SQL Database using BACPAC Files.
The following SqlPackage command imports the AdventureWorks2017 database from local storage to an
Azure SQL Database. It creates a new database called myMigratedDatabase with a Premium service tier and
a P6 Service Objective. Change these values as appropriate for your environment.

sqlpackage.exe /a:import /tcs:"Data Source=<serverName>.database.windows.net;Initial Catalog=myMigratedDatabase;User Id=<userId>;Password=<password>" /sf:AdventureWorks2017.bacpac /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P6

IMPORTANT
To connect to Azure SQL Database from behind a corporate firewall, the firewall must have port 1433 open.

This example shows how to import a database using SqlPackage with Active Directory Universal Authentication.

sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.database.windows.net /ua:True /tid:"apptest.onmicrosoft.com"

Performance considerations
Export speeds vary due to many factors (for example, data shape) so it's impossible to predict what speed
should be expected. SqlPackage may take considerable time, particularly for large databases.
To get the best performance you can try the following strategies:
1. Make sure no other workload is running on the database. Creating a copy before export may be the best solution to ensure no other workloads are running.
2. Increase database service level objective (SLO) to better handle the export workload (primarily read I/O). If
the database is currently GP_Gen5_4, perhaps a Business Critical tier would help with read workload.
3. Make sure there are clustered indexes particularly for large tables.
4. Virtual machines (VMs) should be in the same region as the database to help avoid network constraints.
5. VMs should have SSD with adequate size for generating temp artifacts before uploading to blob storage.
6. VMs should have adequate core and memory configuration for the specific database.

Store the imported or exported .BACPAC file


The .BACPAC file can be stored in Azure Blobs, or Azure Files.
To achieve the best performance, use Azure Files. SqlPackage operates with the filesystem so it can access Azure
Files directly.
To reduce cost, use Azure Blobs, which cost less than a premium Azure file share. However, it will require you to copy the .BACPAC file between the blob and the local file system before the import or export operation. As a result, the process will take longer.
To upload or download .BACPAC files, see Transfer data with AzCopy and Blob storage, and Transfer data with
AzCopy and file storage.
Depending on your environment, you might need to Configure Azure Storage firewalls and virtual networks.

Next steps
To learn how to connect to and query an imported SQL Database, see Quickstart: Azure SQL Database: Use
SQL Server Management Studio to connect and query data.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For a discussion of the entire SQL Server database migration process, including performance
recommendations, see SQL Server database migration to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Import or export an Azure SQL Database using
Private Link without allowing Azure services to
access the server
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database


Running Import or Export via Azure PowerShell or the Azure portal requires you to set Allow Access to Azure Services to ON; otherwise, the Import/Export operation fails with an error. Often, users want to perform Import or Export using a private endpoint without allowing access to all Azure services.

What is Import-Export Private Link?


Import-Export Private Link is a service-managed private endpoint created by Microsoft that is exclusively used by the Import-Export service, the database, and Azure Storage for all communications. The private endpoint has to be manually approved by the user in the Azure portal for both the server and the storage account.

To use Private Link with Import-Export, the user database and the Azure Storage blob container must be hosted in the same type of Azure cloud. For example, either both in Azure Commercial or both in Azure Government. Hosting across cloud types isn't supported.
This article explains how to import or export an Azure SQL Database using Private Link with Allow Azure Services set to OFF on the Azure SQL server.

NOTE
Import Export using Private Link for Azure SQL Database is currently in preview
IMPORTANT
Import or Export of a database from Azure SQL Managed Instance or from a database in the Hyperscale service tier using
PowerShell isn't currently supported.

Configure Import-Export Private Link


Import-Export Private Link can be configured via Azure portal, PowerShell or using REST API.
Configure Import-Export Private link using Azure portal
Create Import Private Link
1. Go to the server into which you would like to import the database. Select Import database from the toolbar on the Overview page.
2. On the Import Database page, select the Use Private Link option.
3. Enter the storage account, server credentials, and database details, and then select OK.
Create Export Private Link
1. Go to the database that you would like to export. Select Export database from the toolbar on the Overview page.
2. On the Export Database page, select the Use Private Link option.
3. Enter the storage account, server sign-in credentials, and database details, and then select OK.
Approve Private End Points
Approve Private Endpoints in Private Link Center

1. Go to Private Link Center


2. Navigate to Private endpoints section
3. Approve the private endpoints you created using Import/Export service
Approve Private Endpoint connection on Azure SQL Database

1. Go to the server that hosts the database.


2. Open the ‘Private endpoint connections’ page in security section on the left.
3. Select the private endpoint you want to approve.
4. Select Approve to approve the connection.

Approve Private Endpoint connection on Azure Storage

1. Go to the storage account that hosts the blob container that holds the BACPAC file.
2. Open the ‘Private endpoint connections’ page in security section on the left.
3. Select the Import-Export private endpoints you want to approve.
4. Select Approve to approve the connection.

After the private endpoints are approved in both the Azure SQL server and the storage account, Import or Export jobs will be kicked off. Until then, the jobs will be on hold.
You can check the status of Import or Export jobs on the Import-Export History page under the Data Management section of the Azure SQL server page.
Configure Import-Export Private Link using PowerShell
Import a Database using Private link in PowerShell
Use the New-AzSqlDatabaseImport cmdlet to submit an import database request to Azure. Depending on
database size, the import may take some time to complete. The DTU based provisioning model supports select
database max size values for each tier. When importing a database use one of these supported values.

$importRequest = New-AzSqlDatabaseImport -ResourceGroupName "<resourceGroupName>" `
    -ServerName "<serverName>" -DatabaseName "<databaseName>" `
    -DatabaseMaxSizeBytes "<databaseSizeInBytes>" -StorageKeyType "StorageAccessKey" `
    -StorageKey $(Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName `
        -StorageAccountName "<storageAccountName>").Value[0] `
    -StorageUri "https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
    -Edition "Standard" -ServiceObjectiveName "P6" -UseNetworkIsolation $true `
    -StorageAccountResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourcegroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>" `
    -SqlServerResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourceGroups/<resource_group_name>/providers/Microsoft.Sql/servers/<server_name>" `
    -AdministratorLogin "<userID>" `
    -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)

Export a Database using Private Link in PowerShell


Use the New-AzSqlDatabaseExport cmdlet to submit an export database request to the Azure SQL Database
service. Depending on the size of your database, the export operation may take some time to complete.

$exportRequest = New-AzSqlDatabaseExport -ResourceGroupName "<resourceGroupName>" `
    -ServerName "<serverName>" -DatabaseName "<databaseName>" `
    -DatabaseMaxSizeBytes "<databaseSizeInBytes>" -StorageKeyType "StorageAccessKey" `
    -StorageKey $(Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName `
        -StorageAccountName "<storageAccountName>").Value[0] `
    -StorageUri "https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
    -Edition "Standard" -ServiceObjectiveName "P6" -UseNetworkIsolation $true `
    -StorageAccountResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourcegroups/<resource_group_name>/providers/Microsoft.Storage/storageAccounts/<storage_account_name>" `
    -SqlServerResourceIdForPrivateLink "/subscriptions/<subscriptionId>/resourceGroups/<resource_group_name>/providers/Microsoft.Sql/servers/<server_name>" `
    -AdministratorLogin "<userID>" `
    -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)

Create Import-Export Private link using REST API


Existing APIs to perform Import and Export jobs have been enhanced to support Private Link. Refer to Import
Database API

Limitations
Import using Private Link does not support specifying a backup storage redundancy while creating a new database; the database is created with the default geo-redundant backup storage redundancy. As a workaround, first create an empty database with the desired backup storage redundancy using the Azure portal or PowerShell, and then import the BACPAC into this empty database.
Import and Export operations are not yet supported in the Azure SQL Database Hyperscale tier.
Import using the REST API with Private Link can only be done to an existing database, since the API uses database extensions. To work around this, create an empty database with the desired name and then call the Import REST API with Private Link.

Next steps
Import or Export Azure SQL Database without allowing Azure services to access the server
Import a database from a BACPAC file
Quickstart: Import a BACPAC file to a database in
Azure SQL Database or Azure SQL Managed
Instance
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


You can import a SQL Server database into Azure SQL Database or SQL Managed Instance using a BACPAC file.
You can import the data from a BACPAC file stored in Azure Blob storage (standard storage only) or from local
storage in an on-premises location. To maximize import speed by providing more and faster resources, scale
your database to a higher service tier and compute size during the import process. You can then scale down
after the import is successful.

NOTE
The imported database's compatibility level is based on the source database's compatibility level.

IMPORTANT
After importing your database, you can choose to operate the database at its current compatibility level (level 100 for the
AdventureWorks2008R2 database) or at a higher level. For more information on the implications and options for
operating a database at a specific compatibility level, see ALTER DATABASE Compatibility Level. See also ALTER DATABASE
SCOPED CONFIGURATION for information about additional database-level settings related to compatibility levels.

NOTE
Import and Export using Private Link is in preview.

Using Azure portal


Watch this video to see how to import from a BACPAC file in the Azure portal or continue reading below:

The Azure portal only supports creating a single database in Azure SQL Database and only from a BACPAC file
stored in Azure Blob storage.
To migrate a database into an Azure SQL Managed Instance from a BACPAC file, use SQL Server Management Studio or SqlPackage; using the Azure portal or Azure PowerShell is not currently supported.
NOTE
Machines processing import/export requests submitted through the Azure portal or PowerShell need to store the
BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required
varies significantly among databases with the same size and can require disk space up to 3 times the size of the database.
Machines running the import/export request only have 450GB local disk space. As a result, some requests may fail with
the error There is not enough space on the disk . In this case, the workaround is to run sqlpackage.exe on a machine
with enough local disk space. We encourage using SqlPackage to import/export databases larger than 150GB to avoid
this issue.

1. To import from a BACPAC file into a new single database using the Azure portal, open the appropriate
server page and then, on the toolbar, select Import database.

2. Select the storage account and the container for the BACPAC file and then select the BACPAC file from
which to import.
3. Specify the new database size (usually the same as origin) and provide the destination SQL Server
credentials. For a list of possible values for a new database in Azure SQL Database, see Create Database.
4. Click OK .
5. To monitor an import's progress, open the database's server page, and, under Settings, select Import/Export history. When successful, the import has a Completed status.
6. To verify the database is live on the server, select SQL databases and verify the new database is Online .

Using SqlPackage
To import a SQL Server database using the SqlPackage command-line utility, see import parameters and
properties. SQL Server Management Studio and SQL Server Data Tools for Visual Studio include SqlPackage.
You can also download the latest SqlPackage from the Microsoft download center.
For scale and performance, we recommend using SqlPackage in most production environments rather than
using the Azure portal. For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see
migrating from SQL Server to Azure SQL Database using BACPAC Files.
The DTU based provisioning model supports select database max size values for each tier. When importing a
database use one of these supported values.
The following SqlPackage command imports the AdventureWorks2008R2 database from local storage to a
logical SQL server named mynewserver20170403. It creates a new database called myMigratedDatabase
with a Premium service tier and a P6 Service Objective. Change these values as appropriate for your
environment.

sqlpackage.exe /a:import /tcs:"Data Source=<serverName>.database.windows.net;Initial Catalog=<migratedDatabase>;User Id=<userId>;Password=<password>" /sf:AdventureWorks2008R2.bacpac /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P6

IMPORTANT
To connect to Azure SQL Database from behind a corporate firewall, the firewall must have port 1433 open. To connect to
SQL Managed Instance, you must have a point-to-site connection or an express route connection.

This example shows how to import a database using SqlPackage with Active Directory Universal Authentication.
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.database.windows.net
/ua:True /tid:"apptest.onmicrosoft.com"

Using PowerShell
NOTE
A SQL Managed Instance does not currently support migrating a database into an instance database from a BACPAC file
using Azure PowerShell. To import into a SQL Managed Instance, use SQL Server Management Studio or SQLPackage.

NOTE
The machines processing import/export requests submitted through the portal or PowerShell need to store the BACPAC file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required varies significantly among databases of the same size and can be up to 3 times the database size. Machines running the import/export request only have 450 GB of local disk space. As a result, some requests may fail with a "There is not enough space on the disk" error. In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting databases larger than 150 GB, use SqlPackage to avoid this issue.

PowerShell
Azure CLI

IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported, but all future development is for the Az.Sql
module. The AzureRM module will continue to receive bug fixes until at least December 2020. The arguments for the
commands in the Az module and in the AzureRm modules are substantially identical. For more about their compatibility,
see Introducing the new Azure PowerShell Az module.

Use the New-AzSqlDatabaseImport cmdlet to submit an import database request to Azure. Depending on
database size, the import may take some time to complete. The DTU based provisioning model supports select
database max size values for each tier. When importing a database use one of these supported values.

$importRequest = New-AzSqlDatabaseImport -ResourceGroupName "<resourceGroupName>" `
    -ServerName "<serverName>" -DatabaseName "<databaseName>" `
    -DatabaseMaxSizeBytes "<databaseSizeInBytes>" -StorageKeyType "StorageAccessKey" `
    -StorageKey $(Get-AzStorageAccountKey `
        -ResourceGroupName "<resourceGroupName>" -StorageAccountName "<storageAccountName>").Value[0] `
    -StorageUri "https://myStorageAccount.blob.core.windows.net/importsample/sample.bacpac" `
    -Edition "Standard" -ServiceObjectiveName "P6" `
    -AdministratorLogin "<userId>" `
    -AdministratorLoginPassword $(ConvertTo-SecureString -String "<password>" -AsPlainText -Force)

You can use the Get-AzSqlDatabaseImportExportStatus cmdlet to check the import's progress. Running the
cmdlet immediately after the request usually returns Status: InProgress . The import is complete when you see
Status: Succeeded .
$importStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $importRequest.OperationStatusLink

[Console]::Write("Importing")
while ($importStatus.Status -eq "InProgress") {
    $importStatus = Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $importRequest.OperationStatusLink
    [Console]::Write(".")
    Start-Sleep -s 10
}

[Console]::WriteLine("")
$importStatus

TIP
For another script example, see Import a database from a BACPAC file.

Cancel the import request


Use the Database Operations - Cancel API or the PowerShell Stop-AzSqlDatabaseActivity command. Here is an example PowerShell command:

Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId

Limitations
Importing to a database in elastic pool isn't supported. You can import data into a single database and then
move the database to an elastic pool.
Import Export Service does not work when Allow access to Azure services is set to OFF. However, you can work around the problem by manually running sqlpackage.exe from an Azure VM or performing the export directly in your code by using the DacFX API.
Import does not support specifying a backup storage redundancy while creating a new database; the database is created with the default geo-redundant backup storage redundancy. To work around this, first create an empty database with the desired backup storage redundancy using the Azure portal or PowerShell, and then import the BACPAC into this empty database.
Storage behind a firewall is currently not supported.

Import using wizards


You can also use these wizards.
Import Data-tier Application Wizard in SQL Server Management Studio.
SQL Server Import and Export Wizard.

Next steps
To learn how to connect to and query a database in Azure SQL Database, see Quickstart: Azure SQL
Database: Use SQL Server Management Studio to connect to and query data.
For a SQL Server Customer Advisory Team blog about migrating using BACPAC files, see Migrating from
SQL Server to Azure SQL Database using BACPAC Files.
For a discussion of the entire SQL Server database migration process, including performance
recommendations, see SQL Server database migration to Azure SQL Database.
To learn how to manage and share storage keys and shared access signatures securely, see Azure Storage
Security Guide.
Copy a transactionally consistent copy of a
database in Azure SQL Database
7/12/2022 • 12 minutes to read

APPLIES TO: Azure SQL Database


Azure SQL Database provides several methods for creating a copy of an existing database on either the same
server or a different server. You can copy a database by using Azure portal, PowerShell, Azure CLI, or T-SQL.

Overview
A database copy is a transactionally consistent snapshot of the source database as of a point in time after the
copy request is initiated. You can select the same server or a different server for the copy. You can also choose
to keep the backup redundancy and compute size of the source database, or use a different backup storage
redundancy and/or compute size within the same service tier. After the copy is complete, it becomes a fully
functional, independent database. The logins, users, and permissions in the copied database are managed
independently from the source database. The copy is created using the geo-replication technology. Once replica
seeding is complete, the geo-replication link is automatically terminated. All the requirements for using geo-
replication apply to the database copy operation. See Active geo-replication overview for details.

Database Copy for Azure SQL Hyperscale


For Azure SQL Hyperscale, the target database determines whether the copy will be a fast copy or a size-of-data copy.
Fast copy: When the copy is done in the same region as the source, the copy is created from snapshots of blobs. This is a fast operation regardless of the database size.
Size-of-data copy: When the target database is in a different region than the source, or if the database backup storage redundancy (Local, Zonal, Geo) of the target differs from that of the source database, the copy operation will be a size-of-data operation. Copy time will not be directly proportional to size, because page server blobs are copied in parallel.

Logins in the database copy


When you copy a database to the same server, the same logins can be used on both databases. The security
principal you use to copy the database becomes the database owner on the new database.
When you copy a database to a different server, the security principal that initiated the copy operation on the
target server becomes the owner of the new database.
Regardless of the target server, all database users, their permissions, and their security identifiers (SIDs) are
copied to the database copy. Using contained database users for data access ensures that the copied database
has the same user credentials, so that after the copy is complete you can immediately access it with the same
credentials.
If you use server level logins for data access and copy the database to a different server, the login-based access
might not work. This can happen because the logins do not exist on the target server, or because their
passwords and security identifiers (SIDs) are different. To learn about managing logins when you copy a
database to a different server, see How to manage Azure SQL Database security after disaster recovery. After the
copy operation to a different server succeeds, and before other users are remapped, only the login associated
with the database owner, or the server administrator can log in to the copied database. To resolve logins and
establish data access after the copying operation is complete, see Resolve logins.

Copy using the Azure portal


To copy a database by using the Azure portal, open the page for your database, and then click Copy .

Copy using PowerShell or the Azure CLI


To copy a database, use the following examples.
PowerShell
Azure CLI

For PowerShell, use the New-AzSqlDatabaseCopy cmdlet.

IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by Azure SQL Database, but all future
development is for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December
2020. The arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For
more about their compatibility, see Introducing the new Azure PowerShell Az module.

New-AzSqlDatabaseCopy -ResourceGroupName "<resourceGroup>" -ServerName $sourceserver -DatabaseName "<databaseName>" `
    -CopyResourceGroupName "myResourceGroup" -CopyServerName $targetserver -CopyDatabaseName "CopyOfMySampleDatabase"

The database copy is an asynchronous operation but the target database is created immediately after the
request is accepted. If you need to cancel the copy operation while still in progress, drop the target database
using the Remove-AzSqlDatabase cmdlet.
For a complete sample PowerShell script, see Copy a database to a new server.

Copy using Transact-SQL


Log in to the master database with the server administrator login or the login that created the database you
want to copy. For database copy to succeed, logins that are not the server administrator must be members of
the dbmanager role. For more information about logins and connecting to the server, see Manage logins.
Start copying the source database with the CREATE DATABASE ... AS COPY OF statement. The T-SQL statement
continues running until the database copy operation is complete.

NOTE
Terminating the T-SQL statement does not terminate the database copy operation. To terminate the operation, drop the
target database.
Database copy using T-SQL is not supported when connecting to the destination server over a private endpoint. If a
private endpoint is configured but public network access is allowed, database copy is supported when connected to the
destination server from a public IP address. Once the copy operation completes, public access can be denied.

IMPORTANT
Selecting backup storage redundancy when using T-SQL CREATE DATABASE ... AS COPY OF command is not supported
yet.

Copy to the same server


Log in to the master database with the server administrator login or the login that created the database you
want to copy. For database copying to succeed, logins that are not the server administrator must be members of
the dbmanager role.
This command copies Database1 to a new database named Database2 on the same server. Depending on the
size of your database, the copying operation might take some time to complete.

-- Execute on the master database to start copying


CREATE DATABASE Database2 AS COPY OF Database1;

Copy to an elastic pool


Log in to the master database with the server administrator login or the login that created the database you
want to copy. For database copying to succeed, logins that are not the server administrator must be members of
the dbmanager role.
This command copies Database1 to a new database named Database2 in an elastic pool named pool1.
Depending on the size of your database, the copying operation might take some time to complete.
Database1 can be a single or pooled database. Copying between different tier pools is supported, but some
cross-tier copies will not succeed. For example, you can copy a single or elastic standard db into a General
Purpose pool, but you can't copy a standard elastic db into a premium pool.

-- Execute on the master database to start copying


CREATE DATABASE Database2
AS COPY OF Database1
(SERVICE_OBJECTIVE = ELASTIC_POOL( name = pool1 ));

Copy to a different server


Log in to the master database of the target server where the new database is to be created. Use a login that has
the same name and password as the database owner of the source database on the source server. The login on
the target server must also be a member of the dbmanager role, or be the server administrator login.
This command copies Database1 on server1 to a new database named Database2 on server2. Depending on the
size of your database, the copying operation might take some time to complete.
-- Execute on the master database of the target server (server2) to start copying from Server1 to Server2
CREATE DATABASE Database2 AS COPY OF server1.Database1;

IMPORTANT
Both servers' firewalls must be configured to allow inbound connection from the IP of the client issuing the T-SQL CREATE
DATABASE ... AS COPY OF command. To determine the source IP address of current connection, execute
SELECT client_net_address FROM sys.dm_exec_connections WHERE session_id = @@SPID;

Similarly, the below command copies Database1 on server1 to a new database named Database2 within an
elastic pool called pool2, on server2.

-- Execute on the master database of the target server (server2) to start copying from Server1 to Server2
CREATE DATABASE Database2 AS COPY OF server1.Database1 (SERVICE_OBJECTIVE = ELASTIC_POOL( name = pool2 ) );

Copy to a different subscription


You can use the steps in the Copy a SQL Database to a different server section to copy your database to a server
in a different subscription using T-SQL. Make sure you use a login that has the same name and password as the
database owner of the source database. Additionally, the login must be a member of the dbmanager role or a
server administrator, on both source and target servers.

TIP
When copying databases in the same Azure Active Directory tenant, authorization on the source and destination servers
is simplified if you initiate the copy command using an AAD authentication login with sufficient access on both servers.
The minimum necessary level of access is membership in the dbmanager role in the master database on both servers.
For example, you can use an AAD login that is a member of an AAD group designated as the server administrator on both
servers.
--Step# 1
--Create login and user in the master database of the source server.

CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx'


GO
CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
GO
ALTER ROLE dbmanager ADD MEMBER loginname;
GO

--Step# 2
--Create the user in the source database and grant dbowner permission to the database.

CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];


GO
ALTER ROLE db_owner ADD MEMBER loginname;
GO

--Step# 3
--Capture the SID of the user "loginname" from master database

SELECT [sid] FROM sysusers WHERE [name] = 'loginname';

--Step# 4
--Connect to Destination server.
--Create login and user in the master database, same as of the source server.

CREATE LOGIN loginname WITH PASSWORD = 'xxxxxxxxx', SID = [SID of loginname login on source server];
GO
CREATE USER [loginname] FOR LOGIN [loginname] WITH DEFAULT_SCHEMA=[dbo];
GO
ALTER ROLE dbmanager ADD MEMBER loginname;
GO

--Step# 5
--Execute the copy of database script from the destination server using the credentials created

CREATE DATABASE new_database_name


AS COPY OF source_server_name.source_database_name;

NOTE
The Azure portal, PowerShell, and the Azure CLI do not support database copy to a different subscription.

TIP
Database copy using T-SQL supports copying a database from a subscription in a different Azure tenant. This is only
supported when using a SQL authentication login to log in to the target server. Creating a database copy on a logical
server in a different Azure tenant is not supported when Azure Active Directory auth is active (enabled) on either source
or target logical server.

Monitor the progress of the copying operation


Monitor the copying process by querying the sys.databases, sys.dm_database_copies, and
sys.dm_operation_status views. While the copying is in progress, the state_desc column of the sys.databases
view for the new database is set to COPYING .
If the copying fails, the state_desc column of the sys.databases view for the new database is set to
SUSPECT . Execute the DROP statement on the new database, and try again later.
If the copying succeeds, the state_desc column of the sys.databases view for the new database is set to
ONLINE . The copying is complete, and the new database is a regular database that can be changed
independent of the source database.
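For example, a minimal query against the target server while a copy is running might look like the following; the database name is a placeholder.

-- Run in the master database of the target server.
SELECT d.name, d.state_desc, c.start_date, c.percent_complete
FROM sys.databases AS d
LEFT JOIN sys.dm_database_copies AS c
    ON d.database_id = c.database_id
WHERE d.name = 'Database2';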

NOTE
If you decide to cancel the copying while it is in progress, execute the DROP DATABASE statement on the new database.

IMPORTANT
If you need to create a copy with a substantially smaller service objective than the source, the target database may not
have sufficient resources to complete the seeding process and it can cause the copy operation to fail. In this scenario use
a geo-restore request to create a copy in a different server and/or a different region. See Recover an Azure SQL Database
using database backups for more information.

Azure RBAC roles and permissions to manage database copy


To create a database copy, you will need to be in the following roles:
Subscription Owner, or
SQL Server Contributor role, or
Custom role on the source and target databases with the following permissions:
Microsoft.Sql/servers/databases/read
Microsoft.Sql/servers/databases/write
To cancel a database copy, you will need to be in the following roles:
Subscription Owner, or
SQL Server Contributor role, or
Custom role on the source and target databases with the following permissions:
Microsoft.Sql/servers/databases/read
Microsoft.Sql/servers/databases/write
To manage database copy using the Azure portal, you will also need the following permissions:
Microsoft.Resources/subscriptions/resources/read
Microsoft.Resources/subscriptions/resources/write
Microsoft.Resources/deployments/read
Microsoft.Resources/deployments/write
Microsoft.Resources/deployments/operationstatuses/read
If you want to see the operations under deployments in the resource group on the portal, operations across multiple resource providers including SQL operations, you will need these additional permissions:
Microsoft.Resources/subscriptions/resourcegroups/deployments/operations/read
Microsoft.Resources/subscriptions/resourcegroups/deployments/operationstatuses/read

Resolve logins
After the new database is online on the target server, use the ALTER USER statement to remap the users from
the new database to logins on the target server. To resolve orphaned users, see Troubleshoot Orphaned Users.
See also How to manage Azure SQL Database security after disaster recovery.
All users in the new database retain the permissions that they had in the source database. The user who initiated
the database copy becomes the database owner of the new database. After the copying succeeds and before
other users are remapped, only the database owner can log in to the new database.
To learn about managing users and logins when you copy a database to a different server, see How to manage
Azure SQL Database security after disaster recovery.
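For example, remapping a single orphaned user to the login of the same name on the target server looks like this (the user name is hypothetical):

-- Run in the copied database on the target server.
ALTER USER [AppUser] WITH LOGIN = [AppUser];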

Database copy errors


The following errors can be encountered while copying a database in Azure SQL Database. For more
information, see Copy an Azure SQL Database.

ERROR CODE | SEVERITY | DESCRIPTION
40635 | 16 | Client with IP address '%.*ls' is temporarily disabled.
40637 | 16 | Create database copy is currently disabled.
40561 | 16 | Database copy failed. Either the source or target database does not exist.
40562 | 16 | Database copy failed. The source database has been dropped.
40563 | 16 | Database copy failed. The target database has been dropped.
40564 | 16 | Database copy failed due to an internal error. Please drop target database and try again.
40565 | 16 | Database copy failed. No more than 1 concurrent database copy from the same source is allowed. Please drop target database and try again later.
40566 | 16 | Database copy failed due to an internal error. Please drop target database and try again.
40567 | 16 | Database copy failed due to an internal error. Please drop target database and try again.
40568 | 16 | Database copy failed. Source database has become unavailable. Please drop target database and try again.
40569 | 16 | Database copy failed. Target database has become unavailable. Please drop target database and try again.
40570 | 16 | Database copy failed due to an internal error. Please drop target database and try again later.
40571 | 16 | Database copy failed due to an internal error. Please drop target database and try again later.

Next steps
For information about logins, see Manage logins and How to manage Azure SQL Database security after
disaster recovery.
To export a database, see Export the database to a BACPAC.
Replication to Azure SQL Database
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


You can configure an Azure SQL Database as the push subscriber in a one-way transactional or snapshot
replication topology.

NOTE
This article describes the use of transactional replication in Azure SQL Database. It is unrelated to active geo-replication,
an Azure SQL Database feature that allows you to create complete readable replicas of individual databases.

Supported configurations
Azure SQL Database can only be the push subscriber of a SQL Server publisher and distributor.
The SQL Server instance acting as publisher and/or distributor can be an instance of SQL Server running on-
premises, an Azure SQL Managed Instance, or an instance of SQL Server running on an Azure virtual
machine in the cloud.
The distribution database and the replication agents cannot be placed on a database in Azure SQL Database.
Snapshot and one-way transactional replication are supported. Peer-to-peer transactional replication and
merge replication are not supported.
Versions
To successfully replicate to a database in Azure SQL Database, SQL Server publishers and distributors must be
using (at least) one of the following versions:
Publishing to any Azure SQL Database from a SQL Server database is supported by the following versions of
SQL Server:
SQL Server 2016 and greater
SQL Server 2014 RTM CU10 (12.0.4427.24) or SP1 CU3 (12.0.2556.4)
SQL Server 2012 SP2 CU8 (11.0.5634.1) or SP3 (11.0.6020.0)

NOTE
Attempting to configure replication using an unsupported version can result in error number MSSQL_REPL20084 (The
process could not connect to Subscriber.) and MSSQL_REPL40532 (Cannot open server <name> requested by the login.
The login failed.).

To use all the features of Azure SQL Database, you must be using the latest versions of SQL Server Management
Studio and SQL Server Data Tools.
Types of replication
There are different types of replication:

REPLICATION | AZURE SQL DATABASE | AZURE SQL MANAGED INSTANCE
Standard Transactional | Yes (only as subscriber) | Yes
Snapshot | Yes (only as subscriber) | Yes
Merge replication | No | No
Peer-to-peer | No | No
Bidirectional | No | Yes
Updatable subscriptions | No | No

Remarks
Only push subscriptions to Azure SQL Database are supported.
Replication can be configured by using SQL Server Management Studio or by executing Transact-SQL
statements on the publisher. You cannot configure replication by using the Azure portal.
Replication can only use SQL Server authentication logins to connect to Azure SQL Database.
Replicated tables must have a primary key.
You must have an existing Azure subscription.
The Azure SQL Database subscriber can be in any region.
A single publication on SQL Server can support both Azure SQL Database and SQL Server (on-premises and
SQL Server in an Azure virtual machine) subscribers.
Replication management, monitoring, and troubleshooting must be performed from SQL Server rather than
Azure SQL Database.
Only @subscriber_type = 0 is supported in sp_addsubscription for SQL Database.
Azure SQL Database does not support bi-directional, immediate, updatable, or peer-to-peer replication.

Replication Architecture

Scenarios
Typical Replication Scenario
1. Create a transactional replication publication on a SQL Server database.
2. On SQL Server, use the New Subscription Wizard or Transact-SQL statements to create a push subscription to Azure SQL Database.
3. With single and pooled databases in Azure SQL Database, the initial data set is a snapshot that is created by
the Snapshot Agent and distributed and applied by the Distribution Agent. With a SQL Managed Instance
publisher, you can also use a database backup to seed the Azure SQL Database subscriber.
Data migration scenario
1. Use transactional replication to replicate data from a SQL Server database to Azure SQL Database.
2. Redirect the client or middle-tier applications to update the database copy.
3. Stop updating the SQL Server version of the table and remove the publication.

Limitations
The following options are not supported for Azure SQL Database subscriptions:
Copy file groups association
Copy table partitioning schemes
Copy index partitioning schemes
Copy user defined statistics
Copy default bindings
Copy rule bindings
Copy fulltext indexes
Copy XML XSD
Copy XML indexes
Copy permissions
Copy spatial indexes
Copy filtered indexes
Copy data compression attribute
Copy sparse column attribute
Convert filestream to MAX data types
Convert hierarchyid to MAX data types
Convert spatial to MAX data types
Copy extended properties
Limitations to be determined
Copy collation
Execution in a serialized transaction of the SP

Examples
Create a publication and a push subscription. For more information, see:
Create a Publication
Create a Push Subscription by using the server name as the subscriber (for example
N'azuresqldbdns.database.windows.net' ) and the Azure SQL Database name as the destination
database (for example AdventureWorks ).
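As a rough sketch of creating the push subscription, run at the SQL Server publisher (all names are placeholders); note that only @subscriber_type = 0 is supported for SQL Database.

-- Executed at the publisher, in the publication database.
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'azuresqldbdns.database.windows.net',
    @destination_db = N'AdventureWorks',
    @subscription_type = N'Push',
    @subscriber_type = 0;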

See Also
Transactional replication
Create a Publication
Create a Push Subscription
Types of Replication
Monitoring (Replication)
Initialize a Subscription
Automate the replication of schema changes in
Azure SQL Data Sync
7/12/2022 • 8 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


SQL Data Sync lets users synchronize data between databases in Azure SQL Database and SQL Server instances
in one direction or in both directions. One of the current limitations of SQL Data Sync is a lack of support for the
replication of schema changes. Every time you change the table schema, you need to apply the changes
manually on all endpoints, including the hub and all members, and then update the sync schema.
This article introduces a solution to automatically replicate schema changes to all SQL Data Sync endpoints.
1. This solution uses a DDL trigger to track schema changes.
2. The trigger inserts the schema change commands in a tracking table.
3. This tracking table is synced to all endpoints using the Data Sync service.
4. DML triggers after insertion are used to apply the schema changes on the other endpoints.
This article uses ALTER TABLE as an example of a schema change, but this solution also works for other types of
schema changes.

IMPORTANT
We recommend that you read this article carefully, especially the sections about Troubleshooting and Other
considerations, before you start to implement automated schema change replication in your sync environment. We also
recommend that you read Sync data across multiple cloud and on-premises databases with SQL Data Sync. Some
database operations may break the solution described in this article. Additional domain knowledge of SQL Server and
Transact-SQL may be required to troubleshoot those issues.

Set up automated schema change replication


Create a table to track schema changes
Create a table to track schema changes in all databases in the sync group:
CREATE TABLE SchemaChanges (
ID bigint IDENTITY(1,1) PRIMARY KEY,
SqlStmt nvarchar(max),
[Description] nvarchar(max)
)

This table has an identity column to track the order of schema changes. You can add more fields to log more
information if needed.
Create a table to track the history of schema changes
On all endpoints, create a table to track the ID of the most recently applied schema change command.

CREATE TABLE SchemaChangeHistory (
    LastAppliedId bigint PRIMARY KEY
)
GO

INSERT INTO SchemaChangeHistory VALUES (0)

Create an ALTER TABLE DDL trigger in the database where schema changes are made
Create a DDL trigger for ALTER TABLE operations. You only need to create this trigger in the database where
schema changes are made. To avoid conflicts, only allow schema changes in one database in a sync group.

CREATE TRIGGER AlterTableDDLTrigger
ON DATABASE
FOR ALTER_TABLE
AS
-- You can add your own logic to filter ALTER TABLE commands instead of replicating all of them.
IF NOT (EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(512)') like 'DataSync')
    INSERT INTO SchemaChanges (SqlStmt, Description)
    VALUES (EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'), 'From DDL trigger')

The trigger inserts a record in the schema change tracking table for each ALTER TABLE command. This example
adds a filter to avoid replicating schema changes made under schema DataSync , because these are most likely
made by the Data Sync service. Add more filters if you only want to replicate certain types of schema changes.
You can also add more triggers to replicate other types of schema changes. For example, create
CREATE_PROCEDURE, ALTER_PROCEDURE and DROP_PROCEDURE triggers to replicate changes to stored
procedures.
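For example, a minimal sketch of such a trigger for stored procedure changes (the trigger name and description text are illustrative; it writes to the same SchemaChanges tracking table):

CREATE TRIGGER ProcedureDDLTrigger
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
AS
-- Record the full command text so it can be replayed on the other endpoints.
INSERT INTO SchemaChanges (SqlStmt, [Description])
VALUES (EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'), 'From procedure DDL trigger')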
Create a trigger on other endpoints to apply schema changes during insertion
This trigger executes the schema change command when it is synced to other endpoints. You need to create this
trigger on all the endpoints, except the one where schema changes are made (that is, in the database where the
DDL trigger AlterTableDDLTrigger is created in the previous step).
CREATE TRIGGER SchemaChangesTrigger
ON SchemaChanges
AFTER INSERT
AS
DECLARE @lastAppliedId bigint
DECLARE @id bigint
DECLARE @sqlStmt nvarchar(max)
SELECT TOP 1 @lastAppliedId=LastAppliedId FROM SchemaChangeHistory
SELECT TOP 1 @id = id, @SqlStmt = SqlStmt FROM SchemaChanges WHERE id > @lastAppliedId ORDER BY id
IF (@id = @lastAppliedId + 1)
BEGIN
EXEC sp_executesql @SqlStmt
UPDATE SchemaChangeHistory SET LastAppliedId = @id
WHILE (1 = 1)
BEGIN
SET @id = @id + 1
IF exists (SELECT id FROM SchemaChanges WHERE ID = @id)
BEGIN
SELECT @sqlStmt = SqlStmt FROM SchemaChanges WHERE ID = @id
EXEC sp_executesql @SqlStmt
UPDATE SchemaChangeHistory SET LastAppliedId = @id
END
ELSE
BREAK;
END
END

This trigger runs after the insertion and checks whether the current command should run next. The code logic
ensures that no schema change statement is skipped, and all changes are applied even if the insertion is out of
order.
Sync the schema change tracking table to all endpoints
You can sync the schema change tracking table to all endpoints using the existing sync group or a new sync
group. Make sure the changes in the tracking table can be synced to all endpoints, especially when you're using
one-direction sync.
Don't sync the schema change history table, since that table maintains different state on different endpoints.
Apply the schema changes in a sync group
Only schema changes made in the database where the DDL trigger is created are replicated. Schema changes
made in other databases are not replicated.
After the schema changes are replicated to all endpoints, you also need to take extra steps to update the sync
schema to start or stop syncing the new columns.
Add new columns
1. Make the schema change.
2. Avoid any data change where the new columns are involved until you've completed the step that creates
the trigger.
3. Wait until the schema changes are applied to all endpoints.
4. Refresh the database schema and add the new column to the sync schema.
5. Data in the new column is synced during next sync operation.
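For example, a hypothetical column addition that the DDL trigger captures and replicates (the table and column names are illustrative only):

ALTER TABLE dbo.Customers ADD PhoneNumber nvarchar(32) NULL;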
Remove columns
1. Remove the columns from the sync schema. Data Sync stops syncing data in these columns.
2. Make the schema change.
3. Refresh the database schema.
Update data types
1. Make the schema change.
2. Wait until the schema changes are applied to all endpoints.
3. Refresh the database schema.
4. If the new and old data types are not fully compatible - for example, if you change from int to bigint -
sync may fail before the steps that create the triggers are completed. Sync succeeds after a retry.
Rename columns or tables
Renaming columns or tables makes Data Sync stop working. Create a new table or column, backfill the data, and
then delete the old table or column instead of renaming.
Other types of schema changes
For other types of schema changes - for example, creating stored procedures or dropping an index - updating the sync schema is not required.

Troubleshoot automated schema change replication


The replication logic described in this article stops working in some situations - for example, if you make a schema change in an on-premises database that is not supported in Azure SQL Database. In that case, syncing the schema change tracking table fails. You need to fix this problem manually:
1. Disable the DDL trigger and avoid any further schema changes until the issue is fixed.
2. In the endpoint database where the issue is happening, disable the AFTER INSERT trigger on the endpoint
where the schema change can't be made. This action allows the schema change command to be synced.
3. Trigger sync to sync the schema change tracking table.
4. In the endpoint database where the issue is happening, query the schema change history table to get the
ID of last applied schema change command.
5. Query the schema change tracking table to list all the commands with an ID greater than the ID value you
retrieved in the previous step.
a. Ignore those commands that can't be executed in the endpoint database. You need to deal with the
schema inconsistency. Revert the original schema changes if the inconsistency impacts your application.
b. Manually apply those commands that should be applied.
6. Update the schema change history table and set the last applied ID to the correct value.
7. Double-check whether the schema is up-to-date.
8. Re-enable the AFTER INSERT trigger disabled in the second step.
9. Re-enable the DDL trigger disabled in the first step.
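As a rough sketch of the queries involved in steps 4 through 6 (the numeric values are placeholders; substitute the IDs you find in your own tables):

-- Step 4: find the ID of the last schema change command applied on this endpoint.
SELECT LastAppliedId FROM SchemaChangeHistory;

-- Step 5: list the commands that have not yet been applied (replace 15 with the value returned above).
SELECT ID, SqlStmt FROM SchemaChanges WHERE ID > 15 ORDER BY ID;

-- Step 6: after manually applying the commands that should run, record the last applied ID.
UPDATE SchemaChangeHistory SET LastAppliedId = 20;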
If you want to clean up the records in the schema change tracking table, use DELETE instead of TRUNCATE.
Never reseed the identity column in the schema change tracking table by using DBCC CHECKIDENT. If reseeding is required, create a new schema change tracking table and update the table name in the DDL trigger instead.

Other Considerations
Database users who configure the hub and member databases need to have enough permission to
execute the schema change commands.
You can add more filters in the DDL trigger to replicate schema changes only for selected tables or operations.
You can only make schema changes in the database where the DDL trigger is created.
If you are making a change in a SQL Server database, make sure the schema change is supported in
Azure SQL Database.
If schema changes are made in databases other than the database where the DDL trigger is created, the
changes are not replicated. To avoid this issue, you can create DDL triggers to block changes on other
endpoints.
If you need to change the schema of the schema change tracking table, disable the DDL trigger before
you make the change, and then manually apply the change to all endpoints. Updating the schema in an
AFTER INSERT trigger on the same table does not work.
Don't reseed the identity column by using DBCC CHECKIDENT.
Don't use TRUNCATE to clean up data in the schema change tracking table.

Next steps
For more info about SQL Data Sync, see:
Overview - Sync data across multiple cloud and on-premises databases with Azure SQL Data Sync
Set up Data Sync
In the portal - Tutorial: Set up SQL Data Sync to sync data between Azure SQL Database and SQL
Server
With PowerShell
Use PowerShell to sync between multiple databases in Azure SQL Database
Use PowerShell to sync between a database in Azure SQL Database and a database in a SQL
Server instance
Data Sync Agent - Data Sync Agent for Azure SQL Data Sync
Best practices - Best practices for Azure SQL Data Sync
Monitor - Monitor SQL Data Sync with Azure Monitor logs
Troubleshoot - Troubleshoot issues with Azure SQL Data Sync
Update the sync schema
With PowerShell - Use PowerShell to update the sync schema in an existing sync group
Upgrade an app to use the latest elastic database
client library
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


New versions of the Elastic Database client library are available through NuGet and the NuGet Package Manager
interface in Visual Studio. Upgrades contain bug fixes and support for new capabilities of the client library.
For the latest version: Go to Microsoft.Azure.SqlDatabase.ElasticScale.Client.
Rebuild your application with the new library, and change your existing Shard Map Manager metadata stored in your databases in Azure SQL Database to support new features.
Performing these steps in order ensures that old versions of the client library are no longer present in your
environment when metadata objects are updated, which means that old-version metadata objects won’t be
created after upgrade.

Upgrade steps
1. Upgrade your applications. In Visual Studio, download and reference the latest client library version into
all of your development projects that use the library; then rebuild and deploy.
In your Visual Studio solution, select Tools --> NuGet Package Manager --> Manage NuGet Packages for Solution.
(Visual Studio 2013) In the left panel, select Updates, and then select the Update button on the package Azure SQL Database Elastic Scale Client Library that appears in the window.
(Visual Studio 2015) Set the Filter box to Upgrade available. Select the package to update, and click the Update button.
(Visual Studio 2017) At the top of the dialog, select Updates. Select the package to update, and click the Update button.
Build and Deploy.
2. Upgrade your scripts. If you are using PowerShell scripts to manage shards, download the new library
version and copy it into the directory from which you execute scripts.
3. Upgrade your split-merge service. If you use the elastic database split-merge tool to reorganize sharded data, download and deploy the latest version of the tool. Detailed upgrade steps for the service can be found here.
4. Upgrade your Shard Map Manager databases. Upgrade the metadata supporting your Shard Maps in Azure SQL Database. There are two ways you can accomplish this, using PowerShell or C#. Both options are shown below.
Option 1: Upgrade metadata using PowerShell
1. Download the latest command-line utility for NuGet from here and save to a folder.
2. Open a Command Prompt, navigate to the same folder, and issue the command:
nuget install Microsoft.Azure.SqlDatabase.ElasticScale.Client
3. Navigate to the subfolder containing the new client DLL version you have just downloaded, for example:
cd .\Microsoft.Azure.SqlDatabase.ElasticScale.Client.1.0.0\lib\net45
4. Download the elastic database client upgrade script from the Script Center, and save it into the same folder
containing the DLL.
5. From that folder, run “PowerShell .\upgrade.ps1” from the command prompt and follow the prompts.
Option 2: Upgrade metadata using C#
Alternatively, create a Visual Studio application that opens your ShardMapManager, iterates over all shards, and
performs the metadata upgrade by calling the methods UpgradeLocalStore and UpgradeGlobalStore as in this
example:

ShardMapManager smm =
ShardMapManagerFactory.GetSqlShardMapManager
(connStr, ShardMapManagerLoadPolicy.Lazy);
smm.UpgradeGlobalStore();

foreach (ShardLocation loc in


smm.GetDistinctShardLocations())
{
smm.UpgradeLocalStore(loc);
}

These techniques for metadata upgrades can be applied multiple times without harm. For example, if an older
client version inadvertently creates a shard after you have already updated, you can run upgrade again across
all shards to ensure that the latest metadata version is present throughout your infrastructure.
Note: New versions of the client library published to date continue to work with prior versions of the Shard Map Manager metadata on Azure SQL Database, and vice versa. However, to take advantage of some of the new features in the latest client, metadata needs to be upgraded. Note that metadata upgrades will not affect any user data or application-specific data, only objects created and used by the Shard Map Manager. Applications continue to operate through the upgrade sequence described above.

Elastic database client version history


For version history, go to Microsoft.Azure.SqlDatabase.ElasticScale.Client

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Get started with Elastic Database Tools
7/12/2022 • 4 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This document introduces you to the developer experience for the elastic database client library by helping you
run a sample app. The sample app creates a simple sharded application and explores key capabilities of the
Elastic Database Tools feature of Azure SQL Database. It focuses on use cases for shard map management, data-
dependent routing, and multi-shard querying. The client library is available for .NET as well as Java.

Elastic Database Tools for Java


Prerequisites
A Java Developer Kit (JDK), version 1.8 or later
Maven
SQL Database or a local SQL Server instance
Download and run the sample app
To build the JAR files and get started with the sample project, do the following:
1. Clone the GitHub repository containing the client library, along with the sample app.
2. Edit the ./sample/src/main/resources/resource.properties file to set the following:
TEST_CONN_USER
TEST_CONN_PASSWORD
TEST_CONN_SERVER_NAME
3. To build the sample project, in the ./sample directory, run the following command:

mvn install

4. To start the sample project, in the ./sample directory, run the following command:

mvn -q exec:java "-


Dexec.mainClass=com.microsoft.azure.elasticdb.samples.elasticscalestarterkit.Program"

5. To learn more about the client library capabilities, experiment with the various options. Feel free to
explore the code to learn about the sample app implementation.
Congratulations! You have successfully built and run your first sharded application by using Elastic Database
Tools on Azure SQL Database. Use Visual Studio or SQL Server Management Studio to connect to your database
and take a quick look at the shards that the sample created. You will notice new sample shard databases and a
shard map manager database that the sample has created.
To add the client library to your own Maven project, add the following dependency in your POM file:

<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>elastic-db-tools</artifactId>
<version>1.0.0</version>
</dependency>

Elastic Database Tools for .NET


Prerequisites
Visual Studio 2012 or later with C#. Download a free version at Visual Studio Downloads.
NuGet 2.7 or later. To get the latest version, see Installing NuGet.
Download and run the sample app
To install the library, go to Microsoft.Azure.SqlDatabase.ElasticScale.Client. The library is installed with the
sample app that's described in the following section.
To download and run the sample, follow these steps:
1. Download the Elastic DB Tools for Azure SQL - Getting Started sample. Unzip the sample to a location that
you choose.
2. To create a project, open the ElasticDatabaseTools.sln solution from the elastic-db-tools-master directory.
3. Set the ElasticScaleStarterKit project as the Startup Project.
4. In the ElasticScaleStarterKit project, open the App.config file. Then follow the instructions in the file to add
your server name and your sign in information (username and password).
5. Build and run the application. When you are prompted, enable Visual Studio to restore the NuGet
packages of the solution. This action downloads the latest version of the elastic database client library
from NuGet.
6. To learn more about the client library capabilities, experiment with the various options. Note the steps
that the application takes in the console output, and feel free to explore the code behind the scenes.

Congratulations! You have successfully built and run your first sharded application by using Elastic Database
Tools on SQL Database. Use Visual Studio or SQL Server Management Studio to connect to your database and
take a quick look at the shards that the sample created. You will notice new sample shard databases and a shard
map manager database that the sample has created.

IMPORTANT
We recommend that you always use the latest version of Management Studio so that you stay synchronized with updates
to Azure and SQL Database. Update SQL Server Management Studio.

Key pieces of the code sample


Managing shards and shard maps: The code illustrates how to work with shards, ranges, and mappings in the ShardManagementUtils.cs file. For more information, see Scale out databases with the shard map manager.
Data-dependent routing: Routing of transactions to the right shard is shown in the DataDependentRoutingSample.cs file. For more information, see Data-dependent routing.
Querying over multiple shards: Querying across shards is illustrated in the MultiShardQuerySample.cs file. For more information, see Multi-shard querying.
Adding empty shards: The iterative adding of new empty shards is performed by the code in the CreateShardSample.cs file. For more information, see Scale out databases with the shard map manager.

Other elastic scale operations


Splitting an existing shard : The capability to split shards is provided by the split-merge tool. For more
information, see Moving data between scaled-out cloud databases.
Merging existing shards : Shard merges are also performed by using the split-merge tool. For more
information, see Moving data between scaled-out cloud databases.
Cost
The Elastic Database Tools library is free. When you use Elastic Database Tools, you incur no additional charges
beyond the cost of your Azure usage.
For example, the sample application creates new databases. The cost of this capability depends on the SQL
Database edition you choose and the Azure usage of your application.
For pricing information, see SQL Database pricing details.

Next steps
For more information about Elastic Database Tools, see the following articles:
Code samples:
Elastic Database Tools (.NET, Java)
Elastic Database Tools for Azure SQL - Entity Framework Integration
Shard Elasticity on Script Center
Blog: Elastic Scale announcement
Discussion forum: Microsoft Q&A question page for Azure SQL Database
To measure performance: Performance counters for shard map manager
Report across scaled-out cloud databases (preview)
7/12/2022 • 4 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


You can create reports from multiple databases from a single connection point using an elastic query. The
databases must be horizontally partitioned (also known as "sharded").
If you have an existing database, see Migrating existing databases to scaled-out databases.
To understand the SQL objects needed to query, see Query across horizontally partitioned databases.

Prerequisites
Download and run the Getting started with Elastic Database tools sample.

Create a shard map manager using the sample app


Here you will create a shard map manager along with several shards, followed by insertion of data into the
shards. If you happen to already have shards setup with sharded data in them, you can skip the following steps
and move to the next section.
1. Build and run the Getting started with Elastic Database tools sample application by following the steps in the article section Download and run the sample app. Once you finish all the steps, you will see the following command prompt:

2. In the command window, type "1" and press Enter. This creates the shard map manager, and adds two shards to the server. Then type "3" and press Enter; repeat the action four times. This inserts sample data rows in your shards.
3. The Azure portal should show three new databases in your server:
At this point, cross-database queries are supported through the Elastic Database client libraries. For
example, use option 4 in the command window. The results from a multi-shard query are always a
UNION ALL of the results from all shards.
In the next section, we create a sample database endpoint that supports richer querying of the data
across shards.

Create an elastic query database


1. Open the Azure portal and log in.
2. Create a new database in Azure SQL Database in the same server as your shard setup. Name the
database "ElasticDBQuery."
NOTE
You can use an existing database. If you do so, it must not be one of the shards that you want to execute your queries on. This database will be used for creating the metadata objects for an elastic database query.

Create database objects


Database-scoped master key and credentials
These are used to connect to the shard map manager and the shards:
1. Open SQL Server Management Studio or SQL Server Data Tools in Visual Studio.
2. Connect to ElasticDBQuery database and execute the following T-SQL commands:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<master_key_password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred


WITH IDENTITY = '<username>',
SECRET = '<password>';

"username" and "password" should be the same as login information used in step 3 of section Download
and run the sample app in the Getting star ted with Elastic Database tools article.
External data sources
To create an external data source, execute the following command on the ElasticDBQuery database:

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH


(TYPE = SHARD_MAP_MANAGER,
LOCATION = '<server_name>.database.windows.net',
DATABASE_NAME = 'ElasticScaleStarterKit_ShardMapManagerDb',
CREDENTIAL = ElasticDBQueryCred,
SHARD_MAP_NAME = 'CustomerIDShardMap'
) ;

"CustomerIDShardMap" is the name of the shard map, if you created the shard map and shard map manager
using the elastic database tools sample. However, if you used your custom setup for this sample, then it should
be the shard map name you chose in your application.
External tables
Create an external table that matches the Customers table on the shards by executing the following command
on ElasticDBQuery database:

CREATE EXTERNAL TABLE [dbo].[Customers]


( [CustomerId] [int] NOT NULL,
[Name] [nvarchar](256) NOT NULL,
[RegionId] [int] NOT NULL)
WITH
( DATA_SOURCE = MyElasticDBQueryDataSrc,
DISTRIBUTION = SHARDED([CustomerId])
) ;

Execute a sample elastic database T-SQL query


Once you have defined your external data source and your external tables, you can use full T-SQL over your external tables.
Execute this query on the ElasticDBQuery database:

select count(CustomerId) from [dbo].[Customers]

You will notice that the query aggregates results from all the shards and gives the following output:

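Because the external table behaves like a local table, richer T-SQL also works. For example, the following aggregation (a sketch that uses only the columns of the sample Customers table defined above) groups the combined rows from all shards by region:

SELECT RegionId, COUNT(*) AS CustomerCount
FROM [dbo].[Customers]
GROUP BY RegionId
ORDER BY RegionId;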
Import elastic database query results to Excel


You can import the results of a query into an Excel file.
1. Launch Excel 2013.
2. Navigate to the Data ribbon.
3. Click From Other Sources and click From SQL Server.

4. In the Data Connection Wizard, type the server name and login credentials. Then click Next.
5. In the dialog box Select the database that contains the data you want, select the ElasticDBQuery database.
6. Select the Customers table in the list view and click Next. Then click Finish.
7. In the Import Data form, under Select how you want to view this data in your workbook, select Table and click OK.
All the rows from Customers table, stored in different shards populate the Excel sheet.
You can now use Excel’s powerful data visualization functions. You can use the connection string with your
server name, database name and credentials to connect your BI and data integration tools to the elastic query
database. Make sure that SQL Server is supported as a data source for your tool. You can refer to the elastic
query database and external tables just like any other SQL Server database and SQL Server tables that you
would connect to with your tool.
Cost
There is no additional charge for using the Elastic Database Query feature.
For pricing information see SQL Database Pricing Details.
Next steps
For an overview of elastic query, see Elastic query overview.
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
Multi-shard querying using elastic database tools
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database

Overview
With the Elastic Database tools, you can create sharded database solutions. Multi-shard querying is used for tasks such as data collection/reporting that require running a query that stretches across several shards. (Contrast this to data-dependent routing, which performs all work on a single shard.)
1. Get a RangeShardMap (Java, .NET) or ListShardMap (Java, .NET) using the TryGetRangeShardMap (Java, .NET), the TryGetListShardMap (Java, .NET), or the GetShardMap (Java, .NET) method. See Constructing a ShardMapManager and Get a RangeShardMap or ListShardMap.
2. Create a MultiShardConnection (Java, .NET) object.
3. Create a MultiShardStatement or MultiShardCommand (Java, .NET).
4. Set the CommandText property (Java, .NET) to a T-SQL command.
5. Execute the command by calling the ExecuteQueryAsync or ExecuteReader (Java, .NET) method.
6. View the results using the MultiShardResultSet or MultiShardDataReader (Java, .NET) class.

Example
The following code illustrates the usage of multi-shard querying using a given ShardMap named
myShardMap.

using (MultiShardConnection conn = new MultiShardConnection(myShardMap.GetShards(), myShardConnectionString))
{
    using (MultiShardCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT c1, c2, c3 FROM ShardedTable";
        cmd.CommandType = CommandType.Text;
        cmd.ExecutionOptions = MultiShardExecutionOptions.IncludeShardNameColumn;
        cmd.ExecutionPolicy = MultiShardExecutionPolicy.PartialResults;

        using (MultiShardDataReader sdr = cmd.ExecuteReader())
        {
            while (sdr.Read())
            {
                var c1Field = sdr.GetString(0);
                var c2Field = sdr.GetFieldValue<int>(1);
                var c3Field = sdr.GetFieldValue<Int64>(2);
            }
        }
    }
}

A key difference is the construction of multi-shard connections. Where SqlConnection operates on an individual database, the MultiShardConnection takes a collection of shards as its input. Populate the
collection of shards from a shard map. The query is then executed on the collection of shards using UNION
ALL semantics to assemble a single overall result. Optionally, the name of the shard where the row originates
from can be added to the output using the ExecutionOptions property on command.
Note the call to myShardMap.GetShards() . This method retrieves all shards from the shard map and provides
an easy way to run a query across all relevant databases. The collection of shards for a multi-shard query can be
refined further by performing a LINQ query over the collection returned from the call to
myShardMap.GetShards() . In combination with the partial results policy, the current capability in multi-shard
querying has been designed to work well for tens up to hundreds of shards.
A limitation with multi-shard querying is currently the lack of validation for shards and shardlets that are
queried. While data-dependent routing verifies that a given shard is part of the shard map at the time of
querying, multi-shard queries do not perform this check. This can lead to multi-shard queries running on
databases that have been removed from the shard map.

Multi-shard queries and split-merge operations


Multi-shard queries do not verify whether shardlets on the queried database are participating in ongoing split-
merge operations. (See Scaling using the Elastic Database split-merge tool.) This can lead to inconsistencies
where rows from the same shardlet show for multiple databases in the same multi-shard query. Be aware of
these limitations and consider draining ongoing split-merge operations and changes to the shard map while
performing multi-shard queries.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Deploy a split-merge service to move data between
sharded databases
7/12/2022 • 12 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The split-merge tool lets you move data between sharded databases. See Moving data between scaled-out cloud
databases

Download the Split-Merge packages


1. Download the latest NuGet version from NuGet.
2. Open a command prompt and navigate to the directory where you downloaded nuget.exe. The download
includes PowerShell commands.
3. Download the latest Split-Merge package into the current directory with the below command:

nuget install Microsoft.Azure.SqlDatabase.ElasticScale.Service.SplitMerge

The files are placed in a directory named Microsoft.Azure.SqlDatabase.ElasticScale.Service.SplitMerge.x.x.xxx.x where x.x.xxx.x reflects the version number. Find the split-merge service files in the content\splitmerge\service sub-directory, and the Split-Merge PowerShell scripts (and required client dlls) in the content\splitmerge\powershell sub-directory.

Prerequisites
1. Create an Azure SQL Database database that will be used as the split-merge status database. Go to the
Azure portal. Create a new SQL Database. Give the database a name and create a new administrator and password. Be sure to record the name and password for later use.
2. Ensure that your server allows Azure Services to connect to it. In the portal, in the Firewall Settings, ensure the Allow access to Azure Services setting is set to On. Click the "save" icon.
3. Create an Azure Storage account for diagnostics output.
4. Create an Azure Cloud Service for your Split-Merge service.

Configure your Split-Merge service


Split-Merge service configuration
1. In the folder into which you downloaded the Split-Merge assemblies, create a copy of the
ServiceConfiguration.Template.cscfg file that shipped alongside SplitMergeService.cspkg and rename it
ServiceConfiguration.cscfg.
2. Open ServiceConfiguration.cscfg in a text editor such as Visual Studio that validates inputs such as the
format of certificate thumbprints.
3. Create a new database or choose an existing database to serve as the status database for Split-Merge
operations and retrieve the connection string of that database.
IMPORTANT
At this time, the status database must use the Latin collation (SQL_Latin1_General_CP1_CI_AS). For more
information, see Windows Collation Name (Transact-SQL).

With Azure SQL Database, the connection string typically is of the form:
Server=<serverName>.database.windows.net; Database=<databaseName>;User ID=<userId>; Password=
<password>; Encrypt=True; Connection Timeout=30

4. Enter this connection string in the .cscfg file in both the SplitMergeWeb and SplitMergeWorker role
sections in the ElasticScaleMetadata setting.
5. For the SplitMergeWorker role, enter a valid connection string to Azure storage for the
WorkerRoleSynchronizationStorageAccountConnectionString setting.
Configure security
For detailed instructions to configure the security of the service, refer to the Split-Merge security configuration.
For the purposes of a simple test deployment for this tutorial, a minimal set of configuration steps will be
performed to get the service up and running. These steps enable only the one machine/account executing them
to communicate with the service.
Create a self-signed certificate
Create a new directory and from this directory execute the following command using a Developer Command
Prompt for Visual Studio window:

makecert ^
-n "CN=*.cloudapp.net" ^
-r -cy end -sky exchange -eku "1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2" ^
-a sha256 -len 2048 ^
-sr currentuser -ss root ^
-sv MyCert.pvk MyCert.cer

You are asked for a password to protect the private key. Enter a strong password and confirm it. You are then prompted to enter the password once more. Click Yes at the end to import it to the Trusted Certification Authorities Root store.
Create a PFX file
Execute the following command from the same window where makecert was executed; use the same password
that you used to create the certificate:

pvk2pfx -pvk MyCert.pvk -spc MyCert.cer -pfx MyCert.pfx -pi <password>

Import the client certificate into the personal store


1. In Windows Explorer, double-click MyCert.pfx.
2. In the Certificate Import Wizard, select Current User and click Next.
3. Confirm the file path and click Next.
4. Type the password, leave Include all extended properties checked and click Next.
5. Leave Automatically select the certificate store[…] checked and click Next.
6. Click Finish and OK.
Upload the PFX file to the cloud service
1. Go to the Azure portal.
2. Select Cloud Services.
3. Select the cloud service you created above for the Split/Merge service.
4. Click Certificates on the top menu.
5. Click Upload in the bottom bar.
6. Select the PFX file and enter the same password as above.
7. Once completed, copy the certificate thumbprint from the new entry in the list.
Update the service configuration file
Paste the certificate thumbprint copied above into the thumbprint/value attribute of these settings. For the
worker role:

<Setting name="DataEncryptionPrimaryCertificateThumbprint" value="" />


<Certificate name="DataEncryptionPrimary" thumbprint="" thumbprintAlgorithm="sha1" />

For the web role:

<Setting name="AdditionalTrustedRootCertificationAuthorities" value="" />


<Setting name="AllowedClientCertificateThumbprints" value="" />
<Setting name="DataEncryptionPrimaryCertificateThumbprint" value="" />
<Certificate name="SSL" thumbprint="" thumbprintAlgorithm="sha1" />
<Certificate name="CA" thumbprint="" thumbprintAlgorithm="sha1" />
<Certificate name="DataEncryptionPrimary" thumbprint="" thumbprintAlgorithm="sha1" />

Note that for production deployments, separate certificates should be used for the CA, for encryption, for the server certificate, and for client certificates. For detailed instructions on this, see Security Configuration.

Deploy your service


1. Go to the Azure portal
2. Select the cloud service that you created earlier.
3. Click Overview.
4. Choose the staging environment, then click Upload .
5. In the dialog box, enter a deployment label. For both 'Package' and 'Configuration', click 'From Local' and
choose the SplitMergeService.cspkg file and your cscfg file that you configured earlier.
6. Ensure that the checkbox labeled Deploy even if one or more roles contain a single instance is
checked.
7. Hit the tick button in the bottom right to begin the deployment. Expect it to take a few minutes to complete.

Troubleshoot the deployment


If your web role fails to come online, it is likely a problem with the security configuration. Check that TLS/SSL is configured as described above.
If your worker role fails to come online, but your web role succeeds, it is most likely a problem connecting to the
status database that you created earlier.
Make sure that the connection string in your cscfg is accurate.
Check that the server and database exist, and that the user id and password are correct.
For Azure SQL Database, the connection string should be of the form:
Server=<serverName>.database.windows.net; Database=<databaseName>;User ID=<user>; Password=<password>;
Encrypt=True; Connection Timeout=30
Ensure that the server name does not begin with https://.
Ensure that your server allows Azure Services to connect to it. To do this, open your database in the portal and ensure that the Allow access to Azure Services setting is set to On.

Test the service deployment


Connect with a web browser
Determine the web endpoint of your Split-Merge service. You can find this in the portal by going to the Overview of your cloud service and looking under Site URL on the right side. Replace http:// with https:// since the default security settings disable the HTTP endpoint. Load the page for this URL into your browser.
Test with PowerShell scripts
The deployment and your environment can be tested by running the included sample PowerShell scripts.

IMPORTANT
The sample scripts run on PowerShell 5.1. They do not currently run on PowerShell 6 or later.

The script files included are:


1. SetupSampleSplitMergeEnvironment.ps1 - sets up a test data tier for Split/Merge (see table below for
detailed description)
2. ExecuteSampleSplitMerge.ps1 - executes test operations on the test data tier (see table below for detailed
description)
3. GetMappings.ps1 - top-level sample script that prints out the current state of the shard mappings.
4. ShardManagement.psm1 - helper script that wraps the ShardManagement API
5. SqlDatabaseHelpers.psm1 - helper script for creating and managing databases in SQL Database

SetupSampleSplitMergeEnvironment.ps1 performs the following steps:

1. Creates a shard map manager database.
2. Creates two shard databases.
3. Creates a shard map for those databases (deletes any existing shard maps on those databases).
4. Creates a small sample table in both the shards, and populates the table in one of the shards.
5. Declares the SchemaInfo for the sharded table.

ExecuteSampleSplitMerge.ps1 performs the following steps:

1. Sends a split request to the Split-Merge Service web frontend, which splits half the data from the first shard to the second shard.
2. Polls the web frontend for the split request status and waits until the request completes.
3. Sends a merge request to the Split-Merge Service web frontend, which moves the data from the second shard back to the first shard.
4. Polls the web frontend for the merge request status and waits until the request completes.

Use PowerShell to verify your deployment


1. Open a new PowerShell window and navigate to the directory where you downloaded the Split-Merge
package, and then navigate into the "powershell" directory.
2. Create a server (or choose an existing server) where the shard map manager and shards will be created.

NOTE
The SetupSampleSplitMergeEnvironment.ps1 script creates all these databases on the same server by default to
keep the script simple. This is not a restriction of the Split-Merge Service itself.

A SQL authentication login with read/write access to the DBs will be needed for the Split-Merge service to
move data and update the shard map. Since the Split-Merge Service runs in the cloud, it does not
currently support Integrated Authentication.
Make sure the server is configured to allow access from the IP address of the machine running these
scripts. You can find this setting under SQL server / Firewalls and virtual networks / Client IP addresses.
3. Execute the SetupSampleSplitMergeEnvironment.ps1 script to create the sample environment.
Running this script will wipe out any existing shard map management data structures on the shard map
manager database and the shards. It may be useful to rerun the script if you wish to re-initialize the shard
map or shards.
Sample command line:

.\SetupSampleSplitMergeEnvironment.ps1
-UserName 'mysqluser' -Password 'MySqlPassw0rd' -ShardMapManagerServerName
'abcdefghij.database.windows.net'

4. Execute the Getmappings.ps1 script to view the mappings that currently exist in the sample environment.

.\GetMappings.ps1
-UserName 'mysqluser' -Password 'MySqlPassw0rd' -ShardMapManagerServerName
'abcdefghij.database.windows.net'

5. Execute the ExecuteSampleSplitMerge.ps1 script to execute a split operation (moving half the data on the
first shard to the second shard) and then a merge operation (moving the data back onto the first shard). If
you configured TLS and left the http endpoint disabled, ensure that you use the https:// endpoint instead.
Sample command line:
.\ExecuteSampleSplitMerge.ps1
-UserName 'mysqluser' -Password 'MySqlPassw0rd'
-ShardMapManagerServerName 'abcdefghij.database.windows.net'
-SplitMergeServiceEndpoint 'https://mysplitmergeservice.cloudapp.net'
-CertificateThumbprint '0123456789abcdef0123456789abcdef01234567'

If you receive the below error, it is most likely a problem with your Web endpoint's certificate. Try
connecting to the Web endpoint with your favorite Web browser and check if there is a certificate error.
Invoke-WebRequest : The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

If it succeeded, the output should look like the below:


> .\ExecuteSampleSplitMerge.ps1 -UserName 'mysqluser' -Password 'MySqlPassw0rd' -
ShardMapManagerServerName 'abcdefghij.database.windows.net' -SplitMergeServiceEndpoint
'http://mysplitmergeservice.cloudapp.net' -CertificateThumbprint
0123456789abcdef0123456789abcdef01234567
> Sending split request
> Began split operation with id dc68dfa0-e22b-4823-886a-9bdc903c80f3
> Polling split-merge request status. Press Ctrl-C to end
> Progress: 0% | Status: Queued | Details: [Informational] Queued request
> Progress: 5% | Status: Starting | Details: [Informational] Starting split-merge state machine for
request.
> Progress: 5% | Status: Starting | Details: [Informational] Performing data consistency checks on
target shards.
> Progress: 20% | Status: CopyingReferenceTables | Details: [Informational] Moving reference tables
from source to target shard.
> Progress: 20% | Status: CopyingReferenceTables | Details: [Informational] Waiting for reference
tables copy completion.
> Progress: 20% | Status: CopyingReferenceTables | Details: [Informational] Moving reference tables
from source to target shard.
> Progress: 44% | Status: CopyingShardedTables | Details: [Informational] Moving key range [100:110)
of Sharded tables
> Progress: 44% | Status: CopyingShardedTables | Details: [Informational] Successfully copied key
range [100:110) for table [dbo].[MyShardedTable]
> ...
> ...
> Progress: 90% | Status: Completing | Details: [Informational] Successfully deleted shardlets in
table [dbo].[MyShardedTable].
> Progress: 90% | Status: Completing | Details: [Informational] Deleting any temp tables that were
created while processing the request.
> Progress: 100% | Status: Succeeded | Details: [Informational] Successfully processed request.
> Sending merge request
> Began merge operation with id 6ffc308f-d006-466b-b24e-857242ec5f66
> Polling request status. Press Ctrl-C to end
> Progress: 0% | Status: Queued | Details: [Informational] Queued request
> Progress: 5% | Status: Starting | Details: [Informational] Starting split-merge state machine for
request.
> Progress: 5% | Status: Starting | Details: [Informational] Performing data consistency checks on
target shards.
> Progress: 20% | Status: CopyingReferenceTables | Details: [Informational] Moving reference tables
from source to target shard.
> Progress: 44% | Status: CopyingShardedTables | Details: [Informational] Moving key range [100:110)
of Sharded tables
> Progress: 44% | Status: CopyingShardedTables | Details: [Informational] Successfully copied key
range [100:110) for table [dbo].[MyShardedTable]
> ...
> ...
> Progress: 90% | Status: Completing | Details: [Informational] Successfully deleted shardlets in
table [dbo].[MyShardedTable].
> Progress: 90% | Status: Completing | Details: [Informational] Deleting any temp tables that were
created while processing the request.
> Progress: 100% | Status: Succeeded | Details: [Informational] Successfully processed request.
>

6. Experiment with other data types! All of these scripts take an optional -ShardKeyType parameter that
allows you to specify the key type. The default is Int32, but you can also specify Int64, Guid, or Binary.

Create requests
The service can be used either by using the web UI or by importing and using the SplitMerge.psm1 PowerShell
module which will submit your requests through the web role.
The service can move data in both sharded tables and reference tables. A sharded table has a sharding key
column and has different row data on each shard. A reference table is not sharded so it contains the same row
data on every shard. Reference tables are useful for data that does not change often and is used to JOIN with
sharded tables in queries.
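As an illustration, here is a minimal sketch of what the two kinds of tables might look like on each shard. MyShardedTable matches the table name that appears in the sample script output; the reference table name and its columns are hypothetical.

-- Sharded table: each shard holds only the rows whose sharding key falls into that shard's range.
CREATE TABLE dbo.MyShardedTable (
    CustomerId int NOT NULL PRIMARY KEY,   -- sharding key column
    Name nvarchar(256) NOT NULL
);

-- Reference table: every shard holds an identical copy of all rows.
CREATE TABLE dbo.Region (
    RegionId int NOT NULL PRIMARY KEY,
    RegionName nvarchar(128) NOT NULL
);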
In order to perform a split-merge operation, you must declare the sharded tables and reference tables that you
want to have moved. This is accomplished with the SchemaInfo API. This API is in the
Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.Schema namespace.
1. For each sharded table, create a ShardedTableInfo object describing the table's parent schema name
(optional, defaults to "dbo"), the table name, and the column name in that table that contains the sharding
key.
2. For each reference table, create a ReferenceTableInfo object describing the table's parent schema name
(optional, defaults to "dbo") and the table name.
3. Add the above TableInfo objects to a new SchemaInfo object.
4. Get a reference to a ShardMapManager object, and call GetSchemaInfoCollection .
5. Add the SchemaInfo to the SchemaInfoCollection , providing the shard map name.
An example of this can be seen in the SetupSampleSplitMergeEnvironment.ps1 script.
The Split-Merge service does not create the target database (or the schema for any tables in the database) for you. These must be created before you send a request to the service.

Troubleshooting
You may see the below message when running the sample PowerShell scripts:
Invoke-WebRequest : The underlying connection was closed: Could not establish trust relationship for the
SSL/TLS secure channel.

This error means that your TLS/SSL certificate is not configured correctly. Follow the instructions in the section 'Connect with a web browser'.
If you cannot submit requests you may see this:
[Exception] System.Data.SqlClient.SqlException (0x80131904): Could not find stored procedure
'dbo.InsertRequest'.

In this case, check your configuration file, in particular the setting for
WorkerRoleSynchronizationStorageAccountConnectionString . This error typically indicates that the
worker role could not successfully initialize the metadata database on first use.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Split-merge security configuration
7/12/2022 • 12 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


To use the Split/Merge service, you must correctly configure security. The service is part of the Elastic Scale
feature of Azure SQL Database. For more information, see Elastic Scale Split and Merge Service Tutorial.

Configuring certificates
Certificates are configured in two ways.
1. To Configure the TLS/SSL Certificate
2. To Configure Client Certificates

To obtain certificates
Certificates can be obtained from public Certificate Authorities (CAs) or from the Windows Certificate Service.
These are the preferred methods to obtain certificates.
If those options are not available, you can generate self-signed cer tificates .

Tools to generate certificates


makecert.exe
pvk2pfx.exe
To run the tools
From a Developer Command Prompt for Visual Studio, see Visual Studio Command Prompt
If installed, go to:

%ProgramFiles(x86)%\Windows Kits\x.y\bin\x86

Get the WDK from Windows 8.1: Download kits and tools

To configure the TLS/SSL certificate


A TLS/SSL certificate is required to encrypt the communication and authenticate the server. Choose the most
applicable of the three scenarios below, and execute all its steps:
Create a new self-signed certificate
1. Create a Self-Signed Certificate
2. Create PFX file for Self-Signed TLS/SSL Certificate
3. Upload TLS/SSL Certificate to Cloud Service
4. Update TLS/SSL Certificate in Service Configuration File
5. Import TLS/SSL Certification Authority
To use an existing certificate from the certificate store
1. Export TLS/SSL Certificate From Certificate Store
2. Upload TLS/SSL Certificate to Cloud Service
3. Update TLS/SSL Certificate in Service Configuration File
To use an existing certificate in a PFX file
1. Upload TLS/SSL Certificate to Cloud Service
2. Update TLS/SSL Certificate in Service Configuration File

To configure client certificates


Client certificates are required in order to authenticate requests to the service. Choose the most applicable of
the three scenarios below, and execute all its steps:
Turn off client certificates
1. Turn Off Client Certificate-Based Authentication
Issue new self-signed client certificates
1. Create a Self-Signed Certification Authority
2. Upload CA Certificate to Cloud Service
3. Update CA Certificate in Service Configuration File
4. Issue Client Certificates
5. Create PFX files for Client Certificates
6. Import Client Certificate
7. Copy Client Certificate Thumbprints
8. Configure Allowed Clients in the Service Configuration File
Use existing client certificates
1. Find CA Public Key
2. Upload CA Certificate to Cloud Service
3. Update CA Certificate in Service Configuration File
4. Copy Client Certificate Thumbprints
5. Configure Allowed Clients in the Service Configuration File
6. Configure Client Certificate Revocation Check

Allowed IP addresses
Access to the service endpoints can be restricted to specific ranges of IP addresses.

To configure encryption for the store


A certificate is required to encrypt the credentials that are stored in the metadata store. Choose the most
applicable of the three scenarios below, and execute all its steps:
Use a new self-signed certificate
1. Create a Self-Signed Certificate
2. Create PFX file for Self-Signed Encryption Certificate
3. Upload Encryption Certificate to Cloud Service
4. Update Encryption Certificate in Service Configuration File
Use an existing certificate from the certificate store
1. Export Encryption Certificate From Certificate Store
2. Upload Encryption Certificate to Cloud Service
3. Update Encryption Certificate in Service Configuration File
Use an existing certificate in a PFX file
1. Upload Encryption Certificate to Cloud Service
2. Update Encryption Certificate in Service Configuration File

The default configuration


The default configuration denies all access to the HTTP endpoint. This is the recommended setting, since the
requests to these endpoints may carry sensitive information like database credentials. The default configuration
allows all access to the HTTPS endpoint. This setting may be restricted further.
Changing the Configuration
The group of access control rules that apply to an endpoint is configured in the <EndpointAcls> section in the service configuration file.

<EndpointAcls>
<EndpointAcl role="SplitMergeWeb" endPoint="HttpIn" accessControl="DenyAll" />
<EndpointAcl role="SplitMergeWeb" endPoint="HttpsIn" accessControl="AllowAll" />
</EndpointAcls>

The rules in an access control group are configured in a <AccessControl name=""> section of the service
configuration file.
The format is explained in Network Access Control Lists documentation. For example, to allow only IPs in the
range 100.100.0.0 to 100.100.255.255 to access the HTTPS endpoint, the rules would look like this:

<AccessControl name="Retricted">
<Rule action="permit" description="Some" order="1" remoteSubnet="100.100.0.0/16"/>
<Rule action="deny" description="None" order="2" remoteSubnet="0.0.0.0/0" />
</AccessControl>
<EndpointAcls>
<EndpointAcl role="SplitMergeWeb" endPoint="HttpsIn" accessControl="Restricted" />
</EndpointAcls>

Denial of service prevention


There are two different mechanisms supported to detect and prevent Denial of Service attacks:
Restrict number of concurrent requests per remote host (off by default)
Restrict rate of access per remote host (on by default)
These are based on the features further documented in Dynamic IP Security in IIS. When changing this configuration, be aware of the following factors:
The behavior of proxies and Network Address Translation devices over the remote host information
Each request to any resource in the web role is considered (for example, loading scripts, images, etc.)

Restricting number of concurrent accesses


The settings that configure this behavior are:

<Setting name="DynamicIpRestrictionDenyByConcurrentRequests" value="false" />


<Setting name="DynamicIpRestrictionMaxConcurrentRequests" value="20" />

Change DynamicIpRestrictionDenyByConcurrentRequests to true to enable this protection.


Restricting rate of access
The settings that configure this behavior are:

<Setting name="DynamicIpRestrictionDenyByRequestRate" value="true" />


<Setting name="DynamicIpRestrictionMaxRequests" value="100" />
<Setting name="DynamicIpRestrictionRequestIntervalInMilliseconds" value="2000" />

Configuring the response to a denied request


The following setting configures the response to a denied request:

<Setting name="DynamicIpRestrictionDenyAction" value="AbortRequest" />

Refer to the documentation for Dynamic IP Security in IIS for other supported values.

Operations for configuring service certificates


This topic is for reference only. Follow the configuration steps outlined in:
Configure the TLS/SSL certificate
Configure client certificates

Create a self-signed certificate


Execute:

makecert ^
-n "CN=myservice.cloudapp.net" ^
-e MM/DD/YYYY ^
-r -cy end -sky exchange -eku "1.3.6.1.5.5.7.3.1" ^
-a sha256 -len 2048 ^
-sv MySSL.pvk MySSL.cer

To customize:
-n with the service URL. Wildcards ("CN=*.cloudapp.net") and alternative names
("CN=myservice1.cloudapp.net, CN=myservice2.cloudapp.net") are supported.
-e with the certificate expiration date.
Create a strong password and specify it when prompted.
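
makecert.exe ships with older Windows SDKs and may not be available on newer development machines. As a hedged alternative, the following minimal C# sketch uses the CertificateRequest API (available in .NET Framework 4.7.2 and later and in .NET Core) to create an equivalent self-signed server certificate and export it directly to a password-protected PFX file, which also covers the separate pvk2pfx step described below. The subject name, validity period, file name, and password are illustrative placeholders.

// Illustrative sketch only: roughly equivalent to the makecert command above, exported straight to a PFX.
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class SelfSignedCertSketch
{
    static void Main()
    {
        using (RSA rsa = new RSACryptoServiceProvider(2048))
        {
            var request = new CertificateRequest(
                "CN=myservice.cloudapp.net", rsa,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // Server authentication EKU, matching the -eku value in the makecert example.
            request.CertificateExtensions.Add(new X509EnhancedKeyUsageExtension(
                new OidCollection { new Oid("1.3.6.1.5.5.7.3.1") }, critical: false));

            using (X509Certificate2 cert = request.CreateSelfSigned(
                DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddYears(1)))
            {
                // Choose a strong password; it protects the private key in the PFX.
                File.WriteAllBytes("MySSL.pfx", cert.Export(X509ContentType.Pfx, "<strong password>"));
            }
        }
    }
}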

Create PFX file for self-signed TLS/SSL certificate


Execute:

pvk2pfx -pvk MySSL.pvk -spc MySSL.cer

Enter password and then export certificate with these options:


Yes, export the private key
Export all extended properties

Export TLS/SSL certificate from certificate store


Find certificate
Click Actions -> All tasks -> Export…
Export certificate into a .PFX file with these options:
Yes, export the private key
Include all certificates in the certification path if possible
Export all extended properties

Upload TLS/SSL certificate to cloud service


Upload certificate with the existing or generated .PFX file with the TLS key pair:
Enter the password protecting the private key information

Update TLS/SSL certificate in service configuration file


Update the thumbprint value of the following setting in the service configuration file with the thumbprint of the
certificate uploaded to the cloud service:

<Certificate name="SSL" thumbprint="" thumbprintAlgorithm="sha1" />

Import TLS/SSL certification authority


Follow these steps on every account/machine that will communicate with the service:
Double-click the .CER file in Windows Explorer
In the Certificate dialog, click Install Certificate…
Import certificate into the Trusted Root Certification Authorities store
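
As an alternative to the manual import steps, the following minimal C# sketch (assuming the file is named MyCA.cer) adds the certificate to the Trusted Root Certification Authorities store programmatically; run it elevated when targeting the local machine store.

// Illustrative sketch: import MyCA.cer into the Trusted Root Certification Authorities store.
using System.Security.Cryptography.X509Certificates;

class ImportCaCertificate
{
    static void Main()
    {
        var store = new X509Store(StoreName.Root, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadWrite);
        store.Add(new X509Certificate2("MyCA.cer"));
        store.Close();
    }
}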

Turn off client certificate-based authentication


Client certificate-based authentication is the only authentication mechanism supported; disabling it allows public
access to the service endpoints unless other restrictions are in place (for example, Microsoft Azure Virtual Network).
Change these settings to false in the service configuration file to turn off the feature:

<Setting name="SetupWebAppForClientCertificates" value="false" />


<Setting name="SetupWebserverForClientCertificates" value="false" />

Then, copy the same thumbprint as the TLS/SSL certificate in the CA certificate setting:

<Certificate name="CA" thumbprint="" thumbprintAlgorithm="sha1" />

Create a self-signed certification authority


Execute the following command to create a self-signed certificate to act as a Certification Authority:

makecert ^
-n "CN=MyCA" ^
-e MM/DD/YYYY ^
-r -cy authority -h 1 ^
-a sha256 -len 2048 ^
-sr localmachine -ss my ^
MyCA.cer

To customize it:
-e with the certificate expiration date

Find CA public key


All client certificates must have been issued by a Certification Authority trusted by the service. Find the public
key to the Certification Authority that issued the client certificates that are going to be used for authentication in
order to upload it to the cloud service.
If the file with the public key is not available, export it from the certificate store:
Find certificate
Search for a client certificate issued by the same Certification Authority
Double-click the certificate.
Select the Certification Path tab in the Certificate dialog.
Double-click the CA entry in the path.
Take notes of the certificate properties.
Close the Certificate dialog.
Find certificate
Search for the CA noted above.
Click Actions -> All tasks -> Export…
Export certificate into a .CER with these options:
No, do not export the private key
Include all certificates in the certification path if possible.
Export all extended properties.

Upload CA certificate to cloud service


Upload certificate with the existing or generated .CER file with the CA public key.

Update CA certificate in service configuration file


Update the thumbprint value of the following setting in the service configuration file with the thumbprint of the
certificate uploaded to the cloud service:

<Certificate name="CA" thumbprint="" thumbprintAlgorithm="sha1" />

Update the value of the following setting with the same thumbprint:

<Setting name="AdditionalTrustedRootCertificationAuthorities" value="" />

Issue client certificates


Each individual authorized to access the service should have a client certificate issued for their exclusive use and
should choose their own strong password to protect its private key.
The following steps must be executed in the same machine where the self-signed CA certificate was generated
and stored:
makecert ^
-n "CN=My ID" ^
-e MM/DD/YYYY ^
-cy end -sky exchange -eku "1.3.6.1.5.5.7.3.2" ^
-a sha256 -len 2048 ^
-in "MyCA" -ir localmachine -is my ^
-sv MyID.pvk MyID.cer

Customizing:
-n with an ID for the client that will be authenticated with this certificate
-e with the certificate expiration date
MyID.pvk and MyID.cer with unique filenames for this client certificate
This command will prompt for a password to be created and then used once. Use a strong password.

Create PFX files for client certificates


For each generated client certificate, execute:

pvk2pfx -pvk MyID.pvk -spc MyID.cer

Customizing:

MyID.pvk and MyID.cer with the filename for the client certificate

Enter password and then export certificate with these options:


Yes, export the private key
Export all extended properties
The individual to whom this certificate is being issued should choose the export password

Import client certificate


Each individual for whom a client certificate has been issued should import the key pair in the machines they
will use to communicate with the service:
Double-click the .PFX file in Windows Explorer
Import certificate into the Personal store with at least this option:
Include all extended properties checked

Copy client certificate thumbprints


Each individual for whom a client certificate has been issued must follow these steps in order to obtain the
thumbprint of their certificate, which will be added to the service configuration file:
Run certmgr.exe
Select the Personal tab
Double-click the client certificate to be used for authentication
In the Certificate dialog that opens, select the Details tab
Make sure Show is displaying All
Select the field named Thumbprint in the list
Copy the value of the thumbprint
Delete non-visible Unicode characters in front of the first digit
Delete all spaces
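
Equivalently, a minimal C# sketch like the following lists the certificates in the current user's Personal store together with their thumbprints; the Thumbprint property already contains no spaces or hidden characters, so its value can be copied into the service configuration file as-is. The store location is an assumption based on the steps above.

// Illustrative sketch: list subjects and thumbprints of certificates in the Personal store.
using System;
using System.Security.Cryptography.X509Certificates;

class ListPersonalThumbprints
{
    static void Main()
    {
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        foreach (X509Certificate2 cert in store.Certificates)
        {
            Console.WriteLine(cert.Subject + ": " + cert.Thumbprint);
        }
        store.Close();
    }
}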

Configure Allowed clients in the service configuration file


Update the value of the following setting in the service configuration file with a comma-separated list of the
thumbprints of the client certificates allowed access to the service:

<Setting name="AllowedClientCertificateThumbprints" value="" />

Configure client certificate revocation check


The default setting does not check with the Certification Authority for client certificate revocation status. To turn
on the checks, if the Certification Authority that issued the client certificates supports such checks, change the
following setting with one of the values defined in the X509RevocationMode Enumeration:

<Setting name="ClientCertificateRevocationCheck" value="NoCheck" />

Create PFX file for self-signed encryption certificates


For an encryption certificate, execute:

pvk2pfx -pvk MyID.pvk -spc MyID.cer

Customizing:

MyID.pvk and MyID.cer with the filename for the encryption certificate

Enter password and then export certificate with these options:


Yes, export the private key
Export all extended properties
You will need the password when uploading the certificate to the cloud service.

Export encryption certificate from certificate store


Find certificate
Click Actions -> All tasks -> Export…
Export certificate into a .PFX file with these options:
Yes, export the private key
Include all certificates in the certification path if possible
Export all extended properties

Upload encryption certificate to cloud service


Upload certificate with the existing or generated .PFX file with the encryption key pair:
Enter the password protecting the private key information

Update encryption certificate in service configuration file


Update the thumbprint value of the following settings in the service configuration file with the thumbprint of
the certificate uploaded to the cloud service:

<Certificate name="DataEncryptionPrimary" thumbprint="" thumbprintAlgorithm="sha1" />

Common certificate operations


Configure the TLS/SSL certificate
Configure client certificates

Find certificate
Follow these steps:
1. Run mmc.exe.
2. File -> Add/Remove Snap-in…
3. Select Certificates.
4. Click Add.
5. Choose the certificate store location.
6. Click Finish.
7. Click OK.
8. Expand Certificates.
9. Expand the certificate store node.
10. Expand the Certificates child node.
11. Select a certificate in the list.

Export certificate
In the Certificate Export Wizard:
1. Click Next.
2. Select Yes, export the private key.
3. Click Next.
4. Select the desired output file format.
5. Check the desired options.
6. Check Password.
7. Enter a strong password and confirm it.
8. Click Next.
9. Type or browse a filename where to store the certificate (use a .PFX extension).
10. Click Next.
11. Click Finish.
12. Click OK.

Import certificate
In the Certificate Import Wizard:
1. Select the store location.
Select Current User if only processes running under the current user will access the service
Select Local Machine if other processes on this computer will access the service
2. Click Next.
3. If importing from a file, confirm the file path.
4. If importing a .PFX file:
a. Enter the password protecting the private key
b. Select import options
5. Select Place all certificates in the following store
6. Click Browse.
7. Select the desired store.
8. Click Finish.
If the Trusted Root Certification Authority store was chosen, click Yes.
9. Click OK on all dialog windows.

Upload certificate
In the Azure portal
1. Select Cloud Services.
2. Select the cloud service.
3. On the top menu, click Certificates.
4. On the bottom bar, click Upload.
5. Select the certificate file.
6. If it is a .PFX file, enter the password for the private key.
7. Once completed, copy the certificate thumbprint from the new entry in the list.

Other security considerations


The TLS settings described in this document encrypt communication between the service and its clients when
the HTTPS endpoint is used. This is important since credentials for database access and potentially other
sensitive information are contained in the communication. Note, however, that the service persists internal
status, including credentials, in its internal tables in the database in Azure SQL Database that you have provided
for metadata storage in your Microsoft Azure subscription. That database was defined as part of the following
setting in your service configuration file (.CSCFG file):

<Setting name="ElasticScaleMetadata" value="Server=…" />

Credentials stored in this database are encrypted. However, as a best practice, ensure that both web and worker
roles of your service deployments are kept up to date and secure as they both have access to the metadata
database and the certificate used for encryption and decryption of stored credentials.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Adding a shard using Elastic Database tools
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database

To add a shard for a new range or key


Applications often need to add new shards to handle data that is expected from new keys or key ranges, for a
shard map that already exists. For example, an application sharded by Tenant ID may need to provision a new
shard for a new tenant, or data sharded monthly may need a new shard provisioned before the start of each
new month.
If the new range of key values is not already part of an existing mapping, it is simple to add the new shard and
associate the new key or range to that shard.
Example: adding a shard and its range to an existing shard map
This sample uses the TryGetShard (Java, .NET), CreateShard (Java, .NET), and CreateRangeMapping (Java, .NET)
methods, and creates an instance of the ShardLocation (Java, .NET) class. In the sample below, a database named
sample_shard_2 and all necessary schema objects inside of it have been created to hold range [300, 400).

// sm is a RangeShardMap object.
// Add a new shard to hold the range being added.
Shard shard2 = null;

if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"), out shard2))
{
    shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2"));
}

// Create the mapping and associate it with the new shard
sm.CreateRangeMapping(new RangeMappingCreationInfo<long>
    (new Range<long>(300, 400), shard2, MappingStatus.Online));

For the .NET version, you can also use PowerShell as an alternative to create a new Shard Map Manager. An
example is available here.

To add a shard for an empty part of an existing range


In some circumstances, you may have already mapped a range to a shard and partially filled it with data, but
you now want upcoming data to be directed to a different shard. For example, you shard by day range and have
already allocated 50 days to a shard, but on day 24, you want future data to land in a different shard. The elastic
database split-merge tool can perform this operation, but if data movement is not necessary (for example, data
for the range of days [25, 50), that is, day 25 inclusive to 50 exclusive, does not yet exist) you can perform this
entirely using the Shard Map Management APIs directly.
Example: splitting a range and assigning the empty portion to a newly added shard
A database named “sample_shard_2” and all necessary schema objects inside of it have been created.
// sm is a RangeShardMap object.
// Add a new shard to hold the range we will move
Shard shard2 = null;

if (!sm.TryGetShard(new ShardLocation(shardServer, "sample_shard_2"), out shard2))
{
    shard2 = sm.CreateShard(new ShardLocation(shardServer, "sample_shard_2"));
}

// Split the Range holding Key 25
sm.SplitMapping(sm.GetMappingForKey(25), 25);

// Map new range holding [25-50) to different shard:
// first take existing mapping offline
sm.MarkMappingOffline(sm.GetMappingForKey(25));

// now map while offline to a different shard and take online
RangeMappingUpdate upd = new RangeMappingUpdate();
upd.Shard = shard2;
sm.MarkMappingOnline(sm.UpdateMapping(sm.GetMappingForKey(25), upd));

Important: Use this technique only if you are certain that the range for the updated mapping is empty. The
preceding methods do not check data for the range being moved, so it is best to include checks in your code. If
rows exist in the range being moved, the actual data distribution will not match the updated shard map. In these
cases, use the split-merge tool to perform the operation instead.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Using the RecoveryManager class to fix shard map
problems
7/12/2022 • 8 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The RecoveryManager class provides ADO.NET applications the ability to easily detect and correct any
inconsistencies between the global shard map (GSM) and the local shard map (LSM) in a sharded database
environment.
The GSM and LSM track the mapping of each database in a sharded environment. Occasionally, a break occurs
between the GSM and the LSM. In that case, use the RecoveryManager class to detect and repair the break.
The RecoveryManager class is part of the Elastic Database client library.

For term definitions, see Elastic Database tools glossary. To understand how the ShardMapManager is used to
manage data in a sharded solution, see Shard map management.

Why use the recovery manager


In a sharded database environment, there is one tenant per database, and many databases per server. There can
also be many servers in the environment. Each database is mapped in the shard map, so calls can be routed to
the correct server and database. Databases are tracked according to a sharding key, and each shard is
assigned a range of key values. For example, a sharding key may represent the customer names from "D" to
"F." The mapping of all shards (also known as databases) and their mapping ranges are contained in the global
shard map (GSM). Each database also contains a map of the ranges contained on the shard that is known as
the local shard map (LSM). When an app connects to a shard, the mapping is cached with the app for quick
retrieval. The LSM is used to validate cached data.
The GSM and LSM may become out of sync for the following reasons:
1. The deletion of a shard whose range is believed to no longer be in use, or renaming of a shard. Deleting a
shard results in an orphaned shard mapping . Similarly, a renamed database can cause an orphaned shard
mapping. Depending on the intent of the change, the shard may need to be removed or the shard location
needs to be updated. To recover a deleted database, see Restore a deleted database.
2. A geo-failover event occurs. To continue, one must update the server name, and database name of shard
map manager in the application and then update the shard-mapping details for all shards in a shard map. If
there is a geo-failover, such recovery logic should be automated within the failover workflow. Automating
recovery actions enables a frictionless manageability for geo-enabled databases and avoids manual human
actions. To learn about options to recover a database if there is a data center outage, see Business Continuity
and Disaster Recovery.
3. Either a shard or the ShardMapManager database is restored to an earlier point-in time. To learn about point
in time recovery using backups, see Recovery using backups.
For more information about Azure SQL Database Elastic Database tools, geo-replication and Restore, see the
following:
Overview: Cloud business continuity and database disaster recovery with SQL Database
Get started with elastic database tools
ShardMap Management

Retrieving RecoveryManager from a ShardMapManager


The first step is to create a RecoveryManager instance. The GetRecoveryManager method returns the recovery
manager for the current ShardMapManager instance. To address any inconsistencies in the shard map, you must
first retrieve the RecoveryManager for the particular shard map.

ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
    smmConnectionString, ShardMapManagerLoadPolicy.Lazy);
RecoveryManager rm = smm.GetRecoveryManager();

In this example, the RecoveryManager is initialized from the ShardMapManager. The ShardMapManager
containing a ShardMap is also already initialized.
Since this application code manipulates the shard map itself, the credentials used in the factory method (in the
preceding example, smmConnectionString) should be credentials that have read-write permissions on the GSM
database referenced by the connection string. These credentials are typically different from credentials used to
open connections for data-dependent routing. For more information, see Using credentials in the elastic
database client.

Removing a shard from the ShardMap after a shard is deleted


The DetachShard method detaches the given shard from the shard map and deletes mappings associated with
the shard.
The location parameter is the shard location, specifically server name and database name, of the shard being
detached.
The shardMapName parameter is the shard map name. This is only required when multiple shard maps are
managed by the same shard map manager. Optional.
IMPORTANT
Use this technique only if you are certain that the range for the updated mapping is empty. The methods above do not
check data for the range being moved, so it is best to include checks in your code.

This example removes shards from the shard map.

rm.DetachShard(s.Location, customerMap);

The shard map reflects the shard location in the GSM before the deletion of the shard. Because the shard was
deleted, it is assumed this was intentional, and the sharding key range is no longer in use. If not, you can execute
a point-in-time restore to recover the shard from an earlier point in time. (In that case, review the following
section to detect shard inconsistencies.) To recover, see Point in time recovery.
Since it is assumed the database deletion was intentional, the final administrative cleanup action is to delete the
entry to the shard in the shard map manager. This prevents the application from inadvertently writing
information to a range that is not expected.

To detect mapping differences


The DetectMappingDifferences method selects and returns one of the shard maps (either local or global) as the
source of truth and reconciles mappings on both shard maps (GSM and LSM).

rm.DetectMappingDifferences(location, shardMapName);

The location specifies the server name and database name.


The shardMapName parameter is the shard map name. This is only required if multiple shard maps are
managed by the same shard map manager. Optional.

To resolve mapping differences


The ResolveMappingDifferences method selects one of the shard maps (either local or global) as the source of
truth and reconciles mappings on both shard maps (GSM and LSM).

ResolveMappingDifferences (RecoveryToken, MappingDifferenceResolution.KeepShardMapping);

The RecoveryToken parameter enumerates the differences in the mappings between the GSM and the LSM
for the specific shard.
The MappingDifferenceResolution enumeration is used to indicate the method for resolving the difference
between the shard mappings.
MappingDifferenceResolution.KeepShardMapping is recommended when the LSM contains the
accurate mapping and therefore the mapping in the shard should be used. This is typically the case if there is
a failover: the shard now resides on a new server. Since the shard must first be removed from the GSM
(using the RecoveryManager.DetachShard method), a mapping no longer exists on the GSM. Therefore, the
LSM must be used to re-establish the shard mapping.

Attach a shard to the ShardMap after a shard is restored


The AttachShard method attaches the given shard to the shard map. It then detects any shard map
inconsistencies and updates the mappings to match the shard at the point of the shard restoration. It is assumed
that the database is also renamed to reflect the original database name (before the shard was restored), since
the point-in time restoration defaults to a new database appended with the timestamp.

rm.AttachShard(location, shardMapName)

The location parameter is the server name and database name, of the shard being attached.
The shardMapName parameter is the shard map name. This is only required when multiple shard maps are
managed by the same shard map manager. Optional.
This example adds a shard to the shard map that has been recently restored from an earlier point-in time. Since
the shard (namely the mapping for the shard in the LSM) has been restored, it is potentially inconsistent with
the shard entry in the GSM. Outside of this example code, the shard was restored and renamed to the original
name of the database. Since it was restored, it is assumed the mapping in the LSM is the trusted mapping.

rm.AttachShard(s.Location, customerMap);
var gs = rm.DetectMappingDifferences(s.Location);
foreach (RecoveryToken g in gs)
{
rm.ResolveMappingDifferences(g, MappingDifferenceResolution.KeepShardMapping);
}

Updating shard locations after a geo-failover (restore) of the shards


If there is a geo-failover, the secondary database is made write accessible and becomes the new primary
database. The name of the server, and potentially the database (depending on your configuration), may be
different from the original primary. Therefore the mapping entries for the shard in the GSM and LSM must be
fixed. Similarly, if the database is restored to a different name or location, or to an earlier point in time, this
might cause inconsistencies in the shard maps. The Shard Map Manager handles the distribution of open
connections to the correct database. Distribution is based on the data in the shard map and the value of the
sharding key that is the target of the application's request. After a geo-failover, this information must be updated
with the accurate server name, database name and shard mapping of the recovered database.

Best practices
Geo-failover and recovery are operations typically managed by a cloud administrator of the application
intentionally utilizing Azure SQL Database business continuity features. Business continuity planning requires
processes, procedures, and measures to ensure that business operations can continue without interruption. The
methods available as part of the RecoveryManager class should be used within this work flow to ensure the
GSM and LSM are kept up-to-date based on the recovery action taken. There are five basic steps to properly
ensuring the GSM and LSM reflect the accurate information after a failover event. The application code to
execute these steps can be integrated into existing tools and workflow.
1. Retrieve the RecoveryManager from the ShardMapManager.
2. Detach the old shard from the shard map.
3. Attach the new shard to the shard map, including the new shard location.
4. Detect inconsistencies in the mapping between the GSM and LSM.
5. Resolve differences between the GSM and the LSM, trusting the LSM.
This example performs the following steps:
1. Removes shards from the Shard Map that reflect shard locations before the failover event.
2. Attaches shards to the Shard Map reflecting the new shard locations (the parameter
"Configuration.SecondaryServer" is the new server name but the same database name).
3. Retrieves the recovery tokens by detecting mapping differences between the GSM and the LSM for each
shard.
4. Resolves the inconsistencies by trusting the mapping from the LSM of each shard.

var shards = smm.GetShards();

foreach (Shard s in shards)
{
    if (s.Location.Server == Configuration.PrimaryServer)
    {
        ShardLocation slNew = new ShardLocation(Configuration.SecondaryServer, s.Location.Database);
        rm.DetachShard(s.Location);
        rm.AttachShard(slNew);

        var gs = rm.DetectMappingDifferences(slNew);
        foreach (RecoveryToken g in gs)
        {
            rm.ResolveMappingDifferences(g, MappingDifferenceResolution.KeepShardMapping);
        }
    }
}

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Migrate existing databases to scale out
7/12/2022 • 4 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Easily manage your existing scaled-out sharded databases using tools (such as the Elastic Database client
library). First convert an existing set of databases to use the shard map manager.

Overview
To migrate an existing sharded database:
1. Prepare the shard map manager database.
2. Create the shard map.
3. Prepare the individual shards.
4. Add mappings to the shard map.
These techniques can be implemented using either the .NET Framework client library, or the PowerShell scripts
found at Azure SQL Database - Elastic Database tools scripts. The examples here use the PowerShell scripts.
For more information about the ShardMapManager, see Shard map management. For an overview of the Elastic
Database tools, see Elastic Database features overview.

Prepare the shard map manager database


The shard map manager is a special database that contains the data to manage scaled-out databases. You can
use an existing database, or create a new database. A database acting as shard map manager should not be the
same database as a shard. The PowerShell script does not create the database for you.

Step 1: Create a shard map manager


# Create a shard map manager
New-ShardMapManager -UserName '<user_name>' -Password '<password>' -SqlServerName '<server_name>' -SqlDatabaseName '<smm_db_name>'
# <server_name> and <smm_db_name> are the server name and database name
# for the new or existing database that should be used for storing
# tenant-database mapping information.

To retrieve the shard map manager


After creation, you can retrieve the shard map manager with this cmdlet. This step is needed every time you
need to use the ShardMapManager object.

# Try to get a reference to the Shard Map Manager
$ShardMapManager = Get-ShardMapManager -UserName '<user_name>' -Password '<password>' -SqlServerName '<server_name>' -SqlDatabaseName '<smm_db_name>'

Step 2: Create the shard map


Select the type of shard map to create. The choice depends on the database architecture:
1. Single tenant per database (For terms, see the glossary.)
2. Multiple tenants per database (two types):
a. List mapping
b. Range mapping
For a single-tenant model, create a list mapping shard map. The single-tenant model assigns one database per
tenant. This is an effective model for SaaS developers as it simplifies management.

The multi-tenant model assigns several tenants to an individual database (and you can distribute groups of
tenants across multiple databases). Use this model when you expect each tenant to have small data needs. In
this model, assign a range of tenants to a database using range mapping .

Or you can implement a multi-tenant database model using a list mapping to assign multiple tenants to an
individual database. For example, DB1 is used to store information about tenant ID 1 and 5, and DB2 stores data
for tenant 7 and tenant 10.
Based on your choice, choose one of these options:
Option 1: Create a shard map for a list mapping
Create a shard map using the ShardMapManager object.

# $ShardMapManager is the shard map manager object
$ShardMap = New-ListShardMap -KeyType $([int]) -ListShardMapName 'ListShardMap' -ShardMapManager $ShardMapManager

Option 2: Create a shard map for a range mapping


To utilize this mapping pattern, tenant ID values need to be continuous ranges, and it is acceptable to have gaps
in the ranges by skipping a range when creating the databases.

# $ShardMapManager is the shard map manager object
# 'RangeShardMap' is the unique identifier for the range shard map.
$ShardMap = New-RangeShardMap -KeyType $([int]) -RangeShardMapName 'RangeShardMap' -ShardMapManager $ShardMapManager

Option 3: List mappings on an individual database


Setting up this pattern also requires creation of a list map as shown in step 2, option 1.

Step 3: Prepare individual shards


Add each shard (database) to the shard map manager. This prepares the individual databases for storing
mapping information. Execute this method on each shard.

# The $ShardMap is the shard map created in step 2.
Add-Shard -ShardMap $ShardMap -SqlServerName '<shard_server_name>' -SqlDatabaseName '<shard_database_name>'

Step 4: Add mappings


The addition of mappings depends on the kind of shard map you created. If you created a list map, you add list
mappings. If you created a range map, you add range mappings.
Option 1: Map the data for a list mapping
Map the data by adding a list mapping for each tenant.
# Create the mappings and associate them with the new shards
Add-ListMapping -KeyType $([int]) -ListPoint '<tenant_id>' -ListShardMap $ShardMap -SqlServerName '<shard_server_name>' -SqlDatabaseName '<shard_database_name>'

Option 2: Map the data for a range mapping


Add the range mappings for all the tenant ID range - database associations:

# Create the mappings and associate them with the new shards
Add-RangeMapping -KeyType $([int]) -RangeHigh '5' -RangeLow '1' -RangeShardMap $ShardMap -SqlServerName '<shard_server_name>' -SqlDatabaseName '<shard_database_name>'

Step 4 option 3: Map the data for multiple tenants on an individual database
For each tenant, run the Add-ListMapping (option 1).
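
As noted earlier, these techniques can also be implemented with the .NET client library instead of the PowerShell scripts. The following minimal C# sketch shows the equivalent of running Add-ListMapping once per tenant; the tenant IDs, server name, and database names are illustrative, and listShardMap is assumed to be a ListShardMap<int> already retrieved from the shard map manager.

// Illustrative .NET equivalent: map several tenant IDs to shards already registered in the shard map.
Shard shardForDb1 = listShardMap.GetShard(new ShardLocation("<shard_server_name>", "DB1"));
Shard shardForDb2 = listShardMap.GetShard(new ShardLocation("<shard_server_name>", "DB2"));

foreach (int tenantId in new[] { 1, 5 })
{
    listShardMap.CreatePointMapping(tenantId, shardForDb1);
}
foreach (int tenantId in new[] { 7, 10 })
{
    listShardMap.CreatePointMapping(tenantId, shardForDb2);
}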

Checking the mappings


Information about the existing shards and the mappings associated with them can be queried using following
commands:

# List the shards and mappings
Get-Shards -ShardMap $ShardMap
Get-Mappings -ShardMap $ShardMap

Summary
Once you have completed the setup, you can begin to use the Elastic Database client library. You can also use
data-dependent routing and multi-shard query.

Next steps
Get the PowerShell scripts from Azure Elastic Database tools scripts.
The Elastic database tools client library is available on GitHub: Azure/elastic-db-tools.
Use the split-merge tool to move data to or from a multi-tenant model to a single tenant model. See Split merge
tool.

Additional resources
For information on common data architecture patterns of multi-tenant software-as-a-service (SaaS) database
applications, see Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database.

Questions and feature requests


For questions, use the Microsoft Q&A question page for SQL Database and for feature requests, add them to the
SQL Database feedback forum.
Create performance counters to track performance
of shard map manager
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Performance counters are used to track the performance of data dependent routing operations. These counters
are accessible in the Performance Monitor, under the "Elastic Database: Shard Management" category.
You can capture the performance of a shard map manager, especially when using data dependent routing.
Counters are created with methods of the Microsoft.Azure.SqlDatabase.ElasticScale.Client class.
For the latest version: Go to Microsoft.Azure.SqlDatabase.ElasticScale.Client. See also Upgrade an app to use
the latest elastic database client library.

Prerequisites
To create the performance category and counters, the user must be a part of the local Administrators
group on the machine hosting the application.
To create a performance counter instance and update the counters, the user must be a member of either the
Administrators or Performance Monitor Users group.

Create performance category and counters


To create the counters, call the CreatePerformanceCategoryAndCounters method of the
ShardMapManagementFactory class. Only an administrator can execute the method:
ShardMapManagerFactory.CreatePerformanceCategoryAndCounters()

You can also use this PowerShell script to execute the method. The method creates the following performance
counters:
Cached mappings: Number of mappings cached for the shard map.
DDR operations/sec: Rate of data dependent routing operations for the shard map. This counter is updated
when a call to OpenConnectionForKey() results in a successful connection to the destination shard.
Mapping lookup cache hits/sec: Rate of successful cache lookup operations for mappings in the shard
map.
Mapping lookup cache misses/sec: Rate of failed cache lookup operations for mappings in the shard
map.
Mappings added or updated in cache/sec: Rate at which mappings are being added or updated in cache
for the shard map.
Mappings removed from cache/sec: Rate at which mappings are being removed from cache for the
shard map.
Performance counters are created for each cached shard map per process.

Notes
The following events trigger the creation of the performance counters:
Initialization of the ShardMapManager with eager loading, if the ShardMapManager contains any shard
maps. These include the GetSqlShardMapManager and the TryGetSqlShardMapManager methods.
Successful lookup of a shard map (using GetShardMap(), GetListShardMap() or GetRangeShardMap()).
Successful creation of shard map using CreateShardMap().
The performance counters will be updated by all cache operations performed on the shard map and mappings.
Successful removal of the shard map using DeleteShardMap() results in deletion of the performance counters
instance.
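
A minimal C# sketch of the typical sequence follows: the performance category is created once by an administrator, and the shard map manager is then initialized with eager loading so that counter instances are created for the cached shard maps. The connection string is a placeholder for the shard map manager database.

// Illustrative sketch: create the category once (requires local administrator rights),
// then initialize the ShardMapManager with eager loading to create counter instances.
ShardMapManagerFactory.CreatePerformanceCategoryAndCounters();

ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
    smmConnectionString, ShardMapManagerLoadPolicy.Eager);

// Subsequent data-dependent routing calls such as OpenConnectionForKey() update counters
// like "DDR operations/sec" for the shard maps cached in this process.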

Best practices
Creation of the performance category and counters should be performed only once before the creation of
ShardMapManager object. Every execution of the command CreatePerformanceCategoryAndCounters()
clears the previous counters (losing data reported by all instances) and creates new ones.
Performance counter instances are created per process. Any application crash or removal of a shard map
from the cache will result in deletion of the performance counters instances.
See also
Elastic Database features overview

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Elastic Database client library with Entity Framework
7/12/2022 • 16 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This document shows the changes in an Entity Framework application that are needed to integrate with the
Elastic Database tools. The focus is on composing shard map management and data-dependent routing with the
Entity Framework Code First approach. The Code First - New Database tutorial for EF serves as the running
example throughout this document. The sample code accompanying this document is part of elastic database
tools' set of samples in the Visual Studio Code Samples.

Downloading and Running the Sample Code


To download the code for this article:
Visual Studio 2012 or later is required.
Download the Elastic DB Tools for Azure SQL - Entity Framework Integration sample. Unzip the sample to a
location of your choosing.
Start Visual Studio.
In Visual Studio, select File -> Open Project/Solution.
In the Open Project dialog, navigate to the sample you downloaded and select
EntityFrameworkCodeFirst.sln to open the sample.
To run the sample, you need to create three empty databases in Azure SQL Database:
Shard Map Manager database
Shard 1 database
Shard 2 database
Once you have created these databases, fill in the place holders in Program.cs with your server name, the
database names, and your credentials to connect to the databases. Build the solution in Visual Studio. Visual
Studio downloads the required NuGet packages for the elastic database client library, Entity Framework, and
Transient Fault handling as part of the build process. Make sure that restoring NuGet packages is enabled for
your solution. You can enable this setting by right-clicking on the solution file in the Visual Studio Solution
Explorer.

Entity Framework workflows


Entity Framework developers rely on one of the following four workflows to build applications and to ensure
persistence for application objects:
Code First (New Database) : The EF developer creates the model in the application code and then EF
generates the database from it.
Code First (Existing Database) : The developer lets EF generate the application code for the model from an
existing database.
Model First : The developer creates the model in the EF designer and then EF creates the database from the
model.
Database First : The developer uses EF tooling to infer the model from an existing database.
All these approaches rely on the DbContext class to transparently manage database connections and database
schema for an application. Different constructors on the DbContext base class allow for different levels of
control over connection creation, database bootstrapping, and schema creation. Challenges arise primarily from
the fact that the database connection management provided by EF intersects with the connection management
capabilities of the data-dependent routing interfaces provided by the elastic database client library.

Elastic database tools assumptions


For term definitions, see Elastic Database tools glossary.
With elastic database client library, you define partitions of your application data called shardlets. Shardlets are
identified by a sharding key and are mapped to specific databases. An application may have as many databases
as needed and distribute the shardlets to provide enough capacity or performance given current business
requirements. The mapping of sharding key values to the databases is stored by a shard map provided by the
elastic database client APIs. This capability is called Shard Map Management , or SMM for short. The shard
map also serves as the broker of database connections for requests that carry a sharding key. This capability is
known as data-dependent routing .
The shard map manager protects users from inconsistent views into shardlet data that can occur when
concurrent shardlet management operations (such as relocating data from one shard to another) are happening.
To do so, the shard maps managed by the client library broker the database connections for an application. This
allows the shard map functionality to automatically kill a database connection when shard management
operations could impact the shardlet that the connection has been created for. This approach needs to integrate
with some of EF’s functionality, such as creating new connections from an existing one to check for database
existence. In general, our observation has been that the standard DbContext constructors only work reliably for
closed database connections that can safely be cloned for EF work. The design principle of elastic database
instead is to only broker opened connections. One might think that closing a connection brokered by the client
library before handing it over to the EF DbContext may solve this issue. However, by closing the connection and
relying on EF to reopen it, one foregoes the validation and consistency checks performed by the library. The
migrations functionality in EF, however, uses these connections to manage the underlying database schema in a
way that is transparent to the application. Ideally, you will retain and combine all these capabilities from both the
elastic database client library and EF in the same application. The following section discusses these properties
and requirements in more detail.

Requirements
When working with both the elastic database client library and Entity Framework APIs, you want to retain the
following properties:
Scale-out : To add or remove databases from the data tier of the sharded application as necessary for the
capacity demands of the application. This means control over the creation and deletion of databases and
using the elastic database shard map manager APIs to manage databases, and mappings of shardlets.
Consistency : The application employs sharding, and uses the data-dependent routing capabilities of the
client library. To avoid corruption or wrong query results, connections are brokered through the shard map
manager. This also retains validation and consistency.
Code First : To retain the convenience of EF’s code first paradigm. In Code First, classes in the application are
mapped transparently to the underlying database structures. The application code interacts with DbSets that
mask most aspects involved in the underlying database processing.
Schema : Entity Framework handles initial database schema creation and subsequent schema evolution
through migrations. By retaining these capabilities, adapting your app is easy as the data evolves.
The following guidance instructs how to satisfy these requirements for Code First applications using elastic
database tools.

Data-dependent routing using EF DbContext


Database connections with Entity Framework are typically managed through subclasses of DbContext . Create
these subclasses by deriving from DbContext . This is where you define your DbSets that implement the
database-backed collections of CLR objects for your application. In the context of data-dependent routing, you
can identify several helpful properties that do not necessarily hold for other EF code first application scenarios:
The database already exists and has been registered in the elastic database shard map.
The schema of the application has already been deployed to the database (explained below).
Data-dependent routing connections to the database are brokered by the shard map.
To integrate DbContexts with data-dependent routing for scale-out:
1. Create physical database connections through the elastic database client interfaces of the shard map
manager.
2. Wrap the connection with the DbContext subclass
3. Pass the connection down into the DbContext base classes to ensure all the processing on the EF side
happens as well.
The following code example illustrates this approach. (This code is also in the accompanying Visual Studio
project)

public class ElasticScaleContext<T> : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    ...

    // C'tor for data-dependent routing. This call opens a validated connection
    // routed to the proper shard by the shard map manager.
    // Note that the base class c'tor call fails for an open connection
    // if migrations need to be done and SQL credentials are used. This is the reason for the
    // separation of c'tors into the data-dependent routing case (this c'tor) and the internal c'tor for new shards.
    public ElasticScaleContext(ShardMap shardMap, T shardingKey, string connectionStr)
        : base(CreateDDRConnection(shardMap, shardingKey, connectionStr),
               true /* contextOwnsConnection */)
    {
    }

    // Only static methods are allowed in calls into base class c'tors.
    private static DbConnection CreateDDRConnection(
        ShardMap shardMap,
        T shardingKey,
        string connectionStr)
    {
        // No initialization
        Database.SetInitializer<ElasticScaleContext<T>>(null);

        // Ask shard map to broker a validated connection for the given key
        SqlConnection conn = shardMap.OpenConnectionForKey<T>
            (shardingKey, connectionStr, ConnectionOptions.Validate);
        return conn;
    }

Main points
A new constructor replaces the default constructor in the DbContext subclass
The new constructor takes the arguments that are required for data-dependent routing through elastic
database client library:
the shard map to access the data-dependent routing interfaces,
the sharding key to identify the shardlet,
a connection string with the credentials for the data-dependent routing connection to the shard.
The call to the base class constructor takes a detour into a static method that performs all the steps
necessary for data-dependent routing.
It uses the OpenConnectionForKey call of the elastic database client interfaces on the shard map to
establish an open connection.
The shard map creates the open connection to the shard that holds the shardlet for the given sharding
key.
This open connection is passed back to the base class constructor of DbContext to indicate that this
connection is to be used by EF instead of letting EF create a new connection automatically. This way
the connection has been tagged by the elastic database client API so that it can guarantee consistency
under shard map management operations.
Use the new constructor for your DbContext subclass instead of the default constructor in your code. Here is an
example:

// Create and save a new blog.
Console.Write("Enter a name for a new blog: ");
var name = Console.ReadLine();

using (var db = new ElasticScaleContext<int>(
    sharding.ShardMap,
    tenantId1,
    connStrBldr.ConnectionString))
{
    var blog = new Blog { Name = name };
    db.Blogs.Add(blog);
    db.SaveChanges();

    // Display all Blogs for tenant 1
    var query = from b in db.Blogs
                orderby b.Name
                select b;
}

The new constructor opens the connection to the shard that holds the data for the shardlet identified by the
value of tenantid1 . The code in the using block stays unchanged to access the DbSet for blogs using EF on the
shard for tenantid1 . This changes semantics for the code in the using block such that all database operations
are now scoped to the one shard where tenantid1 is kept. For instance, a LINQ query over the blogs DbSet
would only return blogs stored on the current shard, but not the ones stored on other shards.
Transient faults handling
The Microsoft Patterns & Practices team published The Transient Fault Handling Application Block. The
library is used with the elastic database client library in combination with EF. However, ensure that any transient
exception returns to a place where you can ensure that the new constructor is being used after a transient fault
so that any new connection attempt is made using the constructors you tweaked. Otherwise, a connection to the
correct shard is not guaranteed, and there are no assurances the connection is maintained as changes to the
shard map occur.
The following code sample illustrates how a SQL retry policy can be used around the new DbContext subclass
constructors:
SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    using (var db = new ElasticScaleContext<int>(
        sharding.ShardMap,
        tenantId1,
        connStrBldr.ConnectionString))
    {
        var blog = new Blog { Name = name };
        db.Blogs.Add(blog);
        db.SaveChanges();
    }
});

SqlDatabaseUtils.SqlRetryPolicy in the code above is defined as a
SqlDatabaseTransientErrorDetectionStrategy with a retry count of 10 and a 5-second wait time between
retries. This approach is similar to the guidance for EF and user-initiated transactions (see Limitations with
Retrying Execution Strategies (EF6 onwards)). Both situations require that the application program controls the
scope to which the transient exception returns: to either reopen the transaction, or (as shown) recreate the
context from the proper constructor that uses the elastic database client library.
The need to control where transient exceptions take us back in scope also precludes the use of the built-in
SqlAzureExecutionStrategy that comes with EF. SqlAzureExecutionStrategy would reopen a connection
but not use OpenConnectionForKey and therefore bypass all the validation that is performed as part of the
OpenConnectionForKey call. Instead, the code sample uses the built-in DefaultExecutionStrategy that also
comes with EF. As opposed to SqlAzureExecutionStrategy , it works correctly in combination with the retry
policy from Transient Fault Handling. The execution policy is set in the ElasticScaleDbConfiguration class.
Note that we decided not to use DefaultSqlExecutionStrategy since it suggests using
SqlAzureExecutionStrategy if transient exceptions occur - which would lead to wrong behavior as discussed.
For more information on the different retry policies and EF, see Connection Resiliency in EF.
Constructor rewrites
The code examples above illustrate the default constructor re-writes required for your application in order to
use data-dependent routing with the Entity Framework. The following table generalizes this approach to other
constructors.

Current constructor: MyContext()
Rewritten constructor for data-dependent routing: ElasticScaleContext(ShardMap, TKey)
Base class constructor: DbContext(DbConnection, bool)
Notes: The connection needs to be a function of the shard map and the data-dependent routing key. You need to by-pass automatic connection creation by EF and instead use the shard map to broker the connection.

Current constructor: MyContext(string)
Rewritten constructor for data-dependent routing: ElasticScaleContext(ShardMap, TKey)
Base class constructor: DbContext(DbConnection, bool)
Notes: The connection is a function of the shard map and the data-dependent routing key. A fixed database name or connection string does not work as they by-pass validation by the shard map.

Current constructor: MyContext(DbCompiledModel)
Rewritten constructor for data-dependent routing: ElasticScaleContext(ShardMap, TKey, DbCompiledModel)
Base class constructor: DbContext(DbConnection, DbCompiledModel, bool)
Notes: The connection gets created for the given shard map and sharding key with the model provided. The compiled model is passed on to the base c'tor.

Current constructor: MyContext(DbConnection, bool)
Rewritten constructor for data-dependent routing: ElasticScaleContext(ShardMap, TKey, bool)
Base class constructor: DbContext(DbConnection, bool)
Notes: The connection needs to be inferred from the shard map and the key. It cannot be provided as an input (unless that input was already using the shard map and the key). The Boolean is passed on.

Current constructor: MyContext(string, DbCompiledModel)
Rewritten constructor for data-dependent routing: ElasticScaleContext(ShardMap, TKey, DbCompiledModel)
Base class constructor: DbContext(DbConnection, DbCompiledModel, bool)
Notes: The connection needs to be inferred from the shard map and the key. It cannot be provided as an input (unless that input was using the shard map and the key). The compiled model is passed on.

Current constructor: MyContext(ObjectContext, bool)
Rewritten constructor for data-dependent routing: ElasticScaleContext(ShardMap, TKey, ObjectContext, bool)
Base class constructor: DbContext(ObjectContext, bool)
Notes: The new constructor needs to ensure that any connection in the ObjectContext passed as an input is re-routed to a connection managed by Elastic Scale. A detailed discussion of ObjectContexts is beyond the scope of this document.

Current constructor: MyContext(DbConnection, DbCompiledModel, bool)
Rewritten constructor for data-dependent routing: ElasticScaleContext(ShardMap, TKey, DbCompiledModel, bool)
Base class constructor: DbContext(DbConnection, DbCompiledModel, bool)
Notes: The connection needs to be inferred from the shard map and the key. The connection cannot be provided as an input (unless that input was already using the shard map and the key). Model and Boolean are passed on to the base class constructor.
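
To make the table concrete, the following minimal sketch (not part of the sample project) shows the rewrite for the MyContext(DbCompiledModel) row: the brokered connection comes from the same CreateDDRConnection helper shown earlier, and the compiled model is forwarded to the DbContext(DbConnection, DbCompiledModel, bool) base constructor.

// Illustrative sketch of one rewritten constructor from the table above.
public ElasticScaleContext(ShardMap shardMap, T shardingKey, string connectionStr, DbCompiledModel model)
    : base(CreateDDRConnection(shardMap, shardingKey, connectionStr),
           model,
           true /* contextOwnsConnection */)
{
}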

Shard schema deployment through EF migrations


Automatic schema management is a convenience provided by the Entity Framework. In the context of
applications using elastic database tools, you want to retain this capability to automatically provision the schema
to newly created shards when databases are added to the sharded application. The primary use case is to
increase capacity at the data tier for sharded applications using EF. Relying on EF’s capabilities for schema
management reduces the database administration effort with a sharded application built on EF.
Schema deployment through EF migrations works best on unopened connections . This is in contrast to the
scenario for data-dependent routing that relies on the opened connection provided by the elastic database client
API. Another difference is the consistency requirement: While desirable to ensure consistency for all data-
dependent routing connections to protect against concurrent shard map manipulation, it is not a concern with
initial schema deployment to a new database that has not yet been registered in the shard map, and not yet
been allocated to hold shardlets. You can therefore rely on regular database connections for this scenario, as
opposed to data-dependent routing.
This leads to an approach where schema deployment through EF migrations is tightly coupled with the
registration of the new database as a shard in the application’s shard map. This relies on the following
prerequisites:
The database has already been created.
The database is empty - it holds no user schema and no user data.
The database cannot yet be accessed through the elastic database client APIs for data-dependent routing.
With these prerequisites in place, you can create a regular un-opened SqlConnection to kick off EF migrations
for schema deployment. The following code sample illustrates this approach.

// Enter a new shard - i.e. an empty database - to the shard map, allocate a first tenant to it
// and kick off EF initialization of the database to deploy schema

public void RegisterNewShard(string server, string database, string connStr, int key)
{
    Shard shard = this.ShardMap.CreateShard(new ShardLocation(server, database));

    SqlConnectionStringBuilder connStrBldr = new SqlConnectionStringBuilder(connStr);
    connStrBldr.DataSource = server;
    connStrBldr.InitialCatalog = database;

    // Go into a DbContext to trigger migrations and schema deployment for the new shard.
    // This requires an un-opened connection.
    using (var db = new ElasticScaleContext<int>(connStrBldr.ConnectionString))
    {
        // Run a query to engage EF migrations
        (from b in db.Blogs
         select b).Count();
    }

    // Register the mapping of the tenant to the shard in the shard map.
    // After this step, data-dependent routing on the shard map can be used
    this.ShardMap.CreatePointMapping(key, shard);
}

This sample shows the method RegisterNewShard that registers the shard in the shard map, deploys the
schema through EF migrations, and stores a mapping of a sharding key to the shard. It relies on a constructor of
the DbContext subclass (ElasticScaleContext in the sample) that takes a SQL connection string as input. The
code of this constructor is straight-forward, as the following example shows:
// C'tor to deploy schema and migrations to a new shard
protected internal ElasticScaleContext(string connectionString)
    : base(SetInitializerForConnection(connectionString))
{
}

// Only static methods are allowed in calls into base class c'tors
private static string SetInitializerForConnection(string connectionString)
{
    // You want existence checks so that the schema can get deployed
    Database.SetInitializer<ElasticScaleContext<T>>(
        new CreateDatabaseIfNotExists<ElasticScaleContext<T>>());

    return connectionString;
}

You could have used the version of the constructor inherited from the base class, but the code needs to ensure
that the default initializer for EF is used when connecting. Hence the short detour into the static method before
calling into the base class constructor with the connection string. Note that the registration of shards should run
in a different app domain or process to ensure that the initializer settings for EF do not conflict.

Limitations
The approaches outlined in this document entail a couple of limitations:
EF applications that use LocalDb first need to migrate to a regular SQL Server database before using the elastic
database client library. Scaling out an application through sharding with Elastic Scale is not possible with
LocalDb. Note that development can still use LocalDb.
Any changes to the application that imply database schema changes need to go through EF migrations on all
shards. The sample code for this document does not demonstrate how to do this. Consider using Update-Database
with a ConnectionString parameter to iterate over all shards, or extract the T-SQL script for the pending
migration using Update-Database with the -Script option and apply the T-SQL script to your shards. A
programmatic sketch of iterating over the shards follows this list.
Given a request, it is assumed that all of its database processing is contained within a single shard as identified
by the sharding key provided by the request. However, this assumption does not always hold true, for example,
when it is not possible to make a sharding key available. To address this, the client library provides the
MultiShardQuery class that implements a connection abstraction for querying over several shards. Learning to
use MultiShardQuery in combination with EF is beyond the scope of this document.
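As a programmatic variant of iterating over all shards, the following C# sketch applies pending EF code-based migrations to every shard registered in the shard map. It is not part of the accompanying sample and assumes that migrations have been enabled for the project; BlogsMigrationsConfiguration is a hypothetical name for the generated DbMigrationsConfiguration class.

using System.Data.SqlClient;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

// Sketch only: BlogsMigrationsConfiguration stands in for the DbMigrationsConfiguration<ElasticScaleContext<int>>
// class generated by Enable-Migrations; adjust the names to match your project.
public static void MigrateAllShards(ShardMap shardMap, string credentialsConnectionString)
{
    foreach (Shard shard in shardMap.GetShards())
    {
        // Build a connection string that targets this shard directly.
        var connStrBldr = new SqlConnectionStringBuilder(credentialsConnectionString)
        {
            DataSource = shard.Location.DataSource,
            InitialCatalog = shard.Location.Database
        };

        // Point the migrations configuration at the shard and apply any pending migrations.
        var configuration = new BlogsMigrationsConfiguration
        {
            TargetDatabase = new DbConnectionInfo(connStrBldr.ConnectionString, "System.Data.SqlClient")
        };
        new DbMigrator(configuration).Update();
    }
}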

Conclusion
Through the steps outlined in this document, EF applications can use the elastic database client library's
capability for data-dependent routing by refactoring constructors of the DbContext subclasses used in the EF
application. This limits the changes required to those places where DbContext classes already exist. In addition,
EF applications can continue to benefit from automatic schema deployment by combining the steps that invoke
the necessary EF migrations with the registration of new shards and mappings in the shard map.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Using the elastic database client library with Dapper
7/12/2022 • 8 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This document is for developers that rely on Dapper to build applications, but also want to embrace elastic
database tooling to create applications that implement sharding to scale out their data tier. This document
illustrates the changes in Dapper-based applications that are necessary to integrate with elastic database tools.
Our focus is on composing the elastic database shard management and data-dependent routing with Dapper.
Sample Code: Elastic database tools for Azure SQL Database - Dapper integration.
Integrating Dapper and DapperExtensions with the elastic database client library for Azure SQL Database is
easy. Your applications can use data-dependent routing by changing the creation and opening of new
SqlConnection objects to use the OpenConnectionForKey call from the client library. This limits changes in your
application to only where new connections are created and opened.

Dapper overview
Dapper is an object-relational mapper. It maps .NET objects from your application to a relational database (and
vice versa). The first part of the sample code illustrates how you can integrate the elastic database client library
with Dapper-based applications. The second part of the sample code illustrates how to integrate when using
both Dapper and DapperExtensions.
The mapper functionality in Dapper provides extension methods on database connections that simplify
submitting T-SQL statements for execution or querying the database. For instance, Dapper makes it easy to map
between your .NET objects and the parameters of SQL statements for Execute calls, or to consume the results of
your SQL queries into .NET objects using Query calls from Dapper.
When using DapperExtensions, you no longer need to provide the SQL statements. Extension methods such as
GetList or Insert over the database connection create the SQL statements behind the scenes.
Another benefit of Dapper and also DapperExtensions is that the application controls the creation of the
database connection. This helps interact with the elastic database client library, which brokers database
connections based on the mapping of shardlets to databases.
To get the Dapper assemblies, see Dapper dot net. For the Dapper extensions, see DapperExtensions.
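The samples in the rest of this document map rows of a Blog table to and from a simple POCO. A minimal sketch of such a class follows; the class in the accompanying sample code may define additional members.

// Minimal POCO assumed by the Dapper samples below; the accompanying sample may differ.
public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}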

A quick look at the elastic database client library


With the elastic database client library, you define partitions of your application data called shardlets, map them
to databases, and identify them by sharding keys. You can have as many databases as you need and distribute
your shardlets across these databases. The mapping of sharding key values to the databases is stored by a shard
map provided by the library’s APIs. This capability is called shard map management. The shard map also
serves as the broker of database connections for requests that carry a sharding key. This capability is referred to
as data-dependent routing.
The shard map manager protects users from inconsistent views into shardlet data that can occur when
concurrent shardlet management operations are happening on the databases. To do so, the shard maps broker
the database connections for an application built with the library. This allows the shard map functionality to
automatically kill a database connection when shard management operations could impact the shardlet.
Instead of using the traditional way to create connections for Dapper, you need to use the
OpenConnectionForKey method. This ensures that all the validation takes place and connections are managed
properly when any data moves between shards.
Requirements for Dapper integration
When working with both the elastic database client library and the Dapper APIs, you want to retain the following
properties:
Scale out: We want to add or remove databases from the data tier of the sharded application as necessary
for the capacity demands of the application.
Consistency: Since the application is scaled out using sharding, you need to perform data-dependent
routing. We want to use the data-dependent routing capabilities of the library to do so. In particular, you
want to retain the validation and consistency guarantees provided by connections that are brokered through
the shard map manager in order to avoid corruption or wrong query results. This ensures that connections
to a given shardlet are rejected or stopped if (for instance) the shardlet is currently moved to a different
shard using Split/Merge APIs.
Object Mapping: We want to retain the convenience of the mappings provided by Dapper to translate
between classes in the application and the underlying database structures.
The following section provides guidance for these requirements for applications based on Dapper and
DapperExtensions.

Technical guidance
Data-dependent routing with Dapper
With Dapper, the application is typically responsible for creating and opening the connections to the underlying
database. Given a type T by the application, Dapper returns query results as .NET collections of type T. Dapper
performs the mapping from the T-SQL result rows to the objects of type T. Similarly, Dapper maps .NET objects
into SQL values or parameters for data manipulation language (DML) statements. Dapper offers this
functionality via extension methods on the regular SqlConnection object from the ADO.NET SQL Client libraries.
The SQL connections returned by the Elastic Scale APIs for DDR are also regular SqlConnection objects. This
allows us to directly use Dapper extensions over the type returned by the client library’s DDR API, as it is also a
simple SQL Client connection.
These observations make it straightforward to use connections brokered by the elastic database client library
for Dapper.
This code example (from the accompanying sample) illustrates the approach where the sharding key is provided
by the application to the library to broker the connection to the right shard.

using (SqlConnection sqlconn = shardingLayer.ShardMap.OpenConnectionForKey(
    key: tenantId1,
    connectionString: connStrBldr.ConnectionString,
    options: ConnectionOptions.Validate))
{
    var blog = new Blog { Name = name };
    sqlconn.Execute(@"
        INSERT INTO
            Blog (Name)
            VALUES (@name)", new { name = blog.Name }
    );
}

The call to the OpenConnectionForKey API replaces the default creation and opening of a SQL Client connection.
The OpenConnectionForKey call takes the arguments that are required for data-dependent routing:
The shard map to access the data-dependent routing interfaces
The sharding key to identify the shardlet
The credentials (user name and password) to connect to the shard
The shard map object creates a connection to the shard that holds the shardlet for the given sharding key. The
elastic database client APIs also tag the connection to implement its consistency guarantees. Since the call to
OpenConnectionForKey returns a regular SQL Client connection object, the subsequent call to the Execute
extension method from Dapper follows the standard Dapper practice.
Queries work very much the same way - you first open the connection using OpenConnectionForKey from the
client API. Then you use the regular Dapper extension methods to map the results of your SQL query into .NET
objects:

using (SqlConnection sqlconn = shardingLayer.ShardMap.OpenConnectionForKey(
    key: tenantId1,
    connectionString: connStrBldr.ConnectionString,
    options: ConnectionOptions.Validate))
{
    // Display all Blogs for tenant 1
    IEnumerable<Blog> result = sqlconn.Query<Blog>(@"
        SELECT *
        FROM Blog
        ORDER BY Name");

    Console.WriteLine("All blogs for tenant id {0}:", tenantId1);

    foreach (var item in result)
    {
        Console.WriteLine(item.Name);
    }
}

Note that the using block with the DDR connection scopes all database operations within the block to the one
shard where tenantId1 is kept. The query only returns blogs stored on the current shard, but not the ones stored
on any other shards.

Data-dependent routing with Dapper and DapperExtensions


Dapper comes with an ecosystem of additional extensions that can provide further convenience and abstraction
from the database when developing database applications. DapperExtensions is an example.
Using DapperExtensions in your application does not change how database connections are created and
managed. It is still the application’s responsibility to open connections, and regular SQL Client connection
objects are expected by the extension methods. We can rely on the OpenConnectionForKey as outlined above.
As the following code samples show, the only change is that you no longer have to write the T-SQL statements:

using (SqlConnection sqlconn = shardingLayer.ShardMap.OpenConnectionForKey(
    key: tenantId2,
    connectionString: connStrBldr.ConnectionString,
    options: ConnectionOptions.Validate))
{
    var blog = new Blog { Name = name2 };
    sqlconn.Insert(blog);
}

And here is the code sample for the query:

using (SqlConnection sqlconn = shardingLayer.ShardMap.OpenConnectionForKey(
    key: tenantId2,
    connectionString: connStrBldr.ConnectionString,
    options: ConnectionOptions.Validate))
{
    // Display all Blogs for tenant 2
    IEnumerable<Blog> result = sqlconn.GetList<Blog>();
    Console.WriteLine("All blogs for tenant id {0}:", tenantId2);
    foreach (var item in result)
    {
        Console.WriteLine(item.Name);
    }
}

Handling transient faults


The Microsoft Patterns & Practices team published the Transient Fault Handling Application Block to help
application developers mitigate common transient fault conditions encountered when running in the cloud. For
more information, see Perseverance, Secret of All Triumphs: Using the Transient Fault Handling Application Block.
The code sample relies on the transient fault library to protect against transient faults.

SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    using (SqlConnection sqlconn =
        shardingLayer.ShardMap.OpenConnectionForKey(tenantId2, connStrBldr.ConnectionString,
            ConnectionOptions.Validate))
    {
        var blog = new Blog { Name = name2 };
        sqlconn.Insert(blog);
    }
});

SqlDatabaseUtils.SqlRetryPolicy in the code above is defined as a SqlDatabaseTransientErrorDetectionStrategy
with a retry count of 10 and a 5-second wait time between retries. If you are using transactions, make sure that
your retry scope goes back to the beginning of the transaction in the case of a transient fault.
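For example, the following sketch (reusing SqlDatabaseUtils.SqlRetryPolicy, shardingLayer, connStrBldr, and tenantId2 from the sample above) wraps an explicit transaction inside the retry scope so that a transient fault restarts the whole unit of work rather than a single statement:

SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    using (SqlConnection sqlconn = shardingLayer.ShardMap.OpenConnectionForKey(
        tenantId2, connStrBldr.ConnectionString, ConnectionOptions.Validate))
    using (SqlTransaction tx = sqlconn.BeginTransaction())
    {
        // Both inserts share one transaction; on a transient fault the whole action is retried,
        // which re-opens the connection and begins a new transaction.
        sqlconn.Execute("INSERT INTO Blog (Name) VALUES (@name)", new { name = "First post" }, transaction: tx);
        sqlconn.Execute("INSERT INTO Blog (Name) VALUES (@name)", new { name = "Second post" }, transaction: tx);
        tx.Commit();
    }
});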

Limitations
The approaches outlined in this document entail a couple of limitations:
The sample code for this document does not demonstrate how to manage schema across shards.
Given a request, we assume that all its database processing is contained within a single shard as identified by
the sharding key provided by the request. However, this assumption does not always hold, for example, when
it is not possible to make a sharding key available. To address this, the elastic database client library includes
the MultiShardQuery class. The class implements a connection abstraction for querying over several shards.
Using MultiShardQuery in combination with Dapper is beyond the scope of this document.

Conclusion
Applications using Dapper and DapperExtensions can easily benefit from elastic database tools for Azure SQL
Database. Through the steps outlined in this document, those applications can use the tool's capability for data-
dependent routing by changing the creation and opening of new SqlConnection objects to use the
OpenConnectionForKey call of the elastic database client library. This limits the application changes required to
those places where new connections are created and opened.

Additional resources
Not using elastic database tools yet? Check out our Getting Started Guide. For questions, contact us on the
Microsoft Q&A question page for SQL Database and for feature requests, add new ideas or vote for existing
ideas in the SQL Database feedback forum.
Get started with cross-database queries (vertical
partitioning) (preview)
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Elastic database query (preview) for Azure SQL Database allows you to run T-SQL queries that span multiple
databases using a single connection point. This article applies to vertically partitioned databases.
When completed, you will have learned how to configure and use an Azure SQL Database to perform queries that
span multiple related databases.
For more information about the elastic database query feature, see Azure SQL Database elastic database query
overview.

Prerequisites
ALTER ANY EXTERNAL DATA SOURCE permission is required. This permission is included with the ALTER
DATABASE permission. ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying
data source.

Create the sample databases


To start with, create two databases, Customers and Orders, either on the same or different servers.
Execute the following queries on the Orders database to create the OrderInformation table and input the
sample data.

CREATE TABLE [dbo].[OrderInformation](
    [OrderID] [int] NOT NULL,
    [CustomerID] [int] NOT NULL
)
INSERT INTO [dbo].[OrderInformation] ([OrderID], [CustomerID]) VALUES (123, 1)
INSERT INTO [dbo].[OrderInformation] ([OrderID], [CustomerID]) VALUES (149, 2)
INSERT INTO [dbo].[OrderInformation] ([OrderID], [CustomerID]) VALUES (857, 2)
INSERT INTO [dbo].[OrderInformation] ([OrderID], [CustomerID]) VALUES (321, 1)
INSERT INTO [dbo].[OrderInformation] ([OrderID], [CustomerID]) VALUES (564, 8)

Now, execute the following query on the Customers database to create the CustomerInformation table and
input the sample data.

CREATE TABLE [dbo].[CustomerInformation](
    [CustomerID] [int] NOT NULL,
    [CustomerName] [varchar](50) NULL,
    [Company] [varchar](50) NULL,
    CONSTRAINT [CustID] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
)
INSERT INTO [dbo].[CustomerInformation] ([CustomerID], [CustomerName], [Company]) VALUES (1, 'Jack', 'ABC')
INSERT INTO [dbo].[CustomerInformation] ([CustomerID], [CustomerName], [Company]) VALUES (2, 'Steve', 'XYZ')
INSERT INTO [dbo].[CustomerInformation] ([CustomerID], [CustomerName], [Company]) VALUES (3, 'Lylla', 'MNO')

Create database objects


Database scoped master key and credentials
1. Open SQL Server Management Studio or SQL Server Data Tools in Visual Studio.
2. Connect to the Orders database and execute the following T-SQL commands:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<master_key_password>';

CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH IDENTITY = '<username>',
SECRET = '<password>';

The "master_key_password" is a strong password of your choosing used to encrypt the connection
credentials. The "username" and "password" should be the username and password used to log in to
the Customers database (create a new user in the Customers database if one does not already exist).
Authentication using Azure Active Directory with elastic queries is not currently supported.
External data sources
To create an external data source, execute the following command on the Orders database:

CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
    (TYPE = RDBMS,
    LOCATION = '<server_name>.database.windows.net',
    DATABASE_NAME = 'Customers',
    CREDENTIAL = ElasticDBQueryCred
    );

External tables
Create an external table on the Orders database, which matches the definition of the CustomerInformation table:

CREATE EXTERNAL TABLE [dbo].[CustomerInformation]
    ( [CustomerID] [int] NOT NULL,
    [CustomerName] [varchar](50) NULL,
    [Company] [varchar](50) NULL)
WITH
    ( DATA_SOURCE = MyElasticDBQueryDataSrc)

Execute a sample elastic database T-SQL query


Once you have defined your external data source and your external tables, you can now use T-SQL to query
your external tables. Execute this query on the Orders database:

SELECT OrderInformation.CustomerID, OrderInformation.OrderID, CustomerInformation.CustomerName, CustomerInformation.Company
FROM OrderInformation
INNER JOIN CustomerInformation
ON CustomerInformation.CustomerID = OrderInformation.CustomerID
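The same join can also be issued from application code. The following C# sketch is not part of this tutorial; the server name, user name, and password are placeholders. It connects to the Orders database with a regular connection string, and the external CustomerInformation table behaves like a local table:

using System;
using System.Data.SqlClient;

class ElasticQuerySample
{
    static void Main()
    {
        // Placeholder connection string: point it at the Orders database that holds the external table.
        var connectionString = "Server=tcp:<server_name>.database.windows.net;Database=Orders;" +
                               "User ID=<username>;Password=<password>;Encrypt=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var sql = @"SELECT o.OrderID, c.CustomerName, c.Company
                        FROM dbo.OrderInformation AS o
                        INNER JOIN dbo.CustomerInformation AS c
                        ON c.CustomerID = o.CustomerID";

            using (var cmd = new SqlCommand(sql, conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // CustomerInformation is the external table; elastic query resolves the join.
                    Console.WriteLine($"{reader.GetInt32(0)}: {reader.GetString(1)} ({reader.GetString(2)})");
                }
            }
        }
    }
}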

Cost
Currently, the elastic database query feature is included in the cost of your Azure SQL Database.
For pricing information, see SQL Database Pricing.

Next steps
For an overview of elastic query, see Elastic query overview.
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data.
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal
partitioning (sharding).
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data.
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
Reporting across scaled-out cloud databases
(preview)
7/12/2022 • 7 minutes to read • Edit Online

APPLIES TO: Azure SQL Database

Sharded databases distribute rows across a scaled out data tier. The schema is identical on all participating
databases, also known as horizontal partitioning. Using an elastic query, you can create reports that span all
databases in a sharded database.
For a quickstart, see Reporting across scaled-out cloud databases.
For non-sharded databases, see Query across cloud databases with different schemas.

Prerequisites
Create a shard map using the elastic database client library. See Shard map management. Or use the sample
app in Get started with elastic database tools.
Alternatively, see Migrate existing databases to scaled-out databases.
The user must possess ALTER ANY EXTERNAL DATA SOURCE permission. This permission is included with
the ALTER DATABASE permission.
ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying data source.

Overview
These statements create the metadata representation of your sharded data tier in the elastic query database.
1. CREATE MASTER KEY
2. CREATE DATABASE SCOPED CREDENTIAL
3. CREATE EXTERNAL DATA SOURCE
4. CREATE EXTERNAL TABLE

1.1 Create database scoped master key and credentials


The credential is used by the elastic query to connect to your remote databases.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password';
CREATE DATABASE SCOPED CREDENTIAL [<credential_name>] WITH IDENTITY = '<username>',
SECRET = '<password>';

NOTE
Make sure that the "<username>" does not include any "@servername" suffix.

1.2 Create external data sources


Syntax:

<External_Data_Source> ::=
CREATE EXTERNAL DATA SOURCE <data_source_name> WITH
    (TYPE = SHARD_MAP_MANAGER,
    LOCATION = '<fully_qualified_server_name>',
    DATABASE_NAME = '<shardmap_database_name>',
    CREDENTIAL = <credential_name>,
    SHARD_MAP_NAME = '<shardmapname>'
    ) [;]

Example

CREATE EXTERNAL DATA SOURCE MyExtSrc
WITH
(
    TYPE=SHARD_MAP_MANAGER,
    LOCATION='myserver.database.windows.net',
    DATABASE_NAME='ShardMapDatabase',
    CREDENTIAL= SMMUser,
    SHARD_MAP_NAME='ShardMap'
);

Retrieve the list of current external data sources:

select * from sys.external_data_sources;

The external data source references your shard map. An elastic query then uses the external data source and the
underlying shard map to enumerate the databases that participate in the data tier. The same credentials are used
to read the shard map and to access the data on the shards during the processing of an elastic query.

1.3 Create external tables


Syntax:

CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name
    ( { <column_definition> } [ ,...n ] )
    { WITH ( <sharded_external_table_options> ) }
[;]

<sharded_external_table_options> ::=
    DATA_SOURCE = <External_Data_Source>,
    [ SCHEMA_NAME = N'nonescaped_schema_name',]
    [ OBJECT_NAME = N'nonescaped_object_name',]
    DISTRIBUTION = SHARDED(<sharding_column_name>) | REPLICATED | ROUND_ROBIN
Example

CREATE EXTERNAL TABLE [dbo].[order_line](
    [ol_o_id] int NOT NULL,
    [ol_d_id] tinyint NOT NULL,
    [ol_w_id] int NOT NULL,
    [ol_number] tinyint NOT NULL,
    [ol_i_id] int NOT NULL,
    [ol_delivery_d] datetime NOT NULL,
    [ol_amount] smallmoney NOT NULL,
    [ol_supply_w_id] int NOT NULL,
    [ol_quantity] smallint NOT NULL,
    [ol_dist_info] char(24) NOT NULL
)
WITH
(
    DATA_SOURCE = MyExtSrc,
    SCHEMA_NAME = 'orders',
    OBJECT_NAME = 'order_details',
    DISTRIBUTION=SHARDED(ol_w_id)
);

Retrieve the list of external tables from the current database:

SELECT * from sys.external_tables;

To drop external tables:

DROP EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name. ] table_name[;]

Remarks
The DATA_SOURCE clause defines the external data source (a shard map) that is used for the external table.
The SCHEMA_NAME and OBJECT_NAME clauses map the external table definition to a table in a different
schema. If omitted, the schema of the remote object is assumed to be “dbo” and its name is assumed to be
identical to the external table name being defined. This is useful if the name of your remote table is already
taken in the database where you want to create the external table. For example, you want to define an external
table to get an aggregate view of catalog views or DMVs on your scaled out data tier. Since catalog views and
DMVs already exist locally, you cannot use their names for the external table definition. Instead, use a different
name and use the catalog view’s or the DMV’s name in the SCHEMA_NAME and/or OBJECT_NAME clauses. (See
the example below.)
The DISTRIBUTION clause specifies the data distribution used for this table. The query processor utilizes the
information provided in the DISTRIBUTION clause to build the most efficient query plans.
1. SHARDED means data is horizontally partitioned across the databases. The partitioning key for the data
distribution is the <sharding_column_name> parameter.
2. REPLICATED means that identical copies of the table are present on each database. It is your responsibility
to ensure that the replicas are identical across the databases.
3. ROUND_ROBIN means that the table is horizontally partitioned using an application-dependent
distribution method.
Data tier reference: The external table DDL refers to an external data source. The external data source specifies
a shard map that provides the external table with the information necessary to locate all the databases in your
data tier.
Security considerations
Users with access to the external table automatically gain access to the underlying remote tables under the
credential given in the external data source definition. Avoid undesired elevation of privileges through the
credential of the external data source. Use GRANT or REVOKE for an external table as though it were a regular
table.
Once you have defined your external data source and your external tables, you can now use full T-SQL over your
external tables.

Example: querying horizontally partitioned databases


The following query performs a three-way join between warehouses, orders, and order lines and uses several
aggregates and a selective filter. It assumes (1) horizontal partitioning (sharding) and (2) that warehouses,
orders, and order lines are sharded by the warehouse ID column, and that the elastic query can co-locate the
joins on the shards and process the expensive part of the query on the shards in parallel.

select
w_id as warehouse,
o_c_id as customer,
count(*) as cnt_orderline,
max(ol_quantity) as max_quantity,
avg(ol_amount) as avg_amount,
min(ol_delivery_d) as min_deliv_date
from warehouse
join orders
on w_id = o_w_id
join order_line
on o_id = ol_o_id and o_w_id = ol_w_id
where w_id > 100 and w_id < 200
group by w_id, o_c_id

Stored procedure for remote T-SQL execution: sp_execute_remote


Elastic query also introduces a stored procedure that provides direct access to the shards. The stored procedure
is called sp_execute_remote and can be used to execute remote stored procedures or T-SQL code on the remote
databases. It takes the following parameters:
Data source name (nvarchar): The name of the external data source of type RDBMS.
Query (nvarchar): The T-SQL query to be executed on each shard.
Parameter declaration (nvarchar) - optional: String with data type definitions for the parameters used in the
Query parameter (like sp_executesql).
Parameter value list - optional: Comma-separated list of parameter values (like sp_executesql).
The sp_execute_remote uses the external data source provided in the invocation parameters to execute the given
T-SQL statement on the remote databases. It uses the credential of the external data source to connect to the
shardmap manager database and the remote databases.
Example:

EXEC sp_execute_remote
N'MyExtSrc',
N'select count(w_id) as foo from warehouse'

Connectivity for tools


Use regular SQL Server connection strings to connect your application, your BI, and data integration tools to the
database with your external table definitions. Make sure that SQL Server is supported as a data source for your
tool. Then reference the elastic query database like any other SQL Server database connected to the tool, and
use external tables from your tool or application as if they were local tables.
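As a minimal sketch (the connection string values below are placeholders, not values from this article), application code queries the sharded external table exactly as it would query a local table:

using (var conn = new SqlConnection(
    "Server=tcp:<server_name>.database.windows.net;Database=<elastic_query_db>;" +
    "User ID=<username>;Password=<password>;Encrypt=True;"))
{
    conn.Open();
    // order_line is the external table defined earlier; elastic query fans the request out to the shards.
    using (var cmd = new SqlCommand(
        "SELECT TOP (10) ol_w_id, ol_amount FROM dbo.order_line ORDER BY ol_amount DESC", conn))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine($"warehouse {reader.GetInt32(0)}: {reader.GetDecimal(1)}");
        }
    }
}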

Best practices
Ensure that the elastic query endpoint database has been given access to the shardmap database and all
shards through the SQL Database firewalls.
Validate or enforce the data distribution defined by the external table. If your actual data distribution is
different from the distribution specified in your table definition, your queries may yield unexpected results.
Elastic query currently does not perform shard elimination when predicates over the sharding key would
allow it to safely exclude certain shards from processing.
Elastic query works best for queries where most of the computation can be done on the shards. You typically
get the best query performance with selective filter predicates that can be evaluated on the shards or joins
over the partitioning keys that can be performed in a partition-aligned way on all shards. Other query
patterns may need to load large amounts of data from the shards to the head node and may perform poorly.

Next steps
For an overview of elastic query, see Elastic query overview.
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For syntax and sample queries for vertically partitioned data, see Querying vertically partitioned data.
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal
partitioning (sharding).
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
Query across cloud databases with different
schemas (preview)
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Database

Vertically partitioned databases use different sets of tables on different databases. That means that the schema
is different on different databases. For instance, all tables for inventory are on one database while all
accounting-related tables are on a second database.

Prerequisites
The user must possess ALTER ANY EXTERNAL DATA SOURCE permission. This permission is included with
the ALTER DATABASE permission.
ALTER ANY EXTERNAL DATA SOURCE permissions are needed to refer to the underlying data source.

Overview
NOTE
Unlike with horizontal partitioning, these DDL statements do not depend on defining a data tier with a shard map
through the elastic database client library.

1. CREATE MASTER KEY
2. CREATE DATABASE SCOPED CREDENTIAL
3. CREATE EXTERNAL DATA SOURCE
4. CREATE EXTERNAL TABLE
Create database scoped master key and credentials
The credential is used by the elastic query to connect to your remote databases.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'master_key_password';

CREATE DATABASE SCOPED CREDENTIAL [<credential_name>] WITH IDENTITY = '<username>',
SECRET = '<password>';

NOTE
Ensure that the <username> does not include any "@servername" suffix.

Create external data sources


Syntax:

<External_Data_Source> ::=
CREATE EXTERNAL DATA SOURCE <data_source_name> WITH
    (TYPE = RDBMS,
    LOCATION = '<fully_qualified_server_name>',
    DATABASE_NAME = '<remote_database_name>',
    CREDENTIAL = <credential_name>
    ) [;]

IMPORTANT
The TYPE parameter must be set to RDBMS.

Example
The following example illustrates the use of the CREATE statement for external data sources.

CREATE EXTERNAL DATA SOURCE RemoteReferenceData
WITH
(
    TYPE=RDBMS,
    LOCATION='myserver.database.windows.net',
    DATABASE_NAME='ReferenceData',
    CREDENTIAL= SqlUser
);

To retrieve the list of current external data sources:

select * from sys.external_data_sources;

External Tables
Syntax:
CREATE EXTERNAL TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name
    ( { <column_definition> } [ ,...n ] )
    { WITH ( <rdbms_external_table_options> ) }
[;]

<rdbms_external_table_options> ::=
    DATA_SOURCE = <External_Data_Source>,
    [ SCHEMA_NAME = N'nonescaped_schema_name',]
    [ OBJECT_NAME = N'nonescaped_object_name',]

Example

CREATE EXTERNAL TABLE [dbo].[customer]
(
    [c_id] int NOT NULL,
    [c_firstname] nvarchar(256) NULL,
    [c_lastname] nvarchar(256) NOT NULL,
    [street] nvarchar(256) NOT NULL,
    [city] nvarchar(256) NOT NULL,
    [state] nvarchar(20) NULL
)
WITH
(
    DATA_SOURCE = RemoteReferenceData
);

The following example shows how to retrieve the list of external tables from the current database:

select * from sys.external_tables;

Remarks
Elastic query extends the existing external table syntax to define external tables that use external data sources of
type RDBMS. An external table definition for vertical partitioning covers the following aspects:
Schema : The external table DDL defines a schema that your queries can use. The schema provided in your
external table definition needs to match the schema of the tables in the remote database where the actual
data is stored.
Remote database reference : The external table DDL refers to an external data source. The external data
source specifies the server name and database name of the remote database where the actual table data is
stored.
Using an external data source as outlined in the previous section, the syntax to create external tables is as
follows:
The DATA_SOURCE clause defines the external data source (i.e. the remote database in vertical partitioning) that
is used for the external table.
The SCHEMA_NAME and OBJECT_NAME clauses allow mapping the external table definition to a table in a
different schema on the remote database, or to a table with a different name, respectively. This mapping is
useful if you want to define an external table to a catalog view or DMV on your remote database - or any other
situation where the remote table name is already taken locally.
The following DDL statement drops an existing external table definition from the local catalog. It does not impact
the remote database.

DROP EXTERNAL TABLE [ [ schema_name ] . | schema_name. ] table_name[;]

Permissions for CREATE/DROP EXTERNAL TABLE: ALTER ANY EXTERNAL DATA SOURCE permissions are
needed for external table DDL; this permission is also needed to refer to the underlying data source.
Security considerations
Users with access to the external table automatically gain access to the underlying remote tables under the
credential given in the external data source definition. Carefully manage access to the external table, in order to
avoid undesired elevation of privileges through the credential of the external data source. Regular SQL
permissions can be used to GRANT or REVOKE access to an external table just as though it were a regular table.

Example: querying vertically partitioned databases


The following query performs a three-way join between the two local tables for orders and order lines and the
remote table for customers. This is an example of the reference data use case for elastic query:

SELECT
c_id as customer,
c_lastname as customer_name,
count(*) as cnt_orderline,
max(ol_quantity) as max_quantity,
avg(ol_amount) as avg_amount,
min(ol_delivery_d) as min_deliv_date
FROM customer
JOIN orders
ON c_id = o_c_id
JOIN order_line
ON o_id = ol_o_id and o_c_id = ol_c_id
WHERE c_id = 100

Stored procedure for remote T-SQL execution: sp_execute_remote


Elastic query also introduces a stored procedure that provides direct access to the remote database. The stored
procedure is called sp_execute_remote and can be used to execute remote stored procedures or T-SQL code on
the remote database. It takes the following parameters:
Data source name (nvarchar): The name of the external data source of type RDBMS.
Query (nvarchar): The T-SQL query to be executed on the remote database.
Parameter declaration (nvarchar) - optional: String with data type definitions for the parameters used in the
Query parameter (like sp_executesql).
Parameter value list - optional: Comma-separated list of parameter values (like sp_executesql).
The sp_execute_remote uses the external data source provided in the invocation parameters to execute the given
T-SQL statement on the remote database. It uses the credential of the external data source to connect to the
remote database.
Example:

EXEC sp_execute_remote
N'MyExtSrc',
N'select count(w_id) as foo from warehouse'

Connectivity for tools


You can use regular SQL Server connection strings to connect your BI and data integration tools to databases on
the server that has elastic query enabled and external tables defined. Make sure that SQL Server is supported as
a data source for your tool. Then refer to the elastic query database and its external tables just like any other
SQL Server database that you would connect to with your tool.
Best practices
Ensure that the elastic query endpoint database has been given access to the remote database by enabling
access for Azure Services in its Azure SQL Database firewall configuration. Also ensure that the credential
provided in the external data source definition can successfully log into the remote database and has the
permissions to access the remote table.
Elastic query works best for queries where most of the computation can be done on the remote databases.
You typically get the best query performance with selective filter predicates that can be evaluated on the
remote databases or joins that can be performed completely on the remote database. Other query patterns
may need to load large amounts of data from the remote database and may perform poorly.

Next steps
For an overview of elastic query, see Elastic query overview.
For limitations of elastic query, see Preview limitations
For a vertical partitioning tutorial, see Getting started with cross-database query (vertical partitioning).
For a horizontal partitioning (sharding) tutorial, see Getting started with elastic query for horizontal
partitioning (sharding).
For syntax and sample queries for horizontally partitioned data, see Querying horizontally partitioned data.
See sp_execute_remote for a stored procedure that executes a Transact-SQL statement on a single remote
Azure SQL Database or set of databases serving as shards in a horizontal partitioning scheme.
Get the required values for authenticating an
application to access Azure SQL Database from
code
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


To create and manage Azure SQL Database from code you must register your app in the Azure Active Directory
(Azure AD) domain in the subscription where your Azure resources have been created.

Create a service principal to access resources from an application


The following examples create the Active Directory (AD) application and the service principal that we need to
authenticate our C# app. The script outputs values we need for the preceding C# sample. For detailed
information, see Use Azure PowerShell to create a service principal to access resources.
PowerShell
Azure CLI

IMPORTANT
The PowerShell Azure Resource Manager (RM) module is still supported by SQL Database, but all future development is
for the Az.Sql module. The AzureRM module will continue to receive bug fixes until at least December 2020. The
arguments for the commands in the Az module and in the AzureRm modules are substantially identical. For more about
their compatibility, see Introducing the new Azure PowerShell Az module.
# sign in to Azure
Connect-AzAccount

# for multiple subscriptions, uncomment and set to the subscription you want to work with
#$subscriptionId = "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"
#Set-AzContext -SubscriptionId $subscriptionId

$appName = "{app-name}" # display name for your app, must be unique in your directory
$uri = "http://{app-name}" # does not need to be a real uri
$secret = "{app-password}"

# create an AAD app
$azureAdApplication = New-AzADApplication -DisplayName $appName -HomePage $uri -IdentifierUris $uri -Password $secret

# create a Service Principal for the app
$svcprincipal = New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId

Start-Sleep -s 15 # to avoid a PrincipalNotFound error, pause here for 15 seconds

# if you still get a PrincipalNotFound error, then rerun the following until successful.
$roleassignment = New-AzRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName $azureAdApplication.ApplicationId.Guid

# output the values we need for our C# application to successfully authenticate
Write-Output "Copy these values into the C# sample app"

Write-Output "_subscriptionId:" (Get-AzContext).Subscription.SubscriptionId
Write-Output "_tenantId:" (Get-AzContext).Tenant.TenantId
Write-Output "_applicationId:" $azureAdApplication.ApplicationId.Guid
Write-Output "_applicationSecret:" $secret

See also
Create a database in Azure SQL Database with C#
Connect to Azure SQL Database by using Azure Active Directory Authentication
Designing globally available services using Azure
SQL Database
7/12/2022 • 11 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


When building and deploying cloud services with Azure SQL Database, you use active geo-replication or auto-
failover groups to provide resilience to regional outages and catastrophic failures. The same feature allows you
to create globally distributed applications optimized for local access to the data. This article discusses common
application patterns, including the benefits and trade-offs of each option.

NOTE
If you are using Premium or Business Critical databases and elastic pools, you can make them resilient to regional outages
by converting them to zone redundant deployment configuration. See Zone-redundant databases.

Scenario 1: Using two Azure regions for business continuity with minimal downtime
In this scenario, the applications have the following characteristics:
Application is active in one Azure region
All database sessions require read and write access (RW) to data
Web tier and data tier must be collocated to reduce latency and traffic cost
Fundamentally, downtime is a higher business risk for these applications than data loss
In this case, the application deployment topology is optimized for handling regional disasters when all
application components need to fail over together. The diagram below shows this topology. For geographic
redundancy, the application’s resources are deployed to Region A and B. However, the resources in Region B are
not utilized until Region A fails. A failover group is configured between the two regions to manage database
connectivity, replication and failover. The web service in both regions is configured to access the database via the
read-write listener <failover-group-name>.database.windows.net (1). Azure Traffic Manager is set up to
use priority routing method (2).

NOTE
Azure Traffic Manager is used throughout this article for illustration purposes only. You can use any load-balancing
solution that supports priority routing method.

The following diagram shows this configuration before an outage:


After an outage in the primary region, SQL Database detects that the primary database is not accessible and
triggers failover to the secondary region based on the parameters of the automatic failover policy (1).
Depending on your application SLA, you can configure a grace period that controls the time between the
detection of the outage and the failover itself. It is possible that Azure Traffic Manager initiates the endpoint
failover before the failover group triggers the failover of the database. In that case the web application cannot
immediately reconnect to the database. But the reconnections will automatically succeed as soon as the
database failover completes. When the failed region is restored and back online, the old primary automatically
reconnects as a new secondary. The diagram below illustrates the configuration after failover.

NOTE
All transactions committed after the failover are lost during the reconnection. After the failover is completed, the
application in region B is able to reconnect and restart processing the user requests. Both the web application and the
primary database are now in region B and remain co-located.
If an outage happens in region B, the replication process between the primary and the secondary database gets
suspended but the link between the two remains intact (1). Traffic Manager detects that connectivity to Region B
is broken and marks the endpoint web app 2 as Degraded (2). The application's performance is not impacted in
this case, but the database becomes exposed and therefore at higher risk of data loss in case region A fails in
succession.

NOTE
For disaster recovery, we recommend the configuration with application deployment limited to two regions. This is
because most of the Azure geographies have only two regions. This configuration does not protect your application from
a simultaneous catastrophic failure of both regions. In an unlikely event of such a failure, you can recover your databases
in a third region using geo-restore operation.

Once the outage is mitigated, the secondary database automatically resynchronizes with the primary. During
synchronization, performance of the primary can be impacted. The specific impact depends on the amount of
data the new primary acquired since the failover.

NOTE
After the outage is mitigated, Traffic Manager will start routing the connections to the application in Region A as a higher
priority end-point. If you intend to keep the primary in Region B for a while, you should change the priority table in the
Traffic Manager profile accordingly.

The following diagram illustrates an outage in the secondary region:


The key advantages of this design pattern are:
The same web application is deployed to both regions without any region-specific configuration and doesn’t
require additional logic to manage failover.
Application performance is not impacted by failover as the web application and the database are always co-
located.
The main tradeoff is that the application resources in Region B are underutilized most of the time.

Scenario 2: Azure regions for business continuity with maximum data preservation
This option is best suited for applications with the following characteristics:
Any data loss is high business risk. The database failover can only be used as a last resort if the outage is
caused by a catastrophic failure.
The application supports read-only and read-write modes of operations and can operate in "read-only mode"
for a period of time.
In this pattern, the application switches to read-only mode when the read-write connections start getting time-
out errors. The web application is deployed to both regions and includes a connection to the read-write listener
endpoint and a different connection to the read-only listener endpoint (1). The Traffic Manager profile should use
priority routing. End point monitoring should be enabled for the application endpoint in each region (2).
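As an illustration only (the failover group name, database name, and credentials below are placeholders, and the exact options depend on your client driver), the two connection strings might look as follows. The read-only listener uses the failover group's secondary endpoint, typically together with ApplicationIntent=ReadOnly:

// Sketch with placeholder values; not taken from this article.
var readWriteConnStr =
    "Server=tcp:<failover-group-name>.database.windows.net,1433;" +
    "Database=<database>;User ID=<username>;Password=<password>;Encrypt=True;";

var readOnlyConnStr =
    "Server=tcp:<failover-group-name>.secondary.database.windows.net,1433;" +
    "Database=<database>;User ID=<username>;Password=<password>;" +
    "ApplicationIntent=ReadOnly;Encrypt=True;";

// The application uses readWriteConnStr in normal operation; when read-write connections start
// timing out, it switches to read-only mode and serves requests over readOnlyConnStr until the
// failover group completes the failover.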
The following diagram illustrates this configuration before an outage:
When Traffic Manager detects a connectivity failure to region A, it automatically switches user traffic to the
application instance in region B. With this pattern, it is important that you set the grace period with data loss to a
sufficiently high value, for example 24 hours. It ensures that data loss is prevented if the outage is mitigated
within that time. When the web application in region B is activated the read-write operations start failing. At that
point, it should switch to the read-only mode (1). In this mode the requests are automatically routed to the
secondary database. If the outage is caused by a catastrophic failure, most likely it cannot be mitigated within
the grace period. When it expires the failover group triggers the failover. After that the read-write listener
becomes available and the connections to it stop failing (2). The following diagram illustrates the two stages of
the recovery process.

NOTE
If the outage in the primary region is mitigated within the grace period, Traffic Manager detects the restoration of
connectivity in the primary region and switches user traffic back to the application instance in region A. That application
instance resumes and operates in read-write mode using the primary database in region A as illustrated by the previous
diagram.
If an outage happens in region B, Traffic Manager detects the failure of the end point web-app-2 in region B and
marks it degraded (1). In the meantime, the failover group switches the read-only listener to region A (2). This
outage does not impact the end-user experience but the primary database is exposed during the outage. The
following diagram illustrates a failure in the secondary region:
Once the outage is mitigated, the secondary database is immediately synchronized with the primary and the
read-only listener is switched back to the secondary database in region B. During synchronization performance
of the primary could be slightly impacted depending on the amount of data that needs to be synchronized.
This design pattern has several advantages:
It avoids data loss during the temporary outages.
Downtime depends only on how quickly Traffic Manager detects the connectivity failure, which is
configurable.
The tradeoff is that the application must be able to operate in read-only mode.

Scenario 3: Relocate an application to different geographies to follow demand
In this scenario, the application has the following characteristics:
The end users access the application from different geographies.
The application includes read-only workloads that do not depend on full synchronization with the latest
updates.
Write access to data should be supported in the same geography for majority of the users.
Read latency is critical for the end-user experience.
In order to meet these requirements, you need to guarantee that the user device always connects to the
application deployed in the same geography for read-only operations, such as browsing data, analytics, etc. In
contrast, online transactional processing (OLTP) operations are processed in the same geography most of the
time. For example, during daytime, OLTP operations are processed in the same geography, but they could be
processed in a different geography during off hours. If the end-user activity mostly happens during typical
working hours, you can guarantee optimal performance for most users most of the time. The following diagram
shows this topology.
The application’s resources should be deployed in each geography where you have substantial usage demand.
For example, if your application is actively used in the United States, East Asia and Europe, the application should
be deployed to all of these geographies (e.g., US West, Japan and UK). The primary database should be
dynamically switched from one geography to the next at the end of typical working hours. This method is called
“follow the sun”. The OLTP workload always connects to the database via the read-write listener <failover-
group-name>.database.windows.net (1). The read-only workload connects to the local database directly
using the database server endpoint <server-name>.database.windows.net (2). Traffic Manager is
configured with the performance routing method. It ensures that the end-user’s device is connected to the web
service in the closest region. Traffic Manager should be set up with end point monitoring enabled for each web
service end point (3).

NOTE
The failover group configuration defines which region is used for failover. Because the new primary is in a different
geography, the failover results in longer latency for both OLTP and read-only workloads until the impacted region is back
online.
At the end of the work day in US West, for example at 4 PM local time, the active databases should be switched
to the next region, East Asia (Japan), where it is 8 AM. Then, at 4 PM in East Asia, the primary should switch to
Europe (UK) where it is 8 AM. This task can be fully automated by using Azure Logic Apps. The task involves the
following steps:
Switch primary server in the failover group to East Asia using friendly failover (1).
Remove the failover group between US West and East Asia.
Create a new failover group with the same name but between East Asia and Europe (2).
Add the primary in East Asia and secondary in Europe to this failover group (3).
The following diagram illustrates the new configuration after the planned failover:
If an outage happens in East Asia, for example, the automatic database failover is initiated by the failover group,
which effectively results in moving the application to the next region ahead of schedule (1). In that case, US West
is the only remaining secondary region until East Asia is back online. The remaining two regions serve the
customers in all three geographies by switching roles. Azure Logic Apps has to be adjusted accordingly. Because
the remaining regions get additional user traffic from East Asia, the application's performance is impacted not
only by additional latency but also by an increased number of end-user connections. Once the outage is
mitigated, the secondary database there is immediately synchronized with the current primary. The following
diagram illustrates an outage in East Asia:
NOTE
You can reduce the time during which the end user’s experience in East Asia is degraded by the long latency. To do
that, you should proactively deploy an application copy and create the secondary database(s) in a nearby region
(for example, the Azure Korea Central data center) as a replacement of the offline application instance in Japan.
When the latter is back online, you can decide whether to continue using Korea Central or to remove the copy of
the application there and switch back to using Japan.

The key benefits of this design are:


The read-only application workload accesses data in the closest region at all times.
The read-write application workload accesses data in the closest region during the period of the highest
activity in each geography.
Because the application is deployed to multiple regions, it can survive a loss of one of the regions without
any significant downtime.
But there are some tradeoffs:
A regional outage results in the geography to be impacted by longer latency. Both read-write and read-only
workloads are served by the application in a different geography.
The read-only workloads must connect to a different end point in each region.

Business continuity planning: Choose an application design for cloud disaster recovery
Your specific cloud disaster recovery strategy can combine or extend these design patterns to best meet the
needs of your application. As mentioned earlier, the strategy you choose is based on the SLA you want to offer
to your customers and the application deployment topology. To help guide your decision, the following table
compares the choices based on recovery point objective (RPO) and estimated recovery time (ERT).
PATTERN                                                      RPO                            ERT
Active-passive deployment for disaster recovery              Read-write access < 5 sec      Failure detection time + DNS TTL
with co-located database access
Active-active deployment for application load balancing      Read-write access < 5 sec      Failure detection time + DNS TTL
Active-passive deployment for data preservation              Read-only access < 5 sec       Read-only access = 0
                                                             Read-write access = zero       Read-write access = Failure detection
                                                                                            time + grace period with data loss

Next steps
For a business continuity overview and scenarios, see Business continuity overview
To learn about active geo-replication, see Active geo-replication.
To learn about auto-failover groups, see Auto-failover groups.
For information about active geo-replication with elastic pools, see Elastic pool disaster recovery strategies.
Disaster recovery strategies for applications using
Azure SQL Database elastic pools
7/12/2022 • 13 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Azure SQL Database provides several capabilities to provide for the business continuity of your application
when catastrophic incidents occur. Elastic pools and single databases support the same kind of disaster recovery
(DR) capabilities. This article describes several DR strategies for elastic pools that leverage these Azure SQL
Database business continuity features.
This article uses the following canonical SaaS ISV application pattern:
A modern cloud-based web application provisions one database for each end user. The ISV has many customers
and therefore uses many databases, known as tenant databases. Because the tenant databases typically have
unpredictable activity patterns, the ISV uses an elastic pool to make the database cost very predictable over
extended periods of time. The elastic pool also simplifies the performance management when the user activity
spikes. In addition to the tenant databases, the application also uses several databases to manage user profiles,
security, usage pattern collection, and so on. Availability of the individual tenants does not impact the application’s
availability as a whole. However, the availability and performance of the management databases is critical for the
application’s function, and if the management databases are offline the entire application is offline.
This article discusses DR strategies covering a range of scenarios from cost sensitive startup applications to
ones with stringent availability requirements.

NOTE
If you are using Premium or Business Critical databases and elastic pools, you can make them resilient to regional outages
by converting them to zone redundant deployment configuration. See Zone-redundant databases.

Scenario 1. Cost sensitive startup


I am a startup business and am extremely cost sensitive. I want to simplify deployment and management of the
application and I can have a limited SLA for individual customers. But I want to ensure the application as a whole
is never offline.
To satisfy the simplicity requirement, deploy all tenant databases into one elastic pool in the Azure region of
your choice and deploy management databases as geo-replicated single databases. For the disaster recovery of
tenants, use geo-restore, which comes at no additional cost. To ensure the availability of the management
databases, geo-replicate them to another region using an auto-failover group (step 1). The ongoing cost of the
disaster recovery configuration in this scenario is equal to the total cost of the secondary databases. This
configuration is illustrated on the next diagram.
If an outage occurs in the primary region, the recovery steps to bring your application online are illustrated by
the next diagram.
The failover group initiates automatic failover of the management database to the DR region. The application
is automatically reconnected to the new primary and all new accounts and tenant databases are created in
the DR region. The existing customers see their data temporarily unavailable.
Create the elastic pool with the same configuration as the original pool (2).
Use geo-restore to create copies of the tenant databases (3). You can consider triggering the individual
restores by the end-user connections or use some other application-specific priority scheme.
At this point your application is back online in the DR region, but some customers experience delay when
accessing their data.
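These recovery steps can be scripted. The following is a minimal Az PowerShell sketch, not the exact procedure from this article: the resource group, server, pool, and failover group names are placeholders, and the tenant list would normally come from your own catalog.

# Placeholder names; replace with your own resource groups, servers, pool, and failover group.
$drRg = "rg-dr"
$drServer = "contoso-dr-server"
$priRg = "rg-primary"
$priServer = "contoso-primary-server"

# Step 1: the failover group fails the management databases over automatically after its grace
# period; Switch-AzSqlDatabaseFailoverGroup (run against the DR server) forces it sooner.
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $drRg -ServerName $drServer `
    -FailoverGroupName "contoso-mgmt-fg" -AllowDataLoss

# Step 2: re-create the elastic pool in the DR region with the original pool's configuration.
New-AzSqlElasticPool -ResourceGroupName $drRg -ServerName $drServer `
    -ElasticPoolName "tenant-pool" -Edition "Standard" -Dtu 200 -DatabaseDtuMax 50

# Step 3: geo-restore each tenant database from its geo-redundant backup into the new pool.
$tenantDbs = @("tenant001", "tenant002")   # normally driven by your tenant catalog
foreach ($db in $tenantDbs) {
    $geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName $priRg `
        -ServerName $priServer -DatabaseName $db
    Restore-AzSqlDatabase -FromGeoBackup -ResourceGroupName $drRg -ServerName $drServer `
        -TargetDatabaseName $db -ResourceId $geoBackup.ResourceId -ElasticPoolName "tenant-pool"
}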

If the outage was temporary, it is possible that the primary region is recovered by Azure before all the database
restores are complete in the DR region. In this case, orchestrate moving the application back to the primary
region. The process takes the steps illustrated on the next diagram.
Cancel all outstanding geo-restore requests.
Fail over the management databases to the primary region (5). After the region’s recovery, the old primaries
have automatically become secondaries. Now they switch roles again.
Change the application's connection string to point back to the primary region. Now all new accounts and
tenant databases are created in the primary region. Some existing customers see their data temporarily
unavailable.
Set all databases in the DR pool to read-only to ensure they cannot be modified in the DR region (6).
For each database in the DR pool that has changed since the recovery, rename or delete the corresponding
databases in the primary pool (7).
Copy the updated databases from the DR pool to the primary pool (8).
Delete the DR pool (9).
At this point your application is online in the primary region with all tenant databases available in the primary
pool.
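The copy in step 8 can be done with a cross-server database copy and then a move into the primary pool. A minimal Az PowerShell sketch, assuming placeholder names and that your catalog tracks which tenant databases changed in the DR region:

# Copy each updated tenant database from the DR server back to the primary server,
# then place the copy in the primary elastic pool. Names are placeholders.
$changedDbs = @("tenant042", "tenant107")   # from your tenant catalog or change tracking
foreach ($db in $changedDbs) {
    New-AzSqlDatabaseCopy -ResourceGroupName "rg-dr" -ServerName "contoso-dr-server" `
        -DatabaseName $db -CopyResourceGroupName "rg-primary" `
        -CopyServerName "contoso-primary-server" -CopyDatabaseName $db
    Set-AzSqlDatabase -ResourceGroupName "rg-primary" -ServerName "contoso-primary-server" `
        -DatabaseName $db -ElasticPoolName "tenant-pool"
}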

Benefit
The key benefit of this strategy is low ongoing cost for data tier redundancy. Azure SQL Database automatically
backs up databases with no application rewrite at no additional cost. The cost is incurred only when the elastic
databases are restored.
Trade-off
The trade-off is that the complete recovery of all tenant databases takes significant time. The length of time
depends on the total number of restores you initiate in the DR region and overall size of the tenant databases.
Even if you prioritize some tenants' restores over others, you are competing with all the other restores that are
initiated in the same region as the service arbitrates and throttles to minimize the overall impact on the existing
customers' databases. In addition, the recovery of the tenant databases cannot start until the new elastic pool in
the DR region is created.

Scenario 2. Mature application with tiered service


I am a mature SaaS application with tiered service offers and different SLAs for trial customers and for paying
customers. For the trial customers, I have to reduce the cost as much as possible. Trial customers can take
downtime but I want to reduce its likelihood. For the paying customers, any downtime is a flight risk. So I want
to make sure that paying customers are always able to access their data.
To support this scenario, separate the trial tenants from paid tenants by putting them into separate elastic pools.
The trial customers have lower eDTU or vCores per tenant and lower SLA with a longer recovery time. The
paying customers are in a pool with higher eDTU or vCores per tenant and a higher SLA. To guarantee the
lowest recovery time, the paying customers' tenant databases are geo-replicated. This configuration is
illustrated on the next diagram.

As in the first scenario, the management databases are quite active, so you use geo-replicated single databases
for them (1). This ensures predictable performance for new customer subscriptions, profile updates, and other
management operations. The region in which the primaries of the management databases reside is the primary
region and the region in which the secondaries of the management databases reside is the DR region.
The paying customers’ tenant databases have active databases in the “paid” pool provisioned in the primary
region. Provision a secondary pool with the same name in the DR region. Each tenant is geo-replicated to the
secondary pool (2). This enables quick recovery of all tenant databases using failover.
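Provisioning those secondaries can be scripted. A minimal sketch with placeholder names; it assumes a recent Az.Sql version in which New-AzSqlDatabaseSecondary supports the -SecondaryElasticPoolName parameter:

# Geo-replicate each paid-tenant database into the identically named pool in the DR region.
$paidTenantDbs = @("paid001", "paid002")   # from your tenant catalog
foreach ($db in $paidTenantDbs) {
    New-AzSqlDatabaseSecondary -ResourceGroupName "rg-primary" -ServerName "contoso-primary-server" `
        -DatabaseName $db -PartnerResourceGroupName "rg-dr" -PartnerServerName "contoso-dr-server" `
        -SecondaryElasticPoolName "paid-pool" -AllowConnections "All"
}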
If an outage occurs in the primary region, the recovery steps to bring your application online are illustrated in
the next diagram:
Immediately fail over the management databases to the DR region (3).
Change the application’s connection string to point to the DR region. Now all new accounts and tenant
databases are created in the DR region. The existing trial customers see their data temporarily unavailable.
Fail over the paid tenant's databases to the pool in the DR region to immediately restore their availability (4).
Since the failover is a quick metadata level change, consider an optimization where the individual failovers
are triggered on demand by the end-user connections.
If your secondary pool eDTU size or vCore value was lower than the primary because the secondary
databases only required the capacity to process the change logs while they were secondaries, immediately
increase the pool capacity now to accommodate the full workload of all tenants (5).
Create the new elastic pool with the same name and the same configuration in the DR region for the trial
customers' databases (6).
Once the trial customers’ pool is created, use geo-restore to restore the individual trial tenant databases into
the new pool (7). Consider triggering the individual restores by the end-user connections or use some other
application-specific priority scheme.
At this point your application is back online in the DR region. All paying customers have access to their data
while the trial customers experience delay when accessing their data.
When the primary region is recovered by Azure after you have restored the application in the DR region, you can
continue running the application in that region, or you can decide to fail back to the primary region. If the
primary region is recovered before the failover process is completed, consider failing back right away. The
failback takes the steps illustrated in the next diagram:

Cancel all outstanding geo-restore requests.


Fail over the management databases (8). After the region’s recovery, the old primary has automatically become
the secondary. Now it becomes the primary again.
Fail over the paid tenant databases (9). Similarly, after the region’s recovery, the old primaries automatically
become the secondaries. Now they become the primaries again.
Set the restored trial databases that have changed in the DR region to read-only (10).
For each database in the trial customers DR pool that changed since the recovery, rename or delete the
corresponding database in the trial customers primary pool (11).
Copy the updated databases from the DR pool to the primary pool (12).
Delete the DR pool (13).
NOTE
The failover operation is asynchronous. To minimize the recovery time it is important that you execute the tenant
databases' failover command in batches of at least 20 databases.
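One way to batch the failovers is to start each one as a background job and wait per batch. A minimal sketch, assuming placeholder names, that the paid tenants' secondaries live on a server in the DR region, and that your Az.Sql version supports -AsJob on Set-AzSqlDatabaseSecondary:

# Fail over geo-replicated paid-tenant databases in batches of 20.
# Run against the secondary (DR) server; add -AllowDataLoss for a forced failover during an outage.
$paidTenantDbs = @("paid001", "paid002", "paid003")   # from your tenant catalog
$batchSize = 20
for ($i = 0; $i -lt $paidTenantDbs.Count; $i += $batchSize) {
    $last = [Math]::Min($i + $batchSize, $paidTenantDbs.Count) - 1
    $jobs = foreach ($db in $paidTenantDbs[$i..$last]) {
        Set-AzSqlDatabaseSecondary -ResourceGroupName "rg-dr" -ServerName "contoso-dr-server" `
            -DatabaseName $db -PartnerResourceGroupName "rg-primary" -Failover -AsJob
    }
    $jobs | Wait-Job | Receive-Job
}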

Benefit
The key benefit of this strategy is that it provides the highest SLA for the paying customers. It also guarantees
that the new trials are unblocked as soon as the trial DR pool is created.
Trade-off
The trade-off is that this setup increases the total cost of the tenant databases by the cost of the secondary DR
pool for paid customers. In addition, if the secondary pool has a different size, the paying customers experience
lower performance after failover until the pool upgrade in the DR region is completed.

Scenario 3. Geographically distributed application with tiered service


I have a mature SaaS application with tiered service offers. I want to offer a very aggressive SLA to my paid
customers and minimize the risk of impact when outages occur because even brief interruption can cause
customer dissatisfaction. It is critical that the paying customers can always access their data. The trials are free
and an SLA is not offered during the trial period.
To support this scenario, use three separate elastic pools. Provision two equal size pools with high eDTUs or
vCores per database in two different regions to contain the paid customers' tenant databases. The third pool
containing the trial tenants can have lower eDTUs or vCores per database and be provisioned in one of the two
regions.
To guarantee the lowest recovery time during outages, the paying customers' tenant databases are geo-
replicated with 50% of the primary databases in each of the two regions. Similarly, each region has 50% of the
secondary databases. This way, if a region is offline, only 50% of the paid customers' databases are impacted
and have to fail over. The other databases remain intact. This configuration is illustrated in the following
diagram:

As in the previous scenarios, the management databases are quite active so configure them as single geo-
replicated databases (1). This ensures the predictable performance of the new customer subscriptions, profile
updates and other management operations. Region A is the primary region for the management databases and
region B is used for recovery of the management databases.
The paying customers’ tenant databases are also geo-replicated but with primaries and secondaries split
between region A and region B (2). This way, the tenant primary databases impacted by the outage can fail over
to the other region and become available. The other half of the tenant databases are not impacted at all.
The next diagram illustrates the recovery steps to take if an outage occurs in region A.

Immediately fail over the management databases to region B (3).


Change the application’s connection string to point to the management databases in region B. Modify the
management databases to make sure the new accounts and tenant databases are created in region B and the
existing tenant databases are found there as well. The existing trial customers see their data temporarily
unavailable.
Fail over the paid tenant's databases to pool 2 in region B to immediately restore their availability (4). Since
the failover is a quick metadata level change, you may consider an optimization where the individual
failovers are triggered on demand by the end-user connections.
Because pool 2 now contains only primary databases, the total workload in the pool increases, so
immediately increase its eDTU size (5) or number of vCores.
Create the new elastic pool with the same name and the same configuration in region B for the trial
customers' databases (6).
Once the pool is created use geo-restore to restore the individual trial tenant database into the pool (7). You
can consider triggering the individual restores by the end-user connections or use some other application-
specific priority scheme.

NOTE
The failover operation is asynchronous. To minimize the recovery time, it is important that you execute the tenant
databases' failover command in batches of at least 20 databases.
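The pool resizes in steps 5 and 10 can also be scripted. A minimal sketch for a DTU-based pool with placeholder names (a vCore pool would use -VCore instead of -Dtu):

# Step 5: pool 2 now hosts all paid-tenant primaries, so increase its capacity.
Set-AzSqlElasticPool -ResourceGroupName "rg-b" -ServerName "contoso-region-b-server" `
    -ElasticPoolName "paid-pool-2" -Dtu 800

# Step 10 (during failback): return the pool to its original size.
Set-AzSqlElasticPool -ResourceGroupName "rg-b" -ServerName "contoso-region-b-server" `
    -ElasticPoolName "paid-pool-2" -Dtu 400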

At this point your application is back online in region B. All paying customers have access to their data while the
trial customers experience delay when accessing their data.
When region A is recovered, you need to decide whether to keep using region B for trial customers or to fail back to
using the trial customers' pool in region A. One criterion could be the percentage of trial tenant databases modified since
the recovery. Regardless of that decision, you need to re-balance the paid tenants between the two pools. The next
diagram illustrates the process when the trial tenant databases fail back to region A.

Cancel all outstanding geo-restore requests to trial DR pool.


Fail over the management database (8). After the region’s recovery, the old primary automatically became
the secondary. Now it becomes the primary again.
Select which paid tenant databases fail back to pool 1 and initiate failover to their secondaries (9). After the
region’s recovery, all databases in pool 1 automatically became secondaries. Now 50% of them become
primaries again.
Reduce the size of pool 2 to the original eDTU (10) or number of vCores.
Set all restored trial databases in the region B to read-only (11).
For each database in the trial DR pool that has changed since the recovery, rename or delete the
corresponding database in the trial primary pool (12).
Copy the updated databases from the DR pool to the primary pool (13).
Delete the DR pool (14).
Benefit
The key benefits of this strategy are:
It supports the most aggressive SLA for the paying customers because it ensures that an outage cannot
impact more than 50% of the tenant databases.
It guarantees that the new trials are unblocked as soon as the trial DR pool is created during the recovery.
It allows more efficient use of the pool capacity as 50% of secondary databases in pool 1 and pool 2 are
guaranteed to be less active than the primary databases.
Trade-offs
The main trade-offs are:
The CRUD operations against the management databases have lower latency for the end users connected to
region A than for the end users connected to region B as they are executed against the primary of the
management databases.
It requires more complex design of the management database. For example, each tenant record has a
location tag that needs to be changed during failover and failback.
The paying customers may experience lower performance than usual until the pool upgrade in region B is
completed.
Summary
This article focuses on the disaster recovery strategies for the database tier used by a SaaS ISV multi-tenant
application. The strategy you choose is based on the needs of the application, such as the business model, the
SLA you want to offer to your customers, and budget constraints. Each described strategy outlines the benefits
and trade-offs so that you can make an informed decision. Also, your specific application likely includes other Azure
components, so review their business continuity guidance and orchestrate the recovery of the database tier
with them. To learn more about managing recovery of database applications in Azure, refer to Designing cloud
solutions for disaster recovery.

Next steps
To learn about Azure SQL Database automated backups, see Azure SQL Database automated backups.
For a business continuity overview and scenarios, see Business continuity overview.
To learn about using automated backups for recovery, see restore a database from the service-initiated
backups.
To learn about faster recovery options, see Active geo-replication and Auto-failover groups.
To learn about using automated backups for archiving, see database copy.
Manage rolling upgrades of cloud applications by using SQL Database active geo-replication
7/12/2022 • 8 minutes to read

APPLIES TO: Azure SQL Database


Learn how to use active geo-replication in Azure SQL Database to enable rolling upgrades of your cloud
application. Because upgrades are disruptive operations, they should be part of your business-continuity
planning and design. In this article, we look at two different methods of orchestrating the upgrade process and
discuss the benefits and tradeoffs of each option. For the purposes of this article, we refer to an application that
consists of a website that's connected to a single database as its data tier. Our goal is to upgrade version 1 (V1)
of the application to version 2 (V2) without any significant impact on the user experience.
When evaluating upgrade options, consider these factors:
Impact on application availability during upgrades, such as how long application functions might be limited
or degraded.
Ability to roll back if the upgrade fails.
Vulnerability of the application if an unrelated, catastrophic failure occurs during the upgrade.
Total dollar cost. This factor includes additional database redundancy and incremental costs of the temporary
components used by the upgrade process.

Upgrade applications that rely on database backups for disaster recovery
If your application relies on automatic database backups and uses geo-restore for disaster recovery, it's
deployed to a single Azure region. To minimize user disruption, create a staging environment in that region with
all the application components involved in the upgrade. The first diagram illustrates the operational
environment before the upgrade process. The endpoint contoso.azurewebsites.net represents a production
environment of the web app. To be able to roll back the upgrade, you must create a staging environment with a
fully synchronized copy of the database. Follow these steps to create a staging environment for the upgrade:
1. Create a secondary database in the same Azure region. Monitor the secondary to see if the seeding process
is complete (1).
2. Create a new environment for your web app and call it 'Staging'. It will be registered in Azure DNS with the
URL contoso-staging.azurewebsites.net (2).

NOTE
These preparation steps won't impact the production environment, which can function in full-access mode.
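The database part of this preparation (step 1) can be scripted. A minimal Az PowerShell sketch with placeholder names; note that a geo-replication secondary must live on a different logical server than its primary, even in the same region:

# Create a readable secondary of the production database on a second server in the same region.
New-AzSqlDatabaseSecondary -ResourceGroupName "rg-prod" -ServerName "contoso-server" `
    -DatabaseName "Prod_DB" -PartnerResourceGroupName "rg-prod" `
    -PartnerServerName "contoso-staging-server" -AllowConnections "All"

# Monitor the replication link until seeding completes.
Get-AzSqlDatabaseReplicationLink -ResourceGroupName "rg-prod" -ServerName "contoso-server" `
    -DatabaseName "Prod_DB" -PartnerResourceGroupName "rg-prod" |
    Select-Object PartnerServerName, ReplicationState, PercentComplete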
When the preparation steps are complete, the application is ready for the actual upgrade. The next diagram
illustrates the steps involved in the upgrade process:
1. Set the primary database to read-only mode (3). This mode guarantees that the production environment of
the web app (V1) remains read-only during the upgrade, thus preventing data divergence between the V1
and V2 database instances.
2. Disconnect the secondary database by using the planned termination mode (4). This action creates a fully
synchronized, independent copy of the primary database. This database will be upgraded.
3. Turn the secondary database to read-write mode and run the upgrade script (5).
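These three steps can be scripted as well. A minimal sketch that mixes T-SQL (via the SqlServer module's Invoke-Sqlcmd) for the read-only/read-write switches with Az PowerShell for terminating replication; the server, database, and credential values are placeholders:

# Step 3: set the production primary to read-only.
Invoke-Sqlcmd -ServerInstance "contoso-server.database.windows.net" -Database "master" `
    -Username $adminUser -Password $adminPassword `
    -Query "ALTER DATABASE [Prod_DB] SET READ_ONLY"

# Step 4: remove the replication link; because the primary is read-only, the copy is fully synchronized.
Remove-AzSqlDatabaseSecondary -ResourceGroupName "rg-prod" -ServerName "contoso-server" `
    -DatabaseName "Prod_DB" -PartnerResourceGroupName "rg-prod" `
    -PartnerServerName "contoso-staging-server"

# Step 5: make the disconnected copy read-write, then run your upgrade script against it.
Invoke-Sqlcmd -ServerInstance "contoso-staging-server.database.windows.net" -Database "master" `
    -Username $adminUser -Password $adminPassword `
    -Query "ALTER DATABASE [Prod_DB] SET READ_WRITE"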
If the upgrade finishes successfully, you're now ready to switch users to the upgraded copy of the application,
which becomes a production environment. Switching involves a few more steps, as illustrated in the next
diagram:
1. Activate a swap operation between production and staging environments of the web app (6). This operation
switches the URLs of the two environments. Now contoso.azurewebsites.net points to the V2 version of the
web site and the database (production environment).
2. If you no longer need the V1 version, which became a staging copy after the swap, you can decommission
the staging environment (7).
If the upgrade process is unsuccessful (for example, due to an error in the upgrade script), consider the staging
environment to be compromised. To roll back the application to the pre-upgrade state, revert the application in
the production environment to full access. The next diagram shows the reversion steps:
1. Set the database copy to read-write mode (8). This action restores the full V1 functionality of the production
copy.
2. Perform the root-cause analysis and decommission the staging environment (9).
At this point, the application is fully functional, and you can repeat the upgrade steps.

NOTE
The rollback doesn't require DNS changes because you did not yet perform a swap operation.
The key advantage of this option is that you can upgrade an application in a single region by following a set of
simple steps. The dollar cost of the upgrade is relatively low.
The main tradeoff is that, if a catastrophic failure occurs during the upgrade, the recovery to the pre-upgrade
state involves redeploying the application in a different region and restoring the database from backup by using
geo-restore. This process results in significant downtime.

Upgrade applications that rely on database geo-replication for disaster recovery
If your application uses active geo-replication or auto-failover groups for business continuity, it's deployed to at
least two different regions. There's an active, primary database in a primary region and a read-only, secondary
database in a backup region. Along with the factors mentioned at the beginning of this article, the upgrade
process must also guarantee that:
The application remains protected from catastrophic failures at all times during the upgrade process.
The geo-redundant components of the application are upgraded in parallel with the active components.
To achieve these goals, in addition to using the Web Apps environments, you'll take advantage of Azure Traffic
Manager by using a failover profile with one active endpoint and one backup endpoint. The next diagram
illustrates the operational environment prior to the upgrade process. The web sites contoso-1.azurewebsites.net
and contoso-dr.azurewebsites.net represent a production environment of the application with full geographic
redundancy. The production environment includes the following components:
The production environment of the web app contoso-1.azurewebsites.net in the primary region (1)
The primary database in the primary region (2)
A standby instance of the web app in the backup region (3)
The geo-replicated secondary database in the backup region (4)
A Traffic Manager failover profile with an online endpoint called contoso-1.azurewebsites.net and an
offline endpoint called contoso-dr.azurewebsites.net (5)
To make it possible to roll back the upgrade, you must create a staging environment with a fully synchronized
copy of the application. Because you need to ensure that the application can quickly recover in case a
catastrophic failure occurs during the upgrade process, the staging environment must be geo-redundant also.
The following steps are required to create a staging environment for the upgrade:
1. Deploy a staging environment of the web app in the primary region (6).
2. Create a secondary database in the primary Azure region (7). Configure the staging environment of the web
app to connect to it.
3. Create another geo-redundant, secondary database in the backup region by replicating the secondary
database in the primary region. (This method is called chained geo-replication.) (8).
4. Deploy a staging environment of the web app instance in the backup region (9) and configure it to connect to
the geo-redundant secondary database created at (8).

NOTE
These preparation steps won't impact the application in the production environment. It will remain fully functional in read-
write mode.
When the preparation steps are complete, the staging environment is ready for the upgrade. The next diagram
illustrates these upgrade steps:
1. Set the primary database in the production environment to read-only mode (10). This mode guarantees that
the production database (V1) won't change during the upgrade, thus preventing the data divergence
between the V1 and V2 database instances.

-- Set the production database to read-only mode
ALTER DATABASE [<Prod_DB>]
SET READ_ONLY

2. Terminate geo-replication by disconnecting the secondary (11). This action creates an independent but fully
synchronized copy of the production database. This database will be upgraded. The following example uses
Transact-SQL but PowerShell is also available.
-- Disconnect the secondary, terminating geo-replication
ALTER DATABASE [<Prod_DB>]
REMOVE SECONDARY ON SERVER [<Partner-Server>]

3. Run the upgrade script against contoso-1-staging.azurewebsites.net, contoso-dr-staging.azurewebsites.net,
and the staging primary database (12). The database changes will be replicated automatically to the staging
secondary.

If the upgrade finishes successfully, you're now ready to switch users to the V2 version of the application. The
next diagram illustrates the steps involved:
1. Activate a swap operation between production and staging environments of the web app in the primary
region (13) and in the backup region (14). V2 of the application now becomes a production environment,
with a redundant copy in the backup region.
2. If you no longer need the V1 application (15 and 16), you can decommission the staging environment.
If the upgrade process is unsuccessful (for example, due to an error in the upgrade script), consider the staging
environment to be in an inconsistent state. To roll back the application to the pre-upgrade state, revert to using
V1 of the application in the production environment. The required steps are shown on the next diagram:
1. Set the primary database copy in the production environment to read-write mode (17). This action restores
full V1 functionality in the production environment.
2. Perform the root-cause analysis and repair or remove the staging environment (18 and 19).
At this point, the application is fully functional, and you can repeat the upgrade steps.

NOTE
The rollback doesn't require DNS changes because you didn't perform a swap operation.
The key advantage of this option is that you can upgrade both the application and its geo-redundant copy in
parallel without compromising your business continuity during the upgrade.
The main tradeoff is that it requires double redundancy of each application component and therefore incurs
higher dollar cost. It also involves a more complicated workflow.

Summary
The two upgrade methods described in the article differ in complexity and dollar cost, but they both focus on
minimizing how long the user is limited to read-only operations. That time is directly defined by the duration of
the upgrade script. It doesn't depend on the database size, the service tier you chose, the website configuration,
or other factors that you can't easily control. All preparation steps are decoupled from the upgrade steps and
don't impact the production application. The efficiency of the upgrade script is a key factor that determines the
user experience during upgrades. So, the best way to improve that experience is to focus your efforts on making
the upgrade script as efficient as possible.
Next steps
For a business continuity overview and scenarios, see Business continuity overview.
To learn about Azure SQL Database active geo-replication, see Create readable secondary databases using
active geo-replication.
To learn about Azure SQL Database auto-failover groups, see Use auto-failover groups to enable transparent
and coordinated failover of multiple databases.
To learn about staging environments in Azure App Service, see Set up staging environments in Azure App
Service.
To learn about Azure Traffic Manager profiles, see Manage an Azure Traffic Manager profile.
Connect to SQL Database using C and C++
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Database


This post is aimed at C and C++ developers trying to connect to Azure SQL Database. It is broken down into
sections so you can jump to the section that best captures your interest.

Prerequisites for the C/C++ tutorial


Make sure you have the following items:
An active Azure account. If you don't have one, you can sign up for a Free Azure Trial.
Visual Studio. You must install the C++ language components to build and run this sample.
Visual Studio Linux Development. If you are developing on Linux, you must also install the Visual Studio
Linux extension.

Azure SQL Database and SQL Server on virtual machines


Azure SQL Database is built on Microsoft SQL Server and is designed to provide a highly available, performant,
and scalable service. There are many benefits to using Azure SQL Database over running your own database on-premises.
With Azure SQL Database you don't have to install, set up, maintain, or manage your database engine, only the
content and the structure of your database. Typical concerns with databases, such as fault
tolerance and redundancy, are built in.
Azure currently has two options for hosting SQL Server workloads: Azure SQL Database, a database as a service,
and SQL Server on Virtual Machines (VMs). We will not get into detail about the differences between these two,
except that Azure SQL Database is your best bet for new cloud-based applications to take advantage of the cost
savings and performance optimization that cloud services provide. If you are considering migrating or
extending your on-premises applications to the cloud, SQL Server on an Azure virtual machine might work out
better for you. To keep things simple for this article, let's create an Azure SQL Database.

Data access technologies: ODBC and OLE DB


Connecting to Azure SQL Database is no different; currently there are two ways to connect to databases:
ODBC (Open Database Connectivity) and OLE DB (Object Linking and Embedding Database). In recent years,
Microsoft has aligned with ODBC for native relational data access. ODBC is relatively simple, and also much
faster than OLE DB. The only caveat here is that ODBC uses an old C-style API.

Step 1: Creating your Azure SQL Database


See the getting started page to learn how to create a sample database. Alternatively, you can follow this short
two-minute video to create an Azure SQL Database using the Azure portal.

Step 2: Get connection string


After your Azure SQL Database has been provisioned, you need to carry out the following steps to determine
connection information and add your client IP for firewall access.
In the Azure portal, find your database's ODBC connection string by using Show database
connection strings in the overview section for your database:
Copy the contents of the ODBC (Includes Node.js) [SQL authentication] string. We use this string later to
connect from our C++ ODBC command-line interpreter. This string provides details such as the driver, server,
and other database connection parameters.

Step 3: Add your IP to the firewall


Go to the firewall section for your server and add your client IP to the firewall using these steps to make sure we
can establish a successful connection:

At this point, you have configured your Azure SQL Database and are ready to connect from your C++ code.

Step 4: Connecting from a Windows C/C++ application


You can easily connect to your Azure SQL Database using ODBC on Windows using this sample that builds with
Visual Studio. The sample implements an ODBC command-line interpreter that can be used to connect to our
Azure SQL Database. This sample takes either a data source name (DSN) file as a command-line
argument or the verbose connection string that we copied earlier from the Azure portal. Bring up the property
page for this project and paste the connection string as a command argument as shown here:
Make sure you provide the right authentication details for your database as a part of that database connection
string.
Build and launch the application. You should see the following window validating a successful connection. You
can even run some basic SQL commands like create table to validate your database connectivity:

Alternatively, you could create a DSN file using the wizard that is launched when no command arguments are
provided. We recommend that you try this option as well. You can use this DSN file for automation and
protecting your authentication settings:

Congratulations! You have now successfully connected to Azure SQL using C++ and ODBC on Windows. You
can continue reading to do the same for Linux platform as well.

Step 5: Connecting from a Linux C/C++ application


In case you haven't heard the news yet, Visual Studio now allows you to develop C++ Linux applications as well.
You can read about this new scenario in the Visual C++ for Linux Development blog. To build for Linux, you need
a remote machine where your Linux distro is running. If you don't have one available, you can set one up quickly
using Linux Azure Virtual machines.
For this tutorial, let us assume that you have an Ubuntu 16.04 Linux distribution set up. The steps here should
also apply to Ubuntu 15.10, Red Hat 6, and Red Hat 7.
The following steps install the libraries needed for SQL and ODBC for your distro:

sudo su
sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/mssql-ubuntu-test/ xenial main" >
/etc/apt/sources.list.d/mssqlpreview.list'
sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
apt-get update
apt-get install msodbcsql
apt-get install unixodbc-dev-utf16 #this step is optional but recommended

Launch Visual Studio. Under Tools -> Options -> Cross Platform -> Connection Manager, add a connection to
your Linux box:

After connection over SSH is established, create an Empty project (Linux) template:

You can then add a new C source file and replace it with this content. Using the ODBC APIs SQLAllocHandle,
SQLSetConnectAttr, and SQLDriverConnect, you should be able to initialize and establish a connection to your
database. Like with the Windows ODBC sample, you need to replace the SQLDriverConnect call with the details
from your database connection string parameters copied from the Azure portal previously.

retcode = SQLDriverConnect(
    hdbc, NULL,
    /* Adjacent string literals concatenate; keep the space in the driver name. */
    (SQLCHAR*)"Driver={ODBC Driver 13 for SQL Server};"
              "Server=<yourserver>;Uid=<yourusername>;"
              "Pwd=<yourpassword>;Database=<yourdatabase>",
    SQL_NTS, outstr, sizeof(outstr), &outstrlen, SQL_DRIVER_NOPROMPT);

The last thing to do before compiling is to add odbc as a library dependency:

To launch your application, bring up the Linux Console from the Debug menu:

If your connection was successful, you should now see the current database name printed in the Linux Console:

Congratulations! You have successfully completed the tutorial and can now connect to your Azure SQL Database
from C++ on Windows and Linux platforms.

Get the complete C/C++ tutorial solution


You can find the GetStarted solution that contains all the samples in this article at GitHub:
ODBC C++ Windows sample, Download the Windows C++ ODBC Sample to connect to Azure SQL
ODBC C++ Linux sample, Download the Linux C++ ODBC Sample to connect to Azure SQL

Next steps
Review the SQL Database Development Overview
More information on the ODBC API Reference
Additional resources
Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database
Explore all the capabilities of SQL Database
Connect Excel to a database in Azure SQL Database or Azure SQL Managed Instance, and create a report
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


You can connect Excel to a database and then import data and create tables and charts based on values in the
database. In this tutorial you will set up the connection between Excel and a database table, save the file that
stores data and the connection information for Excel, and then create a pivot chart from the database values.
You'll need to create a database before you get started. If you don't have one, see Create a database in Azure
SQL Database and Create a server-level IP firewall rule to get a database with sample data up and running in a few
minutes.
In this article, you'll import sample data into Excel from that article, but you can follow similar steps with your
own data.
You'll also need a copy of Excel. This article uses Microsoft Excel 2016.

Connect Excel and load data


1. To connect Excel to a database in SQL Database, open Excel and then create a new workbook or open an
existing Excel workbook.
2. In the menu bar at the top of the page, select the Data tab, select Get Data, select From Azure, and then
select From Azure SQL Database.
3. In the SQL Server database dialog box, type the Server name you want to connect to in the form
<servername>.database.windows.net. For example, msftestserver.database.windows.net.
Optionally, enter in the name of your database. Select OK to open the credentials window.

4. In the SQL Server database dialog box, select Database on the left side, and then enter in your User
Name and Password for the server you want to connect to. Select Connect to open the Navigator.
TIP
Depending on your network environment, you may not be able to connect or you may lose the connection if the
server doesn't allow traffic from your client IP address. Go to the Azure portal, click SQL servers, click your server,
click firewall under settings and add your client IP address. See How to configure firewall settings for details.

5. In the Navigator , select the database you want to work with from the list, select the tables or views you
want to work with (we chose vGetAllCategories ), and then select Load to move the data from your
database to your Excel spreadsheet.

Import the data into Excel and create a pivot chart


Now that you've established the connection, you have several different options with how to load the data. For
example, the following steps create a pivot chart based on the data found in your database in SQL Database.
1. Follow the steps in the previous section, but this time, instead of selecting Load , select Load to from the
Load drop-down.
2. Next, select how you want to view this data in your workbook. We chose PivotChart. You can also
choose to create a New worksheet or to Add this data to a Data Model . For more information on
Data Models, see Create a data model in Excel.

The worksheet now has an empty pivot table and chart.


3. Under PivotTable Fields , select all the check-boxes for the fields you want to view.
TIP
If you want to connect other Excel workbooks and worksheets to the database, select the Data tab, and select Recent
Sources to launch the Recent Sources dialog box. From there, choose the connection you created from the list, and
then click Open.

Create a permanent connection using .odc file


To save the connection details permanently, you can create an .odc file and make this connection a selectable
option within the Existing Connections dialog box.
1. In the menu bar at the top of the page, select the Data tab, and then select Existing Connections to
launch the Existing Connections dialog box.
a. Select Browse for more to open the Select Data Source dialog box.
b. Select the +NewSqlServerConnection.odc file and then select Open to open the Data
Connection Wizard .

2. In the Data Connection Wizard , type in your server name and your SQL Database credentials. Select
Next .
a. Select the database that contains your data from the drop-down.
b. Select the table or view you're interested in. We chose vGetAllCategories.
c. Select Next .

3. Select the location of your file, the File Name , and the Friendly Name in the next screen of the Data
Connection Wizard. You can also choose to save the password in the file, though this can potentially
expose your data to unwanted access. Select Finish when ready.

4. Select how you want to import your data. We chose to do a PivotTable. You can also modify the
properties of the connection by selecting Properties. Select OK when ready. If you did not choose to save
the password with the file, then you will be prompted to enter your credentials.

5. Verify that your new connection has been saved by expanding the Data tab, and selecting Existing
Connections .

Next steps
Learn how to Connect and query with SQL Server Management Studio for advanced querying and analysis.
Learn about the benefits of elastic pools.
Learn how to create a web application that connects to Azure SQL Database on the back-end.
Ports beyond 1433 for ADO.NET 4.5
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database


This topic describes the Azure SQL Database connection behavior for clients that use ADO.NET 4.5 or a later
version.

IMPORTANT
For information about connectivity architecture, see Azure SQL Database connectivity architecture.

Outside vs inside
For connections to Azure SQL Database, we must first ask whether your client program runs outside or inside
the Azure cloud boundary. The subsections discuss two common scenarios.
Outside: Client runs on your desktop computer
Port 1433 is the only port that must be open on your desktop computer that hosts your SQL Database client
application.
Inside: Client runs on Azure
When your client runs inside the Azure cloud boundary, it uses what we can call a direct route to interact with
SQL Database. After a connection is established, further interactions between the client and database involve no
Azure SQL Database Gateway.
The sequence is as follows:
1. ADO.NET 4.5 (or later) initiates a brief interaction with the Azure cloud, and receives a dynamically
identified port number.
The dynamically identified port number is in the range of 11000-11999.
2. ADO.NET then connects to SQL Database directly, with no middleware in between.
3. Queries are sent directly to the database, and results are returned directly to the client.
Ensure that the port range 11000-11999 on your Azure client machine is left available for ADO.NET 4.5
client interactions with SQL Database.
In particular, ports in the range must be free of any other outbound blockers.
On your Azure VM, the Windows Firewall with Advanced Security controls the port settings.
You can use the firewall's user interface to add a rule for which you specify the TCP protocol along
with a port range with the syntax like 11000-11999 .
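If you manage the VM firewall with PowerShell instead of the user interface, a rule such as the following sketch allows the range (the rule name is arbitrary):

# Allow outbound TCP 11000-11999 from the Azure VM for ADO.NET redirect connections.
New-NetFirewallRule -DisplayName "Azure SQL Database redirect ports" `
    -Direction Outbound -Protocol TCP -RemotePort "11000-11999" -Action Allow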

Version clarifications
This section clarifies the monikers that refer to product versions. It also lists some pairings of versions between
products.
ADO.NET
ADO.NET 4.0 supports the TDS 7.3 protocol, but not 7.4.
ADO.NET 4.5 and later supports the TDS 7.4 protocol.
ODBC
Microsoft SQL Server ODBC 11 or above
JDBC
Microsoft SQL Server JDBC 4.2 or above (JDBC 4.0 actually supports TDS 7.4 but does not implement
“redirection”)

Related links
ADO.NET 4.6 was released on July 20, 2015. A blog announcement from the .NET team is available here.
ADO.NET 4.5 was released on August 15, 2012. A blog announcement from the .NET team is available
here.
A blog post about ADO.NET 4.5.1 is available here.
Microsoft ODBC Driver 17 for SQL Server https://aka.ms/downloadmsodbcsql
Connect to Azure SQL Database V12 via Redirection
https://techcommunity.microsoft.com/t5/DataCAT/Connect-to-Azure-SQL-Database-V12-via-Redirection/ba-p/305362
TDS protocol version list
SQL Database Development Overview
Azure SQL Database firewall
Multi-tenant SaaS database tenancy patterns
7/12/2022 • 12 minutes to read

APPLIES TO: Azure SQL Database


This article describes the various tenancy models available for a multi-tenant SaaS application.
When designing a multi-tenant SaaS application, you must carefully choose the tenancy model that best fits the
needs of your application. A tenancy model determines how each tenant's data is mapped to storage. Your
choice of tenancy model impacts application design and management. Switching to a different model later is
sometimes costly.

A. SaaS concepts and terminology


In the Software as a Service (SaaS) model, your company does not sell licenses to your software. Instead, each
customer makes rent payments to your company, making each customer a tenant of your company.
In return for paying rent, each tenant receives access to your SaaS application components, and has its data
stored in the SaaS system.
The term tenancy model refers to how tenants' stored data is organized:
Single-tenancy: Each database stores data from only one tenant.
Multi-tenancy: Each database stores data from multiple separate tenants (with mechanisms to protect data
privacy).
Hybrid tenancy models are also available.

B. How to choose the appropriate tenancy model


In general, the tenancy model does not impact the function of an application, but it likely impacts other aspects
of the overall solution. The following criteria are used to assess each of the models:
Scalability:
Number of tenants.
Storage per-tenant.
Storage in aggregate.
Workload.
Tenant isolation: Data isolation and performance (whether one tenant's workload impacts others).
Per-tenant cost: Database costs.
Development complexity:
Changes to schema.
Changes to queries (required by the pattern).
Operational complexity:
Monitoring and managing performance.
Schema management.
Restoring a tenant.
Disaster recovery.
Customizability: Ease of supporting schema customizations that are either tenant-specific or tenant
class-specific.
The tenancy discussion is focused on the data layer. But consider for a moment the application layer. The
application layer is treated as a monolithic entity. If you divide the application into many small components, your
choice of tenancy model might change. You could treat some components differently than others regarding both
tenancy and the storage technology or platform used.

C. Standalone single-tenant app with single-tenant database


Application level isolation
In this model, the whole application is installed repeatedly, once for each tenant. Each instance of the app is a
standalone instance, so it never interacts with any other standalone instance. Each instance of the app has only
one tenant, and therefore needs only one database. The tenant has the database all to itself.

Each app instance is installed in a separate Azure resource group. The resource group can belong to a
subscription that is owned by either the software vendor or the tenant. In either case, the vendor can manage
the software for the tenant. Each application instance is configured to connect to its corresponding database.
Each tenant database is deployed as a single database. This model provides the greatest database isolation. But
the isolation requires that sufficient resources be allocated to each database to handle its peak loads. Note that
elastic pools cannot be used for databases deployed in different resource groups or in different
subscriptions. This limitation makes the standalone single-tenant app model the most expensive solution from
an overall database cost perspective.
Vendor management
The vendor can access all the databases in all the standalone app instances, even if the app instances are
installed in different tenant subscriptions. The access is achieved via SQL connections. This cross-instance access
can enable the vendor to centralize schema management and cross-database query for reporting or analytics
purposes. If this kind of centralized management is desired, a catalog must be deployed that maps tenant
identifiers to database URIs. Azure SQL Database provides a sharding library that is used together to provide a
catalog. The sharding library is formally named the Elastic Database Client Library.

D. Multi-tenant app with database-per-tenant


This next pattern uses a multi-tenant application with many databases, all being single-tenant databases. A new
database is provisioned for each new tenant. The application tier is scaled up vertically by adding more
resources per node. Or the app is scaled out horizontally by adding more nodes. The scaling is based on
workload, and is independent of the number or scale of the individual databases.

Customize for a tenant


Like the standalone app pattern, the use of single-tenant databases gives strong tenant isolation. In any app
whose model specifies only single-tenant databases, the schema for any one given database can be customized
and optimized for its tenant. This customization does not affect other tenants in the app. Perhaps a tenant might
need data beyond the basic data fields that all tenants need. Further, the extra data field might need an index.
With database-per-tenant, customizing the schema for one or more individual tenants is straightforward to
achieve. The application vendor must design procedures to carefully manage schema customizations at scale.
Elastic pools
When databases are deployed in the same resource group, they can be grouped into elastic pools. The pools
provide a cost-effective way of sharing resources across many databases. This pool option is cheaper than
requiring each database to be large enough to accommodate the usage peaks that it experiences. Even though
pooled databases share access to resources they can still achieve a high degree of performance isolation.
Azure SQL Database provides the tools necessary to configure, monitor, and manage the sharing. Both pool-
level and database-level performance metrics are available in the Azure portal, and through Azure Monitor logs.
The metrics can give great insights into both aggregate and tenant-specific performance. Individual databases
can be moved between pools to provide reserved resources to a specific tenant. These tools enable you to
ensure good performance in a cost effective manner.
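A minimal sketch of creating a pool and moving an existing tenant database into it, using placeholder names and the DTU purchasing model (a vCore pool would use -VCore and -ComputeGeneration instead):

# Create an elastic pool for tenant databases.
New-AzSqlElasticPool -ResourceGroupName "rg-tenants" -ServerName "contoso-tenants-server" `
    -ElasticPoolName "tenant-pool" -Edition "Standard" -Dtu 200 -DatabaseDtuMin 0 -DatabaseDtuMax 50

# Move an existing single-tenant database into the pool.
Set-AzSqlDatabase -ResourceGroupName "rg-tenants" -ServerName "contoso-tenants-server" `
    -DatabaseName "tenant001" -ElasticPoolName "tenant-pool"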
Operations scale for database-per-tenant
Azure SQL Database has many management features designed to manage large numbers of databases at scale,
such as well over 100,000 databases. These features make the database-per-tenant pattern plausible.
For example, suppose a system has a 1000-tenant database as its only database. The database might have
20 indexes. If the system converts to having 1000 single-tenant databases, the quantity of indexes rises to
20,000. In Azure SQL Database as part of Automatic tuning, the automatic indexing features are enabled by
default. Automatic indexing manages for you all 20,000 indexes and their ongoing create and drop
optimizations. These automated actions occur within an individual database, and they are not coordinated or
restricted by similar actions in other databases. Automatic indexing treats indexes differently in a busy database
than in a less busy database. This type of index management customization would be impractical at the
database-per-tenant scale if this huge management task had to be done manually.
Other management features that scale well include the following:
Built-in backups.
High availability.
On-disk encryption.
Performance telemetry.
Automation
The management operations can be scripted and offered through a devops model. The operations can even be
automated and exposed in the application.
For example, you could automate the recovery of a single tenant to an earlier point in time. The recovery only
needs to restore the one single-tenant database that stores the tenant. This restore has no impact on other
tenants, which confirms that management operations are at the finely granular level of each individual tenant.
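A minimal Az PowerShell sketch of such a per-tenant, point-in-time restore, with placeholder names; the restore creates a new database that can replace the original (or be renamed over it) once verified:

# Restore one tenant database to an earlier point in time as a new database in the same pool.
$db = Get-AzSqlDatabase -ResourceGroupName "rg-tenants" -ServerName "contoso-tenants-server" `
    -DatabaseName "tenant001"
Restore-AzSqlDatabase -FromPointInTimeBackup -PointInTime (Get-Date).AddHours(-2) `
    -ResourceGroupName "rg-tenants" -ServerName "contoso-tenants-server" `
    -TargetDatabaseName "tenant001-restored" -ResourceId $db.ResourceId `
    -ElasticPoolName "tenant-pool"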
E. Multi-tenant app with multi-tenant databases
Another available pattern is to store many tenants in a multi-tenant database. The application instance can have
any number of multi-tenant databases. The schema of a multi-tenant database must have one or more tenant
identifier columns so that the data from any given tenant can be selectively retrieved. Further, the schema might
require a few tables or columns that are used by only a subset of tenants. However, static code and reference
data is stored only once and is shared by all tenants.
Tenant isolation is sacrificed
Data: A multi-tenant database necessarily sacrifices tenant isolation. The data of multiple tenants is stored
together in one database. During development, ensure that queries never expose data from more than one
tenant. SQL Database supports row-level security, which can enforce that data returned from a query be scoped
to a single tenant.
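A minimal sketch of such a row-level security filter, applied here through the SqlServer module's Invoke-Sqlcmd; the Venues table, TenantId column, and the use of SESSION_CONTEXT are illustrative assumptions rather than part of a specific sample schema:

# T-SQL that filters every query on dbo.Venues to the tenant set in SESSION_CONTEXT.
$rlsScript = @"
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_tenantAccessPredicate(@TenantId int)
RETURNS TABLE
WITH SCHEMABINDING
AS RETURN
    SELECT 1 AS fn_accessResult
    WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
GO
CREATE SECURITY POLICY Security.tenantAccessPolicy
    ADD FILTER PREDICATE Security.fn_tenantAccessPredicate(TenantId) ON dbo.Venues;
"@
Invoke-Sqlcmd -ServerInstance "contoso-tenants-server.database.windows.net" `
    -Database "tenants-mt-db" -Username $adminUser -Password $adminPassword -Query $rlsScript

The application then identifies the tenant on each connection, for example by calling EXEC sp_set_session_context N'TenantId', @TenantId; after opening the connection.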
Processing: A multi-tenant database shares compute and storage resources across all its tenants. The database
as a whole can be monitored to ensure it is performing acceptably. However, the Azure system has no built-in
way to monitor or manage the use of these resources by an individual tenant. Therefore, the multi-tenant
database carries an increased risk of encountering noisy neighbors, where the workload of one overactive
tenant impacts the performance experience of other tenants in the same database. Additional application-level
monitoring could monitor tenant-level performance.
Lower cost
In general, multi-tenant databases have the lowest per-tenant cost. Resource costs for a single database are
lower than for an equivalently sized elastic pool. In addition, for scenarios where tenants need only limited
storage, potentially millions of tenants could be stored in a single database. No elastic pool can contain millions
of databases. However, a solution containing 1000 databases per pool, with 1000 pools, could reach the scale of
millions at the risk of becoming unwieldy to manage.
Two variations of a multi-tenant database model are discussed in what follows, with the sharded multi-tenant
model being the most flexible and scalable.

F. Multi-tenant app with a single multi-tenant database


The simplest multi-tenant database pattern uses a single database to host data for all tenants. As more tenants
are added, the database is scaled up with more storage and compute resources. This scale up might be all that is
needed, although there is always an ultimate scale limit. However, long before that limit is reached the database
becomes unwieldy to manage.
Management operations that are focused on individual tenants are more complex to implement in a multi-
tenant database. And at scale these operations might become unacceptably slow. One example is a point-in-time
restore of the data for just one tenant.

G. Multi-tenant app with sharded multi-tenant databases


Most SaaS applications access the data of only one tenant at a time. This access pattern allows tenant data to be
distributed across multiple databases or shards, where all the data for any one tenant is contained in one shard.
Combined with a multi-tenant database pattern, a sharded model allows almost limitless scale.
Manage shards
Sharding adds complexity both to the design and operational management. A catalog is required in which to
maintain the mapping between tenants and databases. In addition, management procedures are required to
manage the shards and the tenant population. For example, procedures must be designed to add and remove
shards, and to move tenant data between shards. One way to scale is to add a new shard and populate it
with new tenants. At other times you might split a densely populated shard into two less-densely populated
shards. After several tenants have been moved or discontinued, you might merge sparsely populated shards
together. The merge would result in more cost-efficient resource utilization. Tenants might also be moved
between shards to balance workloads.
SQL Database provides a split/merge tool that works in conjunction with the sharding library and the catalog
database. The provided app can split and merge shards, and it can move tenant data between shards. The app
also maintains the catalog during these operations, marking affected tenants as offline prior to moving them.
After the move, the app updates the catalog again with the new mapping and marks the tenant as back
online.
Smaller databases more easily managed
By distributing tenants across multiple databases, the sharded multi-tenant solution results in smaller databases
that are more easily managed. For example, restoring a specific tenant to a prior point in time now involves
restoring a single smaller database from a backup, rather than a larger database that contains all tenants. The
database size, and number of tenants per database, can be chosen to balance the workload and the
management efforts.
Tenant identifier in the schema
Depending on the sharding approach used, additional constraints may be imposed on the database schema. The
SQL Database split/merge application requires that the schema includes the sharding key, which typically is the
tenant identifier. The tenant identifier is the leading element in the primary key of all sharded tables. The tenant
identifier enables the split/merge application to quickly locate and move data associated with a specific tenant.
Elastic pool for shards
Sharded multi-tenant databases can be placed in elastic pools. In general, having many single-tenant databases
in a pool is as cost efficient as having many tenants in a few multi-tenant databases. Multi-tenant databases are
advantageous when there are a large number of relatively inactive tenants.
H. Hybrid sharded multi-tenant database model
In the hybrid model, all databases have the tenant identifier in their schema. The databases are all capable of
storing more than one tenant, and the databases can be sharded. So in the schema sense, they are all multi-
tenant databases. Yet in practice some of these databases contain only one tenant. Regardless, the quantity of
tenants stored in a given database has no effect on the database schema.
Move tenants around
At any time, you can move a particular tenant to its own multi-tenant database. And at any time, you can change
your mind and move the tenant back to a database that contains multiple tenants. You can also assign a tenant
to a new single-tenant database when you provision the new database.
The hybrid model shines when there are large differences between the resource needs of identifiable groups of
tenants. For example, suppose that tenants participating in a free trial are not guaranteed the same high level of
performance that subscribing tenants are. The policy might be for tenants in the free trial phase to be stored in a
multi-tenant database that is shared among all the free trial tenants. When a free trial tenant subscribes to the
basic service tier, the tenant can be moved to another multi-tenant database that might have fewer tenants. A
subscriber that pays for the premium service tier could be moved to its own new single-tenant database.
Pools
In this hybrid model, the single-tenant databases for subscriber tenants can be placed in resource pools to
reduce database costs per tenant. This is also done in the database-per-tenant model.

I. Tenancy models compared


The following table summarizes the differences between the main tenancy models.

| Measurement | Standalone app | Database-per-tenant | Sharded multi-tenant |
| --- | --- | --- | --- |
| Scale | Medium; 1-100s | Very high; 1-100,000s | Unlimited; 1-1,000,000s |
| Tenant isolation | Very high | High | Low; except for any single tenant that is alone in an MT db. |
| Database cost per tenant | High; sized for peaks. | Low; pools used. | Lowest, for small tenants in MT DBs. |
| Performance monitoring and management | Per-tenant only | Aggregate + per-tenant | Aggregate; per-tenant only for singles. |
| Development complexity | Low | Low | Medium; due to sharding. |
| Operational complexity | Low-High. Individually simple, complex at scale. | Low-Medium. Patterns address complexity at scale. | Low-High. Individual tenant management is complex. |

Next steps
Deploy and explore a multi-tenant Wingtip application that uses the database-per-tenant SaaS model -
Azure SQL Database
Welcome to the Wingtip Tickets sample SaaS Azure SQL Database tenancy app
Video indexed and annotated for multi-tenant SaaS
app using Azure SQL Database
7/12/2022 • 4 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article is an annotated index into the time locations of an 81-minute video about SaaS tenancy models or
patterns. This article enables you to skip backward or forward in the video to whichever portion interests you. The
video explains the major design options for a multi-tenant database application on Azure SQL Database. The
video includes demos, walkthroughs of management code, and at times more detail informed by experience
than might be in our written documentation.
The video amplifies information in our written documentation, found at:
Conceptual: Multi-tenant SaaS database tenancy patterns
Tutorials: The Wingtip Tickets SaaS application
The video and the articles describe the many phases of creating a multi-tenant application on Azure SQL
Database in the cloud. Special features of Azure SQL Database make it easier to develop and implement multi-
tenant apps that are both easier to manage and reliably performant.
We routinely update our written documentation. The video is not edited or updated, so eventually more of its
detail may become outdated.

Sequence of 38 time-indexed screenshots


This section indexes the time location for 38 discussions throughout the 81-minute video. Each time index is
annotated with a screenshot from the video, and sometimes with additional information.
Each time index is in the format of h:mm:ss. For instance, the second indexed time location, labeled Session
objectives , starts at the approximate time location of 0:03:11 .
Compact links to video indexed time locations
The following titles are links to their corresponding annotated sections later in this article:
1. (Start) Welcome slide, 0:00:03
2. Session objectives, 0:03:11
3. Agenda, 0:04:17
4. Multi-tenant web app, 0:05:05
5. App web form in action, 0:05:55
6. Per-tenant cost (scale, isolation, recovery), 0:09:31
7. Database models for multi-tenant: pros and cons, 0:11:59
8. Hybrid model blends benefits of MT/ST, 0:13:01
9. Single-tenant vs multi-tenant: pros and cons, 0:16:44
10. Pools are cost-effective for unpredictable workloads, 0:19:36
11. Demo of database-per-tenant and hybrid ST/MT, 0:20:08
12. Live app form showing Dojo, 0:20:29
13. MYOB and not a DBA in sight, 0:28:54
14. MYOB elastic pool usage example, 0:29:40
15. Learning from MYOB and other ISVs, 0:31:36
16. Patterns compose into E2E SaaS scenario, 0:43:15
17. Canonical hybrid multi-tenant SaaS app, 0:47:33
18. Wingtip SaaS sample app, 0:48:10
19. Scenarios and patterns explored in the tutorials, 0:49:10
20. Demo of tutorials and GitHub repository, 0:50:18
21. GitHub repo Microsoft/WingtipSaaS, 0:50:38
22. Exploring the patterns, 0:56:20
23. Provisioning tenants and onboarding, 0:57:44
24. Provisioning tenants and application connection, 0:58:58
25. Demo of management scripts provisioning a single tenant, 0:59:43
26. PowerShell to provision and catalog, 1:00:02
27. T-SQL SELECT * FROM TenantsExtended, 1:03:30
28. Managing unpredictable tenant workloads, 1:04:36
29. Elastic pool monitoring, 1:06:39
30. Load generation and performance monitoring, 1:09:42
31. Schema management at scale, 1:10:33
32. Distributed query across tenant databases, 1:12:21
33. Demo of ticket generation, 1:12:32
34. SSMS adhoc analytics, 1:12:46
35. Extract tenant data into Azure Synapse Analytics, 1:16:32
36. Graph of daily sale distribution, 1:16:48
37. Wrap up and call to action, 1:19:52
38. Resources for more information, 1:20:42

Annotated index time locations in the video


Clicking any screenshot image takes you to the exact time location in the video.

1. (Start) Welcome slide, 0:00:01


Learning from MYOB: Design patterns for SaaS applications on Azure SQL Database - BRK3120
Title: Learning from MYOB: Design patterns for SaaS applications on Azure SQL Database
Bill.Gibson@microsoft.com
Principal Program Manager, Azure SQL Database
Microsoft Ignite session BRK3120, Orlando, FL USA, October/11 2017

2. Session objectives, 0:01:53

Alternative models for multi-tenant apps, with pros and cons.


SaaS patterns to reduce development, management, and resource costs.
A sample app + scripts.
PaaS features + SaaS patterns make SQL Database a highly scalable, cost-efficient data platform for multi-
tenant SaaS.

3. Agenda, 0:04:09
4. Multi-tenant web app, 0:05:00

5. App web form in action, 0:05:39

6. Per-tenant cost (scale, isolation, recovery), 0:06:58

7. Database models for multi-tenant: pros and cons, 0:09:52

8. Hybrid model blends benefits of MT/ST, 0:12:29


9. Single-tenant vs multi-tenant: pros and cons, 0:13:11

10. Pools are cost-effective for unpredictable workloads, 0:17:49

11. Demo of database-per-tenant and hybrid ST/MT, 0:19:59

12. Live app form showing Dojo, 0:20:10


13. MYOB and not a DBA in sight, 0:25:06

14. MYOB elastic pool usage example, 0:29:30

15. Learning from MYOB and other ISVs, 0:31:25

16. Patterns compose into E2E SaaS scenario, 0:31:42


17. Canonical hybrid multi-tenant SaaS app, 0:46:04

18. Wingtip SaaS sample app, 0:48:01

19. Scenarios and patterns explored in the tutorials, 0:49:00

20. Demo of tutorials and GitHub repository, 0:50:12


21. GitHub repo Microsoft/WingtipSaaS, 0:50:32

22. Exploring the patterns, 0:56:15

23. Provisioning tenants and onboarding, 0:56:19

24. Provisioning tenants and application connection, 0:57:52

25. Demo of management scripts provisioning a single tenant, 0:59:36


26. PowerShell to provision and catalog, 0:59:56

27. T-SQL SELECT * FROM TenantsExtended, 1:03:25

28. Managing unpredictable tenant workloads, 1:03:34

29. Elastic pool monitoring, 1:06:32

30. Load generation and performance monitoring, 1:09:37


31. Schema management at scale, 1:09:40

32. Distributed query across tenant databases, 1:11:18

33. Demo of ticket generation, 1:12:28

34. SSMS adhoc analytics, 1:12:35

35. Extract tenant data into Azure Synapse Analytics, 1:15:46


36. Graph of daily sale distribution, 1:16:38

37. Wrap up and call to action, 1:17:43

38. Resources for more information, 1:20:35

Blog post, May 22, 2017


Conceptual: Multi-tenant SaaS database tenancy patterns
Tutorials: The Wingtip Tickets SaaS application
GitHub repositories for flavors of the Wingtip Tickets SaaS tenancy application:
GitHub repo for the Standalone application model.
GitHub repo for the DB Per Tenant model.
GitHub repo for the Multi-Tenant DB model.

Next steps
First tutorial article
Multi-tenant applications with elastic database tools
and row-level security
7/12/2022 • 11 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


Elastic database tools and row-level security (RLS) cooperate to enable scaling the data tier of a multi-tenant
application with Azure SQL Database. Together these technologies help you build an application that has a
highly scalable data tier. The data tier supports multi-tenant shards, and uses ADO.NET SqlClient or Entity
Framework. For more information, see Design Patterns for Multi-tenant SaaS Applications with Azure SQL
Database.
Elastic database tools enable developers to scale out the data tier with standard sharding practices, by
using .NET libraries and Azure service templates. Managing shards by using the Elastic Database Client
Library helps automate and streamline many of the infrastructural tasks typically associated with sharding.
Row-level security enables developers to safely store data for multiple tenants in the same database. RLS
security policies filter out rows that do not belong to the tenant executing a query. Centralizing the filter logic
inside the database simplifies maintenance and reduces the risk of a security error. The alternative of relying
on all client code to enforce security is risky.
By using these features together, an application can store data for multiple tenants in the same shard database. It
costs less per tenant when the tenants share a database. Yet the same application can also offer its premium
tenants the option of paying for their own dedicated single-tenant shard. One benefit of single-tenant isolation
is firmer performance guarantees. In a single-tenant database, there is no other tenant competing for resources.
The goal is to use the elastic database client library data-dependent routing APIs to automatically connect each
given tenant to the correct shard database. Only one shard contains the TenantId value for the given
tenant. The TenantId is the sharding key. After the connection is established, an RLS security policy within the
database ensures that the given tenant can access only those data rows that contain its TenantId.

NOTE
The tenant identifier might consist of more than one column. For convenience in this discussion, we informally assume a
single-column TenantId.

Download the sample project


Prerequisites
Use Visual Studio (2012 or higher)
Create three databases in Azure SQL Database
Download sample project: Elastic DB Tools for Azure SQL - Multi-Tenant Shards
Fill in the information for your databases at the beginning of Program.cs
This project extends the one described in Elastic DB Tools for Azure SQL - Entity Framework Integration by
adding support for multi-tenant shard databases. The project builds a simple console application for creating
blogs and posts. The project includes four tenants, plus two multi-tenant shard databases. This configuration is
illustrated in the preceding diagram.
Build and run the application. This run bootstraps the elastic database tools' shard map manager, and performs
the following tests:
1. Using Entity Framework and LINQ, create a new blog and then display all blogs for each tenant
2. Using ADO.NET SqlClient, display all blogs for a tenant
3. Try to insert a blog for the wrong tenant to verify that an error is thrown
Notice that because RLS has not yet been enabled in the shard databases, each of these tests reveals a problem:
tenants are able to see blogs that do not belong to them, and the application is not prevented from inserting a
blog for the wrong tenant. The remainder of this article describes how to resolve these problems by enforcing
tenant isolation with RLS. There are two steps:
1. Application tier : Modify the application code to always set the current TenantId in the SESSION_CONTEXT
after opening a connection. The sample project already sets the TenantId this way.
2. Data tier : Create an RLS security policy in each shard database to filter rows based on the TenantId stored in
SESSION_CONTEXT. Create a policy for each of your shard databases, otherwise rows in multi-tenant shards
are not filtered.

1. Application tier: Set TenantId in the SESSION_CONTEXT


First you connect to a shard database by using the data-dependent routing APIs of the elastic database client
library. The application still must tell the database which TenantId is using the connection. The TenantId tells the
RLS security policy which rows must be filtered out as belonging to other tenants. Store the current TenantId in
the SESSION_CONTEXT of the connection.
An alternative to SESSION_CONTEXT is to use CONTEXT_INFO, but SESSION_CONTEXT is a better option:
it is easier to use, it returns NULL by default, and it supports key-value pairs.
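As a brief illustration (the key name and value here are arbitrary examples, not tied to the sample project), the following T-SQL shows how a key-value pair is stored in and read back from SESSION_CONTEXT on a connection:

-- Store a value in SESSION_CONTEXT for the current connection.
EXEC sp_set_session_context @key = N'TenantId', @value = 42;

-- Read it back; SESSION_CONTEXT returns NULL if the key has not been set.
SELECT CAST(SESSION_CONTEXT(N'TenantId') AS int) AS TenantId;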
Entity Framework
For applications using Entity Framework, the easiest approach is to set the SESSION_CONTEXT within the
ElasticScaleContext override described in Data-dependent routing using EF DbContext. Create and execute a
SqlCommand that sets TenantId in the SESSION_CONTEXT to the shardingKey specified for the connection. Then
return the connection brokered through data-dependent routing. This way, you only need to write code once to
set the SESSION_CONTEXT.
// ElasticScaleContext.cs
// Constructor for data-dependent routing.
// This call opens a validated connection that is routed to the
// proper shard by the shard map manager.
// Note that the base class constructor call fails for an open connection
// if migrations need to be done and SQL credentials are used.
// This is the reason for the separation of constructors.
// ...
public ElasticScaleContext(ShardMap shardMap, T shardingKey, string connectionStr)
    : base(
        OpenDDRConnection(shardMap, shardingKey, connectionStr),
        true) // contextOwnsConnection
{
}

public static SqlConnection OpenDDRConnection(
    ShardMap shardMap,
    T shardingKey,
    string connectionStr)
{
    // No initialization.
    Database.SetInitializer<ElasticScaleContext<T>>(null);

    // Ask shard map to broker a validated connection for the given key.
    SqlConnection conn = null;
    try
    {
        conn = shardMap.OpenConnectionForKey(
            shardingKey,
            connectionStr,
            ConnectionOptions.Validate);

        // Set TenantId in SESSION_CONTEXT to shardingKey
        // to enable Row-Level Security filtering.
        SqlCommand cmd = conn.CreateCommand();
        cmd.CommandText =
            @"exec sp_set_session_context
                @key=N'TenantId', @value=@shardingKey";
        cmd.Parameters.AddWithValue("@shardingKey", shardingKey);
        cmd.ExecuteNonQuery();

        return conn;
    }
    catch (Exception)
    {
        if (conn != null)
        {
            conn.Dispose();
        }
        throw;
    }
}
// ...

Now the SESSION_CONTEXT is automatically set with the specified TenantId whenever ElasticScaleContext is
invoked:
// Program.cs
SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    using (var db = new ElasticScaleContext<int>(
        sharding.ShardMap, tenantId, connStrBldr.ConnectionString))
    {
        var query = from b in db.Blogs
                    orderby b.Name
                    select b;

        Console.WriteLine("All blogs for TenantId {0}:", tenantId);
        foreach (var item in query)
        {
            Console.WriteLine(item.Name);
        }
    }
});

ADO.NET SqlClient
For applications using ADO.NET SqlClient, create a wrapper function around method
ShardMap.OpenConnectionForKey. Have the wrapper automatically set TenantId in the SESSION_CONTEXT to
the current TenantId before returning a connection. To ensure that SESSION_CONTEXT is always set, you should
only open connections using this wrapper function.
// Program.cs
// Wrapper function for ShardMap.OpenConnectionForKey() that
// automatically sets SESSION_CONTEXT with the correct
// tenantId before returning a connection.
// As a best practice, you should only open connections using this method
// to ensure that SESSION_CONTEXT is always set before executing a query.
// ...
public static SqlConnection OpenConnectionForTenant(
    ShardMap shardMap, int tenantId, string connectionStr)
{
    SqlConnection conn = null;
    try
    {
        // Ask shard map to broker a validated connection for the given key.
        conn = shardMap.OpenConnectionForKey(
            tenantId, connectionStr, ConnectionOptions.Validate);

        // Set TenantId in SESSION_CONTEXT to shardingKey
        // to enable Row-Level Security filtering.
        SqlCommand cmd = conn.CreateCommand();
        cmd.CommandText =
            @"exec sp_set_session_context
                @key=N'TenantId', @value=@shardingKey";
        cmd.Parameters.AddWithValue("@shardingKey", tenantId);
        cmd.ExecuteNonQuery();

        return conn;
    }
    catch (Exception)
    {
        if (conn != null)
        {
            conn.Dispose();
        }
        throw;
    }
}

// ...

// Example query via ADO.NET SqlClient.
// If row-level security is enabled, only Tenant 4's blogs are listed.
SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    using (SqlConnection conn = OpenConnectionForTenant(
        sharding.ShardMap, tenantId4, connStrBldr.ConnectionString))
    {
        SqlCommand cmd = conn.CreateCommand();
        cmd.CommandText = @"SELECT * FROM Blogs";

        Console.WriteLine(@"--
All blogs for TenantId {0} (using ADO.NET SqlClient):", tenantId4);

        SqlDataReader reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine("{0}", reader["Name"]);
        }
    }
});

2. Data tier: Create row-level security policy


Create a security policy to filter the rows each tenant can access
Now that the application is setting SESSION_CONTEXT with the current TenantId before querying, an RLS
security policy can filter queries and exclude rows that have a different TenantId.
RLS is implemented in Transact-SQL. A user-defined function defines the access logic, and a security policy binds
this function to any number of tables. For this project:
1. The function verifies that the application is connected to the database, and that the TenantId stored in the
SESSION_CONTEXT matches the TenantId of a given row.
The application is connected, rather than some other SQL user.
2. A FILTER predicate allows rows that meet the TenantId filter to pass through for SELECT, UPDATE, and
DELETE queries.
A BLOCK predicate prevents rows that fail the filter from being INSERTed or UPDATEd.
If SESSION_CONTEXT has not been set, the function returns NULL, and no rows are visible or able to
be inserted.
To enable RLS on all shards, execute the following T-SQL by using either Visual Studio (SSDT), SSMS, or the
PowerShell script included in the project. Or if you are using Elastic Database Jobs, you can automate execution
of this T-SQL on all shards.

CREATE SCHEMA rls; -- Separate schema to organize RLS objects.
GO

CREATE FUNCTION rls.fn_tenantAccessPredicate(@TenantId int)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_accessResult
        -- Use the user in your application's connection string.
        -- Here we use 'dbo' only for demo purposes!
        WHERE DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('dbo')
        AND CAST(SESSION_CONTEXT(N'TenantId') AS int) = @TenantId;
GO

CREATE SECURITY POLICY rls.tenantAccessPolicy
    ADD FILTER PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Blogs,
    ADD BLOCK PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Blogs,
    ADD FILTER PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Posts,
    ADD BLOCK PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.Posts;
GO

TIP
In a complex project you might need to add the predicate on hundreds of tables, which could be tedious. There is a helper
stored procedure that automatically generates a security policy, and adds a predicate on all tables in a schema. For more
information, see the blog post at Apply Row-Level Security to all tables - helper script (blog).

Now if you run the sample application again, tenants see only rows that belong to them. In addition, the
application cannot insert rows that belong to tenants other than the one currently connected to the shard
database. Also, the app cannot update the TenantId in any rows it can see. If the app attempts to do either, a
DbUpdateException is raised.
If you add a new table later, ALTER the security policy to add FILTER and BLOCK predicates on the new table.
ALTER SECURITY POLICY rls.tenantAccessPolicy
ADD FILTER PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.MyNewTable,
ADD BLOCK PREDICATE rls.fn_tenantAccessPredicate(TenantId) ON dbo.MyNewTable;
GO

Add default constraints to automatically populate TenantId for INSERTs


You can put a default constraint on each table to automatically populate the TenantId with the value currently
stored in SESSION_CONTEXT when inserting rows. An example follows.

-- Create default constraints to auto-populate TenantId with the
-- value of SESSION_CONTEXT for inserts.
ALTER TABLE Blogs
    ADD CONSTRAINT df_TenantId_Blogs
    DEFAULT CAST(SESSION_CONTEXT(N'TenantId') AS int) FOR TenantId;
GO

ALTER TABLE Posts
    ADD CONSTRAINT df_TenantId_Posts
    DEFAULT CAST(SESSION_CONTEXT(N'TenantId') AS int) FOR TenantId;
GO

Now the application does not need to specify a TenantId when inserting rows:

SqlDatabaseUtils.SqlRetryPolicy.ExecuteAction(() =>
{
    using (var db = new ElasticScaleContext<int>(
        sharding.ShardMap, tenantId, connStrBldr.ConnectionString))
    {
        // The default constraint sets TenantId automatically!
        var blog = new Blog { Name = name };
        db.Blogs.Add(blog);
        db.SaveChanges();
    }
});

NOTE
If you use default constraints for an Entity Framework project, it is recommended that you NOT include the TenantId
column in your EF data model. This recommendation is because Entity Framework queries automatically supply default
values that override the default constraints created in T-SQL that use SESSION_CONTEXT. To use default constraints in the
sample project, for instance, you should remove TenantId from DataClasses.cs (and run Add-Migration in the Package
Manager Console) and use T-SQL to ensure that the field only exists in the database tables. This way, EF doesn't
automatically supply incorrect default values when inserting data.

(Optional) Enable a superuser to access all rows


Some applications may want to create a superuser who can access all rows. A superuser could enable reporting
across all tenants on all shards. Or a superuser could perform split-merge operations on shards that involve
moving tenant rows between databases.
To enable a superuser, create a new SQL user ( superuser in this example) in each shard database. Then alter the
security policy with a new predicate function that allows this user to access all rows. Such a function is given
next.
-- New predicate function that adds superuser logic.
CREATE FUNCTION rls.fn_tenantAccessPredicateWithSuperUser(@TenantId int)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_accessResult
        WHERE
        (
            DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('dbo') -- Replace 'dbo'.
            AND CAST(SESSION_CONTEXT(N'TenantId') AS int) = @TenantId
        )
        OR
        (
            DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('superuser')
        );
GO

-- Atomically swap in the new predicate function on each table.
ALTER SECURITY POLICY rls.tenantAccessPolicy
    ALTER FILTER PREDICATE rls.fn_tenantAccessPredicateWithSuperUser(TenantId) ON dbo.Blogs,
    ALTER BLOCK PREDICATE rls.fn_tenantAccessPredicateWithSuperUser(TenantId) ON dbo.Blogs,
    ALTER FILTER PREDICATE rls.fn_tenantAccessPredicateWithSuperUser(TenantId) ON dbo.Posts,
    ALTER BLOCK PREDICATE rls.fn_tenantAccessPredicateWithSuperUser(TenantId) ON dbo.Posts;
GO

Maintenance
Adding new shards: Execute the T-SQL script to enable RLS on any new shards; otherwise, queries on these
shards are not filtered.
Adding new tables: Add a FILTER and BLOCK predicate to the security policy on all shards whenever a new
table is created. Otherwise, queries on the new table are not filtered. This addition can be automated by
using a DDL trigger, as described in Apply Row-Level Security automatically to newly created tables (blog); a hedged sketch of such a trigger follows this list.
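The following is a minimal, hypothetical sketch of such a DDL trigger, not the implementation from the linked blog post. It assumes that every new table includes a TenantId column and that the rls.tenantAccessPolicy and rls.fn_tenantAccessPredicate objects shown earlier already exist.

-- Hypothetical sketch: add FILTER and BLOCK predicates to every newly created table.
-- Assumes all new tables have a TenantId column; adjust for your schema.
CREATE TRIGGER trg_AddRlsPredicatesToNewTables
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    DECLARE @eventData xml = EVENTDATA();
    DECLARE @schemaName sysname =
        @eventData.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)');
    DECLARE @tableName sysname =
        @eventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)');

    DECLARE @sql nvarchar(max) = N'
        ALTER SECURITY POLICY rls.tenantAccessPolicy
            ADD FILTER PREDICATE rls.fn_tenantAccessPredicate(TenantId)
                ON ' + QUOTENAME(@schemaName) + N'.' + QUOTENAME(@tableName) + N',
            ADD BLOCK PREDICATE rls.fn_tenantAccessPredicate(TenantId)
                ON ' + QUOTENAME(@schemaName) + N'.' + QUOTENAME(@tableName) + N';';

    EXEC sp_executesql @sql;
END;
GO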

Summary
Elastic database tools and row-level security can be used together to scale out an application's data tier with
support for both multi-tenant and single-tenant shards. Multi-tenant shards can be used to store data more
efficiently. This efficiency is pronounced where a large number of tenants have only a few rows of data. Single-
tenant shards can support premium tenants that have stricter performance and isolation requirements. For
more information, see Row-Level Security reference.

Additional resources
What is an Azure elastic pool?
Scaling out with Azure SQL Database
Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database
Authentication in multitenant apps, using Azure AD and OpenID Connect
Tailspin Surveys application

Questions and Feature Requests


For questions, contact us on the Microsoft Q&A question page for SQL Database. And add any feature requests
to the SQL Database feedback forum.
The Wingtip Tickets SaaS application
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The same Wingtip Tickets SaaS application is implemented in each of three samples. The app is a simple event
listing and ticketing SaaS app targeting small venues - theaters, clubs, etc. Each venue is a tenant of the app, and
has its own data: venue details, lists of events, customers, ticket orders, etc. The app, together with the
management scripts and tutorials, showcases an end-to-end SaaS scenario. This includes provisioning tenants,
monitoring and managing performance, schema management, and cross-tenant reporting and analytics.

Three SaaS application and tenancy patterns


Three versions of the app are available; each explores a different database tenancy pattern on Azure SQL
Database. The first uses a standalone application per tenant with its own database. The second uses a multi-
tenant app with a database per tenant. The third sample uses a multi-tenant app with sharded multi-tenant
databases.

Each sample includes the application code, plus management scripts and tutorials that explore a range of design
and management patterns. Each sample deploys in less than five minutes. All three can be deployed side-by-side
so you can compare the differences in design and management.

Standalone application per tenant pattern


The standalone app per tenant pattern uses a single-tenant application with a database for each tenant. Each
tenant’s app, including its database, is deployed into a separate Azure resource group. The resource group can
be deployed in the service provider’s subscription or the tenant’s subscription and managed by the provider on
the tenant’s behalf. The standalone app per tenant pattern provides the greatest tenant isolation, but is typically
the most expensive as there's no opportunity to share resources between multiple tenants. This pattern is well
suited to applications that might be more complex and which are deployed to smaller numbers of tenants. With
standalone deployments, the app can be customized for each tenant more easily than in other patterns.
Check out the tutorials and code on GitHub .../Microsoft/WingtipTicketsSaaS-StandaloneApp.

Database per tenant pattern


The database per tenant pattern is effective for service providers that are concerned with tenant isolation and
want to run a centralized service that allows cost-efficient use of shared resources. A database is created for
each venue, or tenant, and all the databases are centrally managed. Databases can be hosted in elastic pools to
provide cost-efficient and easy performance management, which leverages the unpredictable workload patterns
of the tenants. A catalog database holds the mapping between tenants and their databases. This mapping is
managed using the shard map management features of the Elastic Database Client Library, which provides
efficient connection management to the application.
Check out the tutorials and code on GitHub .../Microsoft/WingtipTicketsSaaS-DbPerTenant.

Sharded multi-tenant database pattern


Multi-tenant databases are effective for service providers that want a lower cost per tenant and can accept
reduced tenant isolation. This pattern allows packing large numbers of tenants into an individual database,
driving the cost-per-tenant down. Near infinite scale is possible by sharding the tenants across multiple
databases. A catalog database maps tenants to databases.
This pattern also allows a hybrid model in which you can optimize for cost with multiple tenants in a database,
or optimize for isolation with a single tenant in their own database. The choice can be made on a tenant-by-
tenant basis, either when the tenant is provisioned or later, with no impact on the application. This model can be
used effectively when groups of tenants need to be treated differently. For example, low-cost tenants can be
assigned to shared databases, while premium tenants can be assigned to their own databases.
Check out the tutorials and code on GitHub .../Microsoft/WingtipTicketsSaaS-MultiTenantDb.

Next steps
Conceptual descriptions
A more detailed explanation of the application tenancy patterns is available at Multi-tenant SaaS database
tenancy patterns
Tutorials and code
Standalone app per tenant:
Tutorials for standalone app.
Code for standalone app, on GitHub.
Database per tenant:
Tutorials for database per tenant.
Code for database per tenant, on GitHub.
Sharded multi-tenant:
Tutorials for sharded multi-tenant.
Code for sharded multi-tenant, on GitHub.
General guidance for working with Wingtip Tickets
sample SaaS apps
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article contains general guidance for running the Wingtip Tickets sample SaaS applications that use Azure
SQL Database.

Download and unblock the Wingtip Tickets SaaS scripts


Executable contents (scripts, dlls) may be blocked by Windows when zip files are downloaded from an external
source and extracted. When extracting the scripts from a zip file, follow the steps below to unblock the .zip
file before extracting. This ensures the scripts are allowed to run.
1. Browse to the Wingtip Tickets SaaS GitHub repo for the database tenancy pattern you wish to explore:
WingtipTicketsSaaS-StandaloneApp
WingtipTicketsSaaS-DbPerTenant
WingtipTicketsSaaS-MultiTenantDb
2. Click Clone or download.
3. Click Download zip and save the file.
4. Right-click the zip file, and select Properties. The zip file name will correspond to the repo name. (ex. WingtipTicketsSaaS-DbPerTenant-master.zip)
5. On the General tab, select Unblock.
6. Click OK.
7. Extract the files.
Scripts are located in the ..\Learning Modules folder.

Working with the Wingtip Tickets PowerShell scripts


To get the most out of the sample you need to dive into the provided scripts. Use breakpoints and step through
the scripts as they execute and examine how the different SaaS patterns are implemented. To easily step through
the provided scripts and modules for the best understanding, we recommend using the PowerShell ISE.
Update the configuration file for your deployment
Edit the UserConfig.psm1 file with the resource group and user value that you set during deployment:
1. Open the PowerShell ISE and load ...\Learning Modules\UserConfig.psm1
2. Update ResourceGroupName and Name with the specific values for your deployment (on lines 10 and 11
only).
3. Save the changes!
Setting these values here simply keeps you from having to update these deployment-specific values in every
script.
Execute the scripts by pressing F5
Several scripts use $PSScriptRoot to navigate folders, and $PSScriptRoot is only evaluated when scripts are
executed by pressing F5 . Highlighting and running a selection (F8 ) can result in errors, so press F5 when
running scripts.
Step through the scripts to examine the implementation
The best way to understand the scripts is by stepping through them to see what they do. Check out the included
Demo- scripts that present an easy to follow high-level workflow. The Demo- scripts show the steps required
to accomplish each task, so set breakpoints and drill deeper into the individual calls to see implementation
details for the different SaaS patterns.
Tips for exploring and stepping through PowerShell scripts:
Open Demo- scripts in the PowerShell ISE.
Execute or continue with F5 (using F8 is not advised because $PSScriptRoot is not evaluated when running
selections of a script).
Place breakpoints by clicking or selecting a line and pressing F9 .
Step over a function or script call using F10 .
Step into a function or script call using F11 .
Step out of the current function or script call using Shift + F11 .

Explore database schema and execute SQL queries using SSMS


Use SQL Server Management Studio (SSMS) to connect and browse the application servers and databases.
The deployment initially has tenants and catalog servers to connect to. The naming of the servers depends on
the database tenancy pattern (see below for specifics).
Standalone application: servers for each tenant (ex. contosoconcerthall-<User> server) and catalog-sa-
<User>
Database per tenant: tenants1-dpt-<User> and catalog-dpt-<User> servers
Multi-tenant database: tenants1-mt-<User> and catalog-mt-<User> servers
To ensure a successful demo connection, all servers have a firewall rule allowing all IPs through.
1. Open SSMS and connect to the tenants. The server name depends on the database tenancy pattern
you've selected (see below for specifics):
Standalone application: servers of individual tenants (ex. contosoconcerthall-
<User>.database.windows.net)
Database per tenant: tenants1-dpt-<User>.database.windows.net
Multi-tenant database: tenants1-mt-<User>.database.windows.net
2. Click Connect > Database Engine...:

3. Demo credentials are: Login = developer, Password = P@ssword1


The image below demonstrates the login for the Database per tenant pattern.
4. Repeat steps 2-3 and connect to the catalog server (see below for specific server names based on the
database tenancy pattern selected)
Standalone application: catalog-sa-<User>.database.windows.net
Database per tenant: catalog-dpt-<User>.database.windows.net
Multi-tenant database: catalog-mt-<User>.database.windows.net
After successfully connecting you should see all servers. Your list of databases might be different, depending on
the tenants you have provisioned.
The image below demonstrates the login for the Database per tenant pattern.

Next steps
Deploy the Wingtip Tickets SaaS Standalone Application
Deploy the Wingtip Tickets SaaS Database per Tenant application
Deploy the Wingtip Tickets SaaS Multi-tenant Database application
Deploy and explore a standalone single-tenant
application that uses Azure SQL Database
7/12/2022 • 4 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, you deploy and explore the Wingtip Tickets SaaS sample application developed using the
standalone application, or app-per-tenant, pattern. The application is designed to showcase features of Azure
SQL Database that simplify enabling multi-tenant SaaS scenarios.
The standalone application or app-per-tenant pattern deploys an application instance for each tenant. Each
application is configured for a specific tenant and deployed in a separate Azure resource group. Multiple
instances of the application are provisioned to provide a multi-tenant solution. This pattern is best suited to
smaller numbers, of tenants where tenant isolation is a top priority. Azure has partner programs that allow
resources to be deployed into a tenant’s subscription and managed by a service provider on the tenant’s behalf.
In this tutorial, you'll deploy three standalone applications for three tenants into your Azure subscription. You
have full access to explore and work with the individual application components.
The application source code and management scripts are available in the WingtipTicketsSaaS-StandaloneApp
GitHub repo. The application was created using Visual Studio 2015, and doesn't successfully open and compile
in Visual Studio 2019 without updating.
In this tutorial you learn:
How to deploy the Wingtip Tickets SaaS Standalone Application.
Where to get the application source code, and management scripts.
About the servers and databases that make up the app.
Additional tutorials will be released. They'll allow you to explore a range of management scenarios based on this
application pattern.

Deploy the Wingtip Tickets SaaS Standalone Application


Deploy the app for the three provided tenants:
1. Click each blue Deploy to Azure button to open the deployment template in the Azure portal. Each
template requires two parameter values: a name for a new resource group, and a user name that
distinguishes this deployment from other deployments of the app. The next step provides details for
setting these values.
Contoso Concert Hall

Dogwood Dojo

Fabrikam Jazz Club

2. Enter required parameter values for each deployment.


IMPORTANT
Some authentication and server firewalls are intentionally unsecured for demonstration purposes. Create a new
resource group for each application deployment. Do not use an existing resource group. Do not use this
application, or any resources it creates, for production. Delete all the resource groups when you are finished with
the applications to stop related billing.

It's best to use only lowercase letters, numbers, and hyphens in your resource names.
For Resource group , select Create new, and then provide a lowercase Name for the resource
group. wingtip-sa-<venueName>-<user> is the recommended pattern. For <venueName>,
replace the venue name with no spaces. For <user>, replace the user value from below. With this
pattern, resource group names might be wingtip-sa-contosoconcerthall-af1, wingtip-sa-
dogwooddojo-af1, wingtip-sa-fabrikamjazzclub-af1.
Select a Location from the drop-down list.
For User - We recommend a short user value, such as your initials plus a digit: for example, af1.
3. Deploy the application .
Click to agree to the terms and conditions.
Click Purchase .
4. Monitor the status of all three deployments by clicking Notifications (the bell icon to the right of the
search box). Deploying the apps takes around five minutes.

Run the applications


The app showcases venues that host events. The venues are the tenants of the application. Each venue gets a
personalized web site to list their events and sell tickets. Venue types include concert halls, jazz clubs, and sports
clubs. In the sample, the type of venue determines the background photograph shown on the venue's web site.
In the standalone app model, each venue has a separate application instance with its own standalone Azure SQL
Database.
1. Open the events page for each of the three tenants in separate browser tabs:
http://events.contosoconcerthall.<user>.trafficmanager.net
http://events.dogwooddojo.<user>.trafficmanager.net
http://events.fabrikamjazzclub.<user>.trafficmanager.net
(In each URL, replace <user> with your deployment's user value.)
To control the distribution of incoming requests, the app uses Azure Traffic Manager. Each tenant-specific app
instance includes the tenant name as part of the domain name in the URL. All the tenant URLs include your
specific User value. The URLs follow the following format:
http://events.<venuename>.<user>.trafficmanager.net
Each tenant's database Location is included in the app settings of the corresponding deployed app.
In a production environment, typically you create a CNAME DNS record to point a company internet domain to
the URL of the traffic manager profile.

Explore the servers and tenant databases


Let’s look at some of the resources that were deployed:
1. In the Azure portal, browse to the list of resource groups.
2. You should see the three tenant resource groups.
3. Open the wingtip-sa-fabrikam-<user> resource group, which contains the resources for the Fabrikam
Jazz Club deployment. The fabrikamjazzclub-<user> server contains the fabrikamjazzclub database.
Each tenant database is a 50 DTU standalone database.

Additional resources
To learn about multi-tenant SaaS applications, see Design patterns for multi-tenant SaaS applications.

Delete resource groups to stop billing


When you have finished using the sample, delete all the resource groups you created to stop the associated
billing.

Next steps
In this tutorial you learned:
How to deploy the Wingtip Tickets SaaS Standalone Application.
About the servers and databases that make up the app.
How to delete sample resources to stop related billing.
Next, try the Provision and Catalog tutorial in which you'll explore the use of a catalog of tenants that enables a
range of cross-tenant scenarios such as schema management and tenant analytics.
Provision and catalog new tenants using the
application per tenant SaaS pattern
7/12/2022 • 7 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article covers the provisioning and cataloging of new tenants using the standalone app per tenant SaaS
pattern. This article has two major parts:
Conceptual discussion of provisioning and cataloging new tenants
A tutorial that highlights sample PowerShell code that accomplishes the provisioning and cataloging
The tutorial uses the Wingtip Tickets sample SaaS application, adapted to the standalone app per
tenant pattern.

Standalone application per tenant pattern


The standalone app per tenant pattern is one of several patterns for multi-tenant SaaS applications. In this
pattern, a standalone app is provisioned for each tenant. The application comprises application level
components and an Azure SQL Database. Each tenant app can be deployed in the vendor’s subscription.
Alternatively, Azure offers a managed applications program in which an app can be deployed in a tenant’s
subscription and managed by the vendor on the tenant’s behalf.

When deploying an application for a tenant, the app and database are provisioned in a new resource group
created for the tenant. Using separate resource groups isolates each tenant's application resources and allows
them to be managed independently. Within each resource group, each application instance is configured to
access its corresponding database directly. This connection model contrasts with other patterns that use a
catalog to broker connections between the app and the database. And as there is no resource sharing, each
tenant database must be provisioned with sufficient resources to handle its peak load. This pattern tends to be
used for SaaS applications with fewer tenants, where there is a strong emphasis on tenant isolation and less
emphasis on resource costs.
Using a tenant catalog with the application per tenant pattern
While each tenant’s app and database are fully isolated, various management and analytics scenarios may
operate across tenants. For example, applying a schema change for a new release of the application requires
changes to the schema of each tenant database. Reporting and analytics scenarios may also require access to all
the tenant databases regardless of where they are deployed.

The tenant catalog holds a mapping between a tenant identifier and a tenant database, allowing an identifier to
be resolved to a server and database name. In the Wingtip SaaS app, the tenant identifier is computed as a hash
of the tenant name, although other schemes could be used. While standalone applications don't need the
catalog to manage connections, the catalog can be used to scope other actions to a set of tenant databases. For
example, Elastic Query can use the catalog to determine the set of databases across which queries are
distributed for cross-tenant reporting.
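As a purely conceptual sketch (the Wingtip sample actually stores this mapping in the EDCL shard map, which must be managed only through the EDCL APIs as noted below; the table and column names here are hypothetical), the catalog's job can be pictured as resolving a tenant key to a server and database:

-- Conceptual illustration only; not the Wingtip implementation.
CREATE TABLE dbo.TenantMappingExample
(
    TenantId     int           NOT NULL PRIMARY KEY,  -- e.g., a hash of the tenant name
    TenantName   nvarchar(128) NOT NULL,
    ServerName   nvarchar(128) NOT NULL,
    DatabaseName nvarchar(128) NOT NULL
);

-- Resolve a tenant identifier to the database that holds its data.
SELECT ServerName, DatabaseName
FROM dbo.TenantMappingExample
WHERE TenantId = 123456789;  -- hypothetical tenant key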

Elastic Database Client Library


In the Wingtip sample application, the catalog is implemented by the shard management features of the Elastic
Database Client Library (EDCL). The library enables an application to create, manage, and use a shard map that
is stored in a database. In the Wingtip Tickets sample, the catalog is stored in the tenant catalog database. The
shard maps a tenant key to the shard (database) in which that tenant’s data is stored. EDCL functions manage a
global shard map stored in tables in the tenant catalog database and a local shard map stored in each shard.
EDCL functions can be called from applications or PowerShell scripts to create and manage the entries in the
shard map. Other EDCL functions can be used to retrieve the set of shards or connect to the correct database for
given tenant key.

IMPORTANT
Do not edit the data in the catalog database or the local shard map in the tenant databases directly. Direct updates are
not supported due to the high risk of data corruption. Instead, edit the mapping data by using EDCL APIs only.

Tenant provisioning
Each tenant requires a new Azure resource group, which must be created before resources can be provisioned
within it. Once the resource group exists, an Azure Resource Manager template can be used to deploy the
application components and the database, and then configure the database connection. To initialize the database
schema, the template can import a bacpac file. Alternatively, the database can be created as a copy of a
‘template’ database. The database is then further updated with initial venue data and registered in the catalog.

Tutorial
In this tutorial you learn how to:
Provision a catalog
Register the sample tenant databases that you deployed earlier in the catalog
Provision an additional tenant and register it in the catalog
An Azure Resource Manager template is used to deploy and configure the application, create the tenant
database, and then import a bacpac file to initialize it. The import request may be queued for several minutes
before it is actioned.
At the end of this tutorial, you have a set of standalone tenant applications, with each database registered in the
catalog.

Prerequisites
To complete this tutorial, make sure the following prerequisites are completed:
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell
The three sample tenant apps are deployed. To deploy these apps in less than five minutes, see Deploy and
explore the Wingtip Tickets SaaS Standalone Application pattern.

Provision the catalog


In this task, you learn how to provision the catalog used to register all the tenant databases. You will:
Provision the catalog database using an Azure Resource Manager template. The database is
initialized by importing a bacpac file.
Register the sample tenant apps that you deployed earlier. Each tenant is registered using a key
constructed from a hash of the tenant name. The tenant name is also stored in an extension table in the
catalog.
1. In PowerShell ISE, open ...\Learning Modules\UserConfig.psm1 and update the <user> value to the value
you used when deploying the three sample applications. Save the file.
2. In PowerShell ISE, open ...\Learning Modules\ProvisionTenants\Demo-ProvisionAndCatalog.ps1 and set
$Scenario = 1 . Deploy the tenant catalog and register the pre-defined tenants.
3. Add a breakpoint by putting your cursor anywhere on the line that says, & $PSScriptRoot\New-Catalog.ps1
, and then press F9 .
4. Run the script by pressing F5 .
5. After script execution stops at the breakpoint, press F11 to step into the New-Catalog.ps1 script.
6. Trace the script's execution using the Debug menu options, F10 and F11, to step over or into called
functions.
For more information about debugging PowerShell scripts, see Tips on working with and debugging
PowerShell scripts.
Once the script completes, the catalog will exist and all the sample tenants will be registered.
Now look at the resources you created.
1. Open the Azure portal and browse the resource groups. Open the wingtip-sa-catalog-<user>
resource group and note the catalog server and database.
2. Open the database in the portal and select Data explorer from the left-hand menu. Click the Login
command and then enter the Password = P@ssword1 .
3. Explore the schema of the tenantcatalog database.
The objects in the __ShardManagement schema are all provided by the Elastic Database Client Library.
The Tenants table and TenantsExtended view are extensions added in the sample that demonstrate
how you can extend the catalog to provide additional value.
4. Run the query, SELECT * FROM dbo.TenantsExtended .
As an alternative to using the Data Explorer you can connect to the database from SQL Server
Management Studio. To do this, connect to the server wingtip-
Note that you should not edit data directly in the catalog - always use the shard management APIs.

Provision a new tenant application


In this task, you learn how to provision a single tenant application. You will:
Create a new resource group for the tenant.
Provision the application and database into the new resource group using an Azure Resource
Manager template. This action includes initializing the database with common schema and reference
data by importing a bacpac file.
Initialize the database with basic tenant information . This action includes specifying the venue type,
which determines the photograph used as the background on its events web site.
Register the database in the catalog database .
1. In PowerShell ISE, open ...\Learning Modules\ProvisionTenants\Demo-ProvisionAndCatalog.ps1 and set
$Scenario = 2 to provision a new tenant application and register it in the catalog.
2. Add a breakpoint in the script by putting your cursor anywhere on line 49 that says,
& $PSScriptRoot\New-TenantApp.ps1 , and then press F9 .

3. Run the script by pressing F5 .


4. After script execution stops at the breakpoint, press F11 to step into the New-TenantApp.ps1 script.
5. Trace the script's execution using the Debug menu options, F10 and F11, to step over or into called
functions.
After the tenant has been provisioned, the new tenant's events website is opened.
You can then inspect the new resources created in the Azure portal.

To stop billing, delete resource groups


When you have finished exploring the sample, delete all the resource groups you created to stop the associated
billing.
Additional resources
To learn more about multi-tenant SaaS database applications, see Design patterns for multi-tenant SaaS
applications.

Next steps
In this tutorial you learned:
How to provision a catalog and register the sample tenants in it.
How to provision a new tenant application and register it in the catalog.
How to delete sample resources to stop related billing.
You can explore how the catalog is used to support various cross-tenant scenarios using the database-per-
tenant version of the Wingtip Tickets SaaS application.
Introduction to a multitenant SaaS app that uses the
database-per-tenant pattern with Azure SQL
Database
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The Wingtip SaaS application is a sample multitenant app. The app uses the database-per-tenant SaaS
application pattern to service multiple tenants. The app showcases features of Azure SQL Database that enable
SaaS scenarios by using several SaaS design and management patterns. To quickly get up and running, the
Wingtip SaaS app deploys in less than five minutes.
Application source code and management scripts are available in the WingtipTicketsSaaS-DbPerTenant GitHub
repo. Before you start, see the general guidance for steps to download and unblock the Wingtip Tickets
management scripts.

Application architecture
The Wingtip SaaS app uses the database-per-tenant model. It uses SQL elastic pools to maximize efficiency. For
provisioning and mapping tenants to their data, a catalog database is used. The core Wingtip SaaS application
uses a pool with three sample tenants, plus the catalog database. The catalog and tenant servers have been
provisioned with DNS aliases. These aliases are used to maintain a reference to the active resources used by the
Wingtip application. These aliases are updated to point to recovery resources in the disaster recovery tutorials.
Completing many of the Wingtip SaaS tutorials results in add-ons to the initial deployment. Add-ons such as
analytic databases and cross-database schema management are introduced.

As you go through the tutorials and work with the app, focus on the SaaS patterns as they relate to the data tier.
In other words, focus on the data tier, and don't overanalyze the app itself. Understanding the implementation of
these SaaS patterns is key to implementing these patterns in your applications. Also consider any necessary
modifications for your specific business requirements.
SQL Database Wingtip SaaS tutorials
After you deploy the app, explore the following tutorials that build on the initial deployment. These tutorials
explore common SaaS patterns that take advantage of built-in features of SQL Database, Azure Synapse
Analytics, and other Azure services. Tutorials include PowerShell scripts with detailed explanations. The
explanations simplify understanding and implementation of the same SaaS management patterns in your
applications.

| Tutorial | Description |
| --- | --- |
| Guidance and tips for the SQL Database multitenant SaaS app example | Download and run PowerShell scripts to prepare parts of the application. |
| Deploy and explore the Wingtip SaaS application | Deploy and explore the Wingtip SaaS application with your Azure subscription. |
| Provision and catalog tenants | Learn how the application connects to tenants by using a catalog database, and how the catalog maps tenants to their data. |
| Monitor and manage performance | Learn how to use monitoring features of SQL Database and set alerts when performance thresholds are exceeded. |
| Monitor with Azure Monitor logs | Learn how to use Azure Monitor logs to monitor large amounts of resources across multiple pools. |
| Restore a single tenant | Learn how to restore a tenant database to a prior point in time. Also learn how to restore to a parallel database, which leaves the existing tenant database online. |
| Manage tenant database schema | Learn how to update schema and update reference data across all tenant databases. |
| Run cross-tenant distributed queries | Create an ad hoc analytics database, and run real-time distributed queries across all tenants. |
| Run analytics on extracted tenant data | Extract tenant data into an analytics database or data warehouse for offline analytics queries. |

Next steps
General guidance and tips when you deploy and use the Wingtip Tickets SaaS app example
Deploy the Wingtip SaaS application
Deploy and explore a multitenant SaaS app that
uses the database-per-tenant pattern with Azure
SQL Database
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, you deploy and explore the Wingtip Tickets SaaS database-per-tenant application (Wingtip). The
app uses a database-per-tenant pattern to store the data of multiple tenants. The app is designed to showcase
features of Azure SQL Database that simplify how to enable SaaS scenarios.
Five minutes after you select Deploy to Azure , you have a multitenant SaaS application. The app includes a
database that runs in Azure SQL Database. The app is deployed with three sample tenants, each with its own
database. All the databases are deployed into a SQL elastic pool. The app is deployed to your Azure subscription.
You have full access to explore and work with the individual components of the app. The application C# source
code and the management scripts are available in the WingtipTicketsSaaS-DbPerTenant GitHub repo.
In this tutorial you learn:
How to deploy the Wingtip SaaS application.
Where to get the application source code and management scripts.
About the servers, pools, and databases that make up the app.
How tenants are mapped to their data with the catalog.
How to provision a new tenant.
How to monitor tenant activity in the app.
A series of related tutorials lets you explore various SaaS design and management patterns. The tutorials build
beyond this initial deployment. When you use the tutorials, you can examine the provided scripts to see how the
different SaaS patterns are implemented. The scripts demonstrate how features of SQL Database simplify the
development of SaaS applications.

Prerequisites
To complete this tutorial, make sure Azure PowerShell is installed. For more information, see Get started with
Azure PowerShell.

Deploy the Wingtip Tickets SaaS application


Plan the names
In the steps of this section, you provide a user value that is used to make sure resource names are globally
unique. You also provide a name for the resource group that contains all the resources created by a deployment
of the app. For a fictitious person named Ann Finley, we suggest:
User : af1 is made up of Ann Finley's initials plus a digit. If you deploy the app a second time, use a different
value. An example is af2.
Resource group : wingtip-dpt-af1 indicates this is the database-per-tenant app. Append the user name af1
to correlate the resource group name with the names of the resources it contains.
Choose your names now, and write them down.
Steps
1. To open the Wingtip Tickets SaaS database-per-tenant deployment template in the Azure portal, select
Deploy to Azure .

2. Enter values in the template for the required parameters.

IMPORTANT
Some authentication and server firewalls are intentionally unsecured for demonstration purposes. We recommend
that you create a new resource group. Don't use existing resource groups, servers, or pools. Don't use this
application, scripts, or any deployed resources for production. Delete this resource group when you're finished
with the application to stop related billing.

Resource group : Select Create new , and provide the unique name you chose earlier for the
resource group.
Location : Select a location from the drop-down list.
User : Use the user name value you chose earlier.
3. Deploy the application.
a. Select to agree to the terms and conditions.
b. Select Purchase .
4. To monitor deployment status, select Notifications (the bell icon to the right of the search box).
Deploying the Wingtip Tickets SaaS app takes approximately five minutes.

Download and unblock the Wingtip Tickets management scripts


While the application deploys, download the source code and management scripts.

IMPORTANT
Executable contents (scripts and DLLs) might be blocked by Windows when .zip files are downloaded from an external
source and extracted. Follow the steps to unblock the .zip file before you extract the scripts. Unblocking makes sure the
scripts are allowed to run.

1. Browse to the WingtipTicketsSaaS-DbPerTenant GitHub repo.


2. Select Clone or download .
3. Select Download ZIP , and then save the file.
4. Right-click the WingtipTicketsSaaS-DbPerTenant-master.zip file, and then select Properties.
5. On the General tab, select Unblock > Apply.
6. Select OK, and extract the files.
Scripts are located in the ...\WingtipTicketsSaaS-DbPerTenant-master\Learning Modules folder.
Update the user configuration file for this deployment
Before you run any scripts, update the resource group and user values in the User Config file. Set these variables
to the values you used during deployment.
1. In the PowerShell ISE, open ...\Learning Modules\UserConfig.psm1
2. Update ResourceGroupName and Name with the specific values for your deployment (on lines 10 and 11
only).
3. Save the changes.
These values are referenced in nearly every script.

Run the application


The app showcases venues that host events. Venue types include concert halls, jazz clubs, and sports clubs. In
Wingtip Tickets, venues are registered as tenants. Being a tenant gives a venue an easy way to list events and to
sell tickets to their customers. Each venue gets a personalized website to list their events and to sell tickets.
Internally in the app, each tenant gets a database deployed into an elastic pool.
A central Events Hub page provides a list of links to the tenants in your deployment.
1. Use the URL to open the Events Hub in your web browser: http://events.wingtip-dpt.<user>.trafficmanager.net. Substitute <user> with your deployment's user value.

2. Select Fabrikam Jazz Club in the Events Hub.


Azure Traffic Manager
The Wingtip application uses Azure Traffic Manager to control the distribution of incoming requests. The URL to
access the events page for a specific tenant uses the following format:
http://events.wingtip-dpt.<user>.trafficmanager.net/fabrikamjazzclub
The parts of the preceding format are explained in the following list.

events.wingtip-dpt: The events part of the Wingtip app. -dpt distinguishes the database-per-tenant implementation of Wingtip Tickets from other implementations. Examples are the single app-per-tenant (-sa) or multitenant database (-mt) implementations.

.<user>: af1 in the example.

.trafficmanager.net/: Traffic Manager, base URL.

fabrikamjazzclub: Identifies the tenant named Fabrikam Jazz Club.

The tenant name is parsed from the URL by the events app.
The tenant name is used to create a key.
The key is used to access the catalog to obtain the location of the tenant's database.
The catalog is implemented by using shard map management.
The Events Hub uses extended metadata in the catalog to construct the list-of-events page URLs for each
tenant.
In a production environment, typically you create a CNAME DNS record to point a company internet domain to
the Traffic Manager DNS name.

NOTE
It may not be immediately obvious why Traffic Manager is used in this tutorial. The goal of this series of tutorials
is to showcase patterns that can handle the scale of a complex production environment. In such a case, for example, you
would have multiple web apps distributed across the globe, co-located with databases, and you would need Traffic
Manager to route between these instances. The geo-restore and geo-replication tutorials are another set of tutorials
that illustrate the use of Traffic Manager. In those tutorials, Traffic Manager helps switch over to a
recovery instance of the SaaS app in the event of a regional outage.

Start generating load on the tenant databases


Now that the app is deployed, let's put it to work.
The Demo-LoadGenerator PowerShell script starts a workload that runs against all tenant databases. The real-
world load on many SaaS apps is sporadic and unpredictable. To simulate this type of load, the generator
produces a load with randomized spikes or bursts of activity on each tenant. The bursts occur at randomized
intervals. It takes several minutes for the load pattern to emerge. Let the generator run for at least three or four
minutes before you monitor the load.
1. In the PowerShell ISE, open the ...\Learning Modules\Utilities\Demo-LoadGenerator.ps1 script.
2. Press F5 to run the script and start the load generator. Leave the default parameter values for now.
3. Sign in to your Azure account, and select the subscription you want to use, if necessary.
The load generator script starts a background job for each database in the catalog and then stops. If you rerun
the load generator script, it stops any background jobs that are running before it starts new ones.
Monitor the background jobs
If you want to control and monitor the background jobs, use the following cmdlets:
Get-Job
Receive-Job
Stop-Job
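For example, the following commands, shown here as a minimal sketch, list the generator's background jobs, show the output one job has produced so far, and then stop and remove all jobs. The job ID passed to Receive-Job is illustrative; check Get-Job for the IDs in your session.

# List all background jobs started by the load generator
Get-Job

# Show the output produced so far by one job; -Keep retains the output for later inspection
Receive-Job -Id 3 -Keep

# Stop and remove all background jobs when you're finished
Get-Job | Stop-Job
Get-Job | Remove-Job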

Demo-LoadGenerator.ps1 actions


Demo-LoadGenerator.ps1 mimics an active workload of customer transactions. The following steps describe the
sequence of actions that Demo-LoadGenerator.ps1 initiates:
1. Demo-LoadGenerator.ps1 starts LoadGenerator.ps1 in the foreground.
Both .ps1 files are stored under the folders Learning Modules\Utilities\.
2. LoadGenerator.ps1 loops through all tenant databases in the catalog.
3. LoadGenerator.ps1 starts a background PowerShell job for each tenant database:
By default, the background jobs run for 120 minutes.
Each job causes a CPU-based load on one tenant database by executing sp_CpuLoadGenerator. The
intensity and duration of the load vary depending on $DemoScenario .
sp_CpuLoadGenerator loops over a SQL SELECT statement that causes a high CPU load. The time
interval between executions of the SELECT is controlled by parameter values to create a controllable
CPU load. Load levels and intervals are randomized to simulate more realistic loads. A minimal sketch
of one of these per-database jobs follows below.
The stored procedure's .sql file is stored under WingtipTenantDB\dbo\StoredProcedures\.
4. If $OneTime = $false , the load generator starts the background jobs and then continues to run. Every 10
seconds, it monitors for any new tenants that are provisioned. If you set $OneTime = $true , the
LoadGenerator starts the background jobs and then stops running in the foreground. For this tutorial,
leave $OneTime = $false .
To stop or restart the load generator, use Ctrl-C or Stop Operation (Ctrl-Break).
If you leave the load generator running in the foreground, use another PowerShell ISE instance to run
other PowerShell scripts.
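The following is a minimal sketch of what one of these per-database jobs does. The parameter handling, connection details, and stored procedure arguments are simplified assumptions here; LoadGenerator.ps1 and sp_CpuLoadGenerator in the repo remain the authoritative implementation.

# Illustrative sketch of a single per-tenant load job (not the actual LoadGenerator.ps1 code)
Start-Job -ScriptBlock {
    param($server, $database, $user, $password, $durationMinutes)
    $end = (Get-Date).AddMinutes($durationMinutes)
    while ((Get-Date) -lt $end) {
        # Run the CPU load stored procedure in the tenant database (requires the SqlServer module in the job session)
        Invoke-Sqlcmd -ServerInstance $server -Database $database -Username $user -Password $password `
            -Query "EXEC dbo.sp_CpuLoadGenerator" -QueryTimeout 300
        # Sleep for a randomized interval to simulate sporadic, bursty tenant activity
        Start-Sleep -Seconds (Get-Random -Minimum 30 -Maximum 300)
    }
} -ArgumentList "tenants1-dpt-<user>.database.windows.net", "fabrikamjazzclub", "<admin login>", "<password>", 120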

Before you continue with the next section, leave the load generator running in the job-invoking state.

Provision a new tenant


The initial deployment creates three sample tenants. Now you create another tenant to see the impact on the
deployed application. In the Wingtip app, the workflow to provision new tenants is explained in the Provision
and catalog tutorial. In this phase, you create a new tenant, which takes less than one minute.
1. Open a new PowerShell ISE.
2. Open ...\Learning Modules\Provision and Catalog\Demo-ProvisionAndCatalog.ps1.
3. To run the script, press F5. Leave the default values for now.

NOTE
Many Wingtip SaaS scripts use $PSScriptRoot to browse folders to call functions in other scripts. This variable is
evaluated only when the full script is executed by pressing F5. Highlighting and running a selection with F8 can
result in errors. To run the scripts, press F5.

The new tenant database is:


Created in a SQL elastic pool.
Initialized.
Registered in the catalog.
After successful provisioning, the Events site of the new tenant appears in your browser.
Refresh the Events Hub to make the new tenant appear in the list.

Explore the servers, pools, and tenant databases


Now that you've started running a load against the collection of tenants, let's look at some of the resources that
were deployed.
1. In the Azure portal, browse to your list of SQL servers. Then open the catalog-dpt-<USER> server.
The catalog server contains two databases, tenantcatalog and basetenantdb (a template database
that's copied to create new tenants).
2. Go back to your list of SQL servers.
3. Open the tenants1-dpt-<USER> server that holds the tenant databases.
4. See the following items:
Each tenant database is an Elastic Standard database in a 50-eDTU standard pool.
The Red Maple Racing database is the tenant database you provisioned previously.
Monitor the pool
After LoadGenerator.ps1 runs for several minutes, enough data should be available to start looking at some
monitoring capabilities. These capabilities are built into pools and databases.
Browse to the server tenants1-dpt-<user> , and select Pool1 to view resource utilization for the pool. In the
following charts, the load generator ran for one hour.
The first chart, labeled Resource utilization , shows pool eDTU utilization.
The second chart shows eDTU utilization of the five most active databases in the pool.
The two charts illustrate that elastic pools and SQL Database are well suited to unpredictable SaaS application
workloads. The charts show that four databases are each bursting to as much as 40 eDTUs, and yet all the
databases are comfortably supported by a 50-eDTU pool. The 50-eDTU pool can support even heavier
workloads. If the databases are provisioned as single databases, each one needs to be an S2 (50 DTU) to support
the bursts. The cost of four single S2 databases is nearly three times the price of the pool. In real-world
situations, SQL Database customers run up to 500 databases in 200 eDTU pools. For more information, see the
Performance monitoring tutorial.

Additional resources
For more information, see additional tutorials that build on the Wingtip Tickets SaaS database-per-tenant
application.
To learn about elastic pools, see What is an Azure SQL elastic pool?.
To learn about elastic jobs, see Manage scaled-out cloud databases.
To learn about multitenant SaaS applications, see Design patterns for multitenant SaaS applications.

Next steps
In this tutorial you learned:
How to deploy the Wingtip Tickets SaaS application.
About the servers, pools, and databases that make up the app.
How tenants are mapped to their data with the catalog.
How to provision new tenants.
How to view pool utilization to monitor tenant activity.
How to delete sample resources to stop related billing.
Next, try the Provision and catalog tutorial.
Learn how to provision new tenants and register
them in the catalog
7/12/2022 • 9 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, you learn about the provision and catalog SaaS patterns and how they're implemented in
the Wingtip Tickets SaaS database-per-tenant application. You create and initialize new tenant databases and
register them in the application's tenant catalog. The catalog is a database that maintains the mapping between
the SaaS application's many tenants and their data. The catalog plays an important role in directing application
and management requests to the correct database.
In this tutorial, you learn how to:
Provision a single new tenant.
Provision a batch of additional tenants.
To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip Tickets SaaS database-per-tenant app is deployed. To deploy it in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS database-per-tenant application.
Azure PowerShell is installed. For more information, see Get started with Azure PowerShell.

Introduction to the SaaS catalog pattern


In a database-backed multitenant SaaS application, it's important to know where information for each tenant is
stored. In the SaaS catalog pattern, a catalog database is used to hold the mapping between each tenant and the
database in which their data is stored. This pattern applies whenever tenant data is distributed across multiple
databases.
Each tenant is identified by a key in the catalog, which is mapped to the location of their database. In the Wingtip
Tickets app, the key is formed from a hash of the tenant's name. This scheme allows the app to construct the key
from the tenant name included in the application URL. Other tenant key schemes can be used.
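As an illustration of the idea only (the actual hash function used by the Wingtip scripts lives in the CatalogAndDatabaseManagement.psm1 module and may differ in detail), a tenant key could be derived from a tenant name like this:

# Illustrative sketch: derive an integer tenant key from a normalized tenant name
function Get-TenantKey {
    param([string] $TenantName)
    # Normalize so 'Fabrikam Jazz Club' and 'fabrikamjazzclub' produce the same key
    $normalized = ($TenantName -replace '\s', '').ToLowerInvariant()
    # Hash the normalized name and take the first four bytes as a 32-bit integer key
    $bytes = [System.Text.Encoding]::UTF8.GetBytes($normalized)
    $hash  = [System.Security.Cryptography.SHA256]::Create().ComputeHash($bytes)
    return [System.BitConverter]::ToInt32($hash, 0)
}

Get-TenantKey -TenantName 'Fabrikam Jazz Club'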
The catalog allows the name or location of the database to be changed with minimal impact on the application.
In a multitenant database model, this capability also accommodates moving a tenant between databases. The
catalog also can be used to indicate whether a tenant or database is offline for maintenance or other actions.
This capability is explored in the Restore single tenant tutorial.
The catalog also can store additional tenant or database metadata, such as the schema version, service plan, or
SLAs offered to tenants. The catalog can store other information that enables application management,
customer support, or DevOps.
Beyond the SaaS application, the catalog can enable database tools. In the Wingtip Tickets SaaS database-per-
tenant sample, the catalog is used to enable cross-tenant query, which is explored in the Ad hoc reporting
tutorial. Cross-database job management is explored in the Schema management and Tenant analytics tutorials.
In the Wingtip Tickets SaaS samples, the catalog is implemented by using the Shard Management features of
the Elastic Database client library (EDCL). The EDCL is available in Java and the .NET Framework. The EDCL
enables an application to create, manage, and use a database-backed shard map.
A shard map contains a list of shards (databases) and the mapping between keys (tenants) and shards. EDCL
functions are used during tenant provisioning to create the entries in the shard map. They're used at run time by
applications to connect to the correct database. EDCL caches connection information to minimize traffic to the
catalog database and speed up the application.

IMPORTANT
The mapping data is accessible in the catalog database, but don't edit it. Edit mapping data by using Elastic Database
Client Library APIs only. Directly manipulating the mapping data risks corrupting the catalog and isn't supported.

Introduction to the SaaS provisioning pattern


When you add a new tenant in a SaaS application that uses a single-tenant database model, you must provision
a new tenant database. The database must be created in the appropriate location and service tier. It also must be
initialized with the appropriate schema and reference data. And it must be registered in the catalog under the
appropriate tenant key.
Different approaches to database provisioning can be used. You can execute SQL scripts, deploy a bacpac, or
copy a template database.
Database provisioning needs to be part of your schema management strategy. You must make sure that new
databases are provisioned with the latest schema. This requirement is explored in the Schema management
tutorial.
The Wingtip Tickets database-per-tenant app provisions new tenants by copying a template database named
basetenantdb, which is deployed on the catalog server. Provisioning can be integrated into the application as
part of a sign-up experience. It also can be supported offline by using scripts. This tutorial explores provisioning
by using PowerShell.
Provisioning scripts copy the basetenantdb database to create a new tenant database in an elastic pool. The
tenant database is created in the tenant server mapped to the newtenant DNS alias. This alias maintains a
reference to the server used to provision new tenants and is updated to point to a recovery tenant server in the
disaster recovery tutorials (DR using georestore, DR using georeplication). The scripts then initialize the
database with tenant-specific information and register it in the catalog shard map. Tenant databases are given
names based on the tenant name. This naming scheme isn't a critical part of the pattern. The catalog maps the
tenant key to the database name, so any naming convention can be used.
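Outside of the template-driven flow used by the scripts, the same copy operation can be expressed directly with Az PowerShell. The following is a sketch only; the resource group, server, and database names are assumptions based on the naming used in this tutorial series:

# Sketch: copy the basetenantdb template database into a new tenant database in the pool
New-AzSqlDatabaseCopy `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "catalog-dpt-<user>" `
    -DatabaseName "basetenantdb" `
    -CopyResourceGroupName "wingtip-dpt-<user>" `
    -CopyServerName "tenants1-dpt-<user>" `
    -CopyDatabaseName "bushwillowblues" `
    -ElasticPoolName "Pool1"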

Get the Wingtip Tickets SaaS database-per-tenant application scripts


The Wingtip Tickets SaaS scripts and application source code are available in the WingtipTicketsSaaS-
DbPerTenant GitHub repo. Check out the general guidance for steps to download and unblock the Wingtip
Tickets SaaS scripts.

Provision and catalog detailed walkthrough


To understand how the Wingtip Tickets application implements new tenant provisioning, add a breakpoint and
follow the workflow while you provision a tenant.
1. In the PowerShell ISE, open ...\Learning Modules\ProvisionAndCatalog\Demo-ProvisionAndCatalog.ps1
and set the following parameters:
$TenantName = the name of the new venue (for example, Bushwillow Blues).
$VenueType = one of the predefined venue types: blues, classicalmusic, dance, jazz, judo, motor
racing, multipurpose, opera, rockmusic, soccer.
$DemoScenario = 1 , Provision a single tenant.
2. To add a breakpoint, put your cursor anywhere on the line that says New-Tenant `. Then press F9.

3. To run the script, press F5.


4. After the script execution stops at the breakpoint, press F11 to step into the code.

Trace the script's execution by using the Debug menu options. Press F10 and F11 to step over or into the called
functions. For more information about debugging PowerShell scripts, see Tips on working with and debugging
PowerShell scripts.
You don't need to explicitly follow this workflow. It explains how to debug the script.
Import the CatalogAndDatabaseManagement.psm1 module. It provides a catalog and tenant-level
abstraction over the Shard Management functions. This module encapsulates much of the catalog pattern
and is worth exploring.
Import the SubscriptionManagement.psm1 module. It contains functions for signing in to Azure
and selecting the Azure subscription you want to work with.
Get configuration details. Step into Get-Configuration by using F11, and see how the app config is
specified. Resource names and other app-specific values are defined here. Don't change these values until
you are familiar with the scripts.
Get the catalog object. Step into Get-Catalog, which composes and returns a catalog object that's used
in the higher-level script. This function uses Shard Management functions that are imported from
AzureShardManagement.psm1 . The catalog object is composed of the following elements:
$catalogServerFullyQualifiedName is constructed by using the standard stem plus your user name:
catalog-<user>.database.windows.net.
$catalogDatabaseName is retrieved from the config: tenantcatalog.
$shardMapManager object is initialized from the catalog database.
$shardMap object is initialized from the tenantcatalog shard map in the catalog database. A catalog
object is composed and returned. It's used in the higher-level script.
Calculate the new tenant key. A hash function is used to create the tenant key from the tenant name.
Check if the tenant key already exists. The catalog is checked to make sure the key is available.
The tenant database is provisioned with New-TenantDatabase. Use F11 to step into how the
database is provisioned by using an Azure Resource Manager template.
The database name is constructed from the tenant name to make it clear which shard belongs to which
tenant. You also can use other database naming conventions. A Resource Manager template creates a
tenant database by copying a template database (baseTenantDB) on the catalog server. As an alternative,
you can create a database and initialize it by importing a bacpac. Or you can execute an initialization
script from a well-known location.
The Resource Manager template is in the …\Learning Modules\Common\ folder:
tenantdatabasecopytemplate.json
The tenant database is further initialized. The venue (tenant) name and the venue type are added.
You also can do other initialization here.
The tenant database is registered in the catalog. It's registered with Add-TenantDatabaseToCatalog
by using the tenant key. Use F11 to step into the details:
The catalog database is added to the shard map (the list of known databases).
The mapping that links the key value to the shard is created.
Additional metadata about the tenant (the venue's name) is added to the Tenants table in the catalog.
The Tenants table isn't part of the Shard Management schema, and it isn't installed by the EDCL. This
table illustrates how the catalog database can be extended to support additional application-specific
data.
After provisioning completes, execution returns to the original Demo-ProvisionAndCatalog script. The Events
page opens for the new tenant in the browser.
Provision a batch of tenants
This exercise provisions a batch of 17 tenants. We recommend that you provision this batch of tenants before
starting other Wingtip Tickets SaaS database-per-tenant tutorials, so that you have more than just a few databases
to work with.
1. In the PowerShell ISE, open ...\Learning Modules\ProvisionAndCatalog\Demo-ProvisionAndCatalog.ps1.
Change the $DemoScenario parameter to 3:
$DemoScenario = 3 , Provision a batch of tenants.
2. To run the script, press F5.
The script deploys a batch of additional tenants. It uses an Azure Resource Manager template that controls the
batch and delegates provisioning of each database to a linked template. Using templates in this way allows
Azure Resource Manager to broker the provisioning process for your script. The templates provision databases
in parallel and handle retries, if needed. The script is idempotent, so if it fails or stops for any reason, run it again.
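As a rough sketch of what drives this (the template file name and parameter names below are illustrative, not the exact ones used by New-TenantBatch), the batch is submitted to Azure Resource Manager with a single deployment call:

# Sketch: submit a batch-provisioning template; the linked template it references creates one database per tenant
New-AzResourceGroupDeployment `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -TemplateFile ".\tenantbatchtemplate.json" `
    -TemplateParameterObject @{
        newTenants = @("Salix Salsa", "Mallard Lounge", "Sycamore Sports")
        serverName = "tenants1-dpt-<user>"
        poolName   = "Pool1"
    }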
Verify the batch of tenants that successfully deployed
In the Azure portal, browse to your list of servers and open the tenants1-dpt-<user> server. Select SQL databases,
and verify that the batch of 17 additional databases is now in the list.
Other provisioning patterns
Other provisioning patterns not included in this tutorial:
Pre-provisioning databases: The pre-provisioning pattern exploits the fact that databases in an elastic pool
don't add extra cost. Billing is for the elastic pool, not the databases. Idle databases consume no resources. By
pre-provisioning databases in a pool and allocating them when needed, you can reduce the time to add tenants.
The number of databases pre-provisioned can be adjusted as needed to keep a buffer suitable for the
anticipated provisioning rate.
Auto-provisioning: In the auto-provisioning pattern, a provisioning service provisions servers, pools, and
databases automatically, as needed. If you want, you can include pre-provisioning databases in elastic pools. If
databases are decommissioned and deleted, gaps in elastic pools can be filled by the provisioning service. Such
a service can be simple or complex, such as handling provisioning across multiple geographies and setting up
geo-replication for disaster recovery.
With the auto-provisioning pattern, a client application or script submits a provisioning request to a queue to be
processed by the provisioning service. It then polls the service to determine completion. If pre-provisioning is
used, requests are handled quickly. The service provisions a replacement database in the background.

Next steps
In this tutorial you learned how to:
Provision a single new tenant.
Provision a batch of additional tenants.
Step into the details of provisioning tenants and registering them into the catalog.
Try the Performance monitoring tutorial.
Additional resources
Additional tutorials that build on the Wingtip Tickets SaaS database-per-tenant application
Elastic database client library
Debug scripts in the Windows PowerShell ISE
Monitor and manage performance of Azure SQL
Database in a multi-tenant SaaS app
7/12/2022 • 15 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, several key performance management scenarios used in SaaS applications are explored. Using a
load generator to simulate activity across all tenant databases, the built-in monitoring and alerting features of
SQL Database and elastic pools are demonstrated.
The Wingtip Tickets SaaS Database Per Tenant app uses a single-tenant data model, where each venue (tenant)
has their own database. Like many SaaS applications, the anticipated tenant workload pattern is unpredictable
and sporadic. In other words, ticket sales may occur at any time. To take advantage of this typical database usage
pattern, tenant databases are deployed into elastic pools. Elastic pools optimize the cost of a solution by sharing
resources across many databases. With this type of pattern, it's important to monitor database and pool
resource usage to ensure that loads are reasonably balanced across pools. You also need to ensure that
individual databases have adequate resources, and that pools are not hitting their eDTU limits. This tutorial
explores ways to monitor and manage databases and pools, and how to take corrective action in response to
variations in workload.
In this tutorial you learn how to:
Simulate usage on the tenant databases by running a provided load generator
Monitor the tenant databases as they respond to the increase in load
Scale up the Elastic pool in response to the increased database load
Provision a second Elastic pool to load balance database activity
To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip Tickets SaaS Database Per Tenant app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS Database Per Tenant application
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell

Introduction to SaaS performance management patterns


Managing database performance consists of compiling and analyzing performance data, and then reacting to
this data by adjusting parameters to maintain an acceptable response time for your application. When hosting
multiple tenants, Elastic pools are a cost-effective way to provide and manage resources for a group of
databases with unpredictable workloads. With certain workload patterns, as few as two S3 databases can benefit
from being managed in a pool.
Pools, and the databases in pools, should be monitored to ensure they stay within acceptable ranges of
performance. Tune the pool configuration to meet the needs of the aggregate workload of all databases,
ensuring that the pool eDTUs are appropriate for the overall workload. Adjust the per-database min and per-
database max eDTU values to appropriate values for your specific application requirements.
Performance management strategies
To avoid having to manually monitor performance, it's most effective to set alerts that trigger when
databases or pools stray out of normal ranges.
To respond to short-term fluctuations in the aggregate compute size of a pool, the pool eDTU level can be
scaled up or down. If this fluctuation occurs on a regular or predictable basis, scaling the pool can be
scheduled to occur automatically. For example, scale down when you know your workload is light,
maybe overnight, or during weekends.
To respond to longer-term fluctuations, or changes in the number of databases, individual databases can
be moved into other pools.
To respond to short-term increases in the load on an individual database, the database can be taken out of
a pool and assigned an individual compute size. Once the load is reduced, the database can then be
returned to the pool. When this is known in advance, databases can be moved preemptively to ensure the
database always has the resources it needs, and to avoid impact on other databases in the pool. If this
requirement is predictable, such as a venue experiencing a rush of ticket sales for a popular event, then this
management behavior can be integrated into the application.
The Azure portal provides built-in monitoring and alerting on most resources. Monitoring and alerting is
available on databases and pools. This built-in monitoring and alerting is resource-specific, so it's convenient to
use for small numbers of resources, but is not very convenient when working with many resources.
For high-volume scenarios, where you're working with many resources, Azure Monitor logs can be used. This is
a separate Azure service that provides analytics over emitted logs gathered in a Log Analytics workspace. Azure
Monitor logs can collect telemetry from many services and be used to query and set alerts.

Get the Wingtip Tickets SaaS Database Per Tenant application scripts
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are available in the
WingtipTicketsSaaS-DbPerTenant GitHub repo. Check out the general guidance for steps to download and
unblock the Wingtip Tickets SaaS scripts.

Provision additional tenants


While pools can be cost-effective with just two S3 databases, the more databases that are in the pool the more
cost-effective the averaging effect becomes. For a good understanding of how performance monitoring and
management works at scale, this tutorial requires you have at least 20 databases deployed.
If you already provisioned a batch of tenants in a prior tutorial, skip to the Simulate usage on all tenant
databases section.
1. In the PowerShell ISE , open …\Learning Modules\Performance Monitoring and Management\Demo-
PerformanceMonitoringAndManagement.ps1. Keep this script open as you'll run several scenarios during
this tutorial.
2. Set $DemoScenario = 1 , Provision a batch of tenants
3. Press F5 to run the script.
The script will deploy 17 tenants in less than five minutes.
The New-TenantBatch script uses a nested or linked set of Resource Manager templates to create a batch of
tenants. By default, the templates copy the basetenantdb database on the catalog server to create each new tenant
database, register it in the catalog, and initialize it with the tenant name and venue type. This is consistent
with the way the app provisions a new tenant. Any changes made to basetenantdb are applied to any new tenants
provisioned thereafter. See the Schema Management tutorial to learn how to make schema changes to existing
tenant databases (including the basetenantdb database).

Simulate usage on all tenant databases


The Demo-PerformanceMonitoringAndManagement.ps1 script is provided that simulates a workload running
against all tenant databases. The load is generated using one of the available load scenarios:

DEMO SCENARIO

2: Generate normal intensity load (approximately 40 DTU)

3: Generate load with longer and more frequent bursts per database

4: Generate load with higher DTU bursts per database (approximately 80 DTU)

5: Generate a normal load plus a high load on a single tenant (approximately 95 DTU)

6: Generate unbalanced load across multiple pools

The load generator applies a synthetic CPU-only load to every tenant database. The generator starts a job for
each tenant database, which calls a stored procedure periodically that generates the load. The load levels (in
eDTUs), duration, and intervals are varied across all databases, simulating unpredictable tenant activity.
1. In the PowerShell ISE , open …\Learning Modules\Performance Monitoring and Management\Demo-
PerformanceMonitoringAndManagement.ps1. Keep this script open as you'll run several scenarios during
this tutorial.
2. Set $DemoScenario = 2 , Generate normal intensity load.
3. Press F5 to apply a load to all your tenant databases.
Wingtip Tickets SaaS Database Per Tenant is a SaaS app, and the real-world load on a SaaS app is typically
sporadic and unpredictable. To simulate this, the load generator produces a randomized load distributed across
all tenants. Several minutes are needed for the load pattern to emerge, so run the load generator for 3-5
minutes before attempting to monitor the load in the following sections.
IMPORTANT
The load generator is running as a series of jobs in your local PowerShell session. Keep the Demo-
PerformanceMonitoringAndManagement.ps1 tab open! If you close the tab, or suspend your machine, the load generator
stops. The load generator remains in a job-invoking state where it generates load on any new tenants that are
provisioned after the generator is started. Use Ctrl-C to stop invoking new jobs and exit the script. The load generator will
continue to run, but only on existing tenants.

Monitor resource usage using the Azure portal


To monitor the resource usage that results from the load being applied, open the portal to the pool containing
the tenant databases:
1. Open the Azure portal and browse to the tenants1-dpt-<USER> server.
2. Scroll down and locate elastic pools and click Pool1 . This pool contains all the tenant databases created so
far.
Observe the Elastic pool monitoring and Elastic database monitoring charts.
The pool's resource utilization is the aggregate database utilization for all databases in the pool. The database
chart shows the five hottest databases:

Because there are additional databases in the pool beyond the top five, the pool utilization shows activity that is
not reflected in the top five databases chart. For additional details, click Database Resource Utilization :

Set performance alerts on the pool


Set an alert on the pool that triggers on >75% utilization as follows:
1. Open Pool1 (on the tenants1-dpt-<user> server) in the Azure portal.
2. Click Alert Rules, and then click + Add alert:
3. Provide a name, such as High DTU.
4. Set the following values:
Metric = eDTU percentage
Condition = greater than
Threshold = 75
Period = Over the last 30 minutes
5. Add an email address to the Additional administrator email(s) box and click OK .
Scale up a busy pool
If the aggregate load level increases on a pool to the point that it maxes out the pool and reaches 100% eDTU
usage, then individual database performance is affected, potentially slowing query response times for all
databases in the pool.
Short-term, consider scaling up the pool to provide additional resources, or removing databases from the pool
(moving them to other pools, or out of the pool to a stand-alone service tier).
Longer term, consider optimizing queries or index usage to improve database performance. Depending on the
application's sensitivity to performance issues, it's a best practice to scale a pool up before it reaches 100% eDTU
usage. Use an alert to warn you in advance.
You can simulate a busy pool by increasing the load produced by the generator. Causing the databases to burst
more frequently and for longer increases the aggregate load on the pool without changing the requirements of
the individual databases. Scaling up the pool is easily done in the portal or from PowerShell. This exercise
uses the portal.
1. Set $DemoScenario = 3 , Generate load with longer and more frequent bursts per database to increase
the intensity of the aggregate load on the pool without changing the peak load required by each
database.
2. Press F5 to apply a load to all your tenant databases.
3. Go to Pool1 in the Azure portal.
Monitor the increased pool eDTU usage on the upper chart. It takes a few minutes for the new higher load to
kick in, but you should quickly see the pool start to hit max utilization, and as the load steadies into the new
pattern, it rapidly overloads the pool.
1. To scale up the pool, click Configure pool at the top of the Pool1 page.
2. Adjust the Pool eDTU setting to 100. Changing the pool eDTU does not change the per-database settings
(which remain at 50 eDTU max per database). You can see the per-database settings on the right side of the
Configure pool page.
3. Click Save to submit the request to scale the pool.
Go back to Pool1 > Overview to view the monitoring charts. Monitor the effect of providing the pool with
more resources (although with few databases and a randomized load it's not always easy to see conclusively
until you run for some time). While you are looking at the charts, bear in mind that 100% on the upper chart
now represents 100 eDTUs, while on the lower chart 100% is still 50 eDTUs because the per-database max is
still 50 eDTUs.
Databases remain online and fully available throughout the process. At the last moment as each database is
ready to be enabled with the new pool eDTU, any active connections are broken. Application code should always
be written to retry dropped connections, and so will reconnect to the database in the scaled-up pool.
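If you prefer to script the scale operation instead of using the portal, the equivalent call is roughly the following (the resource group and server names follow this tutorial's naming):

# Sketch: scale Pool1 up to 100 eDTUs; per-database min/max settings stay as they are unless you change them
Set-AzSqlElasticPool `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" `
    -ElasticPoolName "Pool1" `
    -Dtu 100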

Load balance between pools


As an alternative to scaling up the pool, create a second pool and move databases into it to balance the load
between the two pools. To do this, the new pool must be created on the same server as the first.
1. In the Azure portal, open the tenants1-dpt-<USER> server.
2. Click + New pool to create a pool on the current server.
3. On the Elastic pool template:
a. Set Name to Pool2.
b. Leave the pricing tier as Standard Pool .
c. Click Configure pool ,
d. Set Pool eDTU to 50 eDTU.
e. Click Add databases to see a list of databases on the server that can be added to Pool2.
f. Select any 10 databases to move these to the new pool, and then click Select . If you've been
running the load generator, the service already knows that your performance profile requires a
larger pool than the default 50 eDTU size and recommends starting with a 100 eDTU setting.
g. For this tutorial, leave the default at 50 eDTUs, and click Select again.
h. Select OK to create the new pool and to move the selected databases into it.
Creating the pool and moving the databases takes a few minutes. As databases are moved they remain online
and fully accessible until the very last moment, at which point any open connections are closed. As long as you
have some retry logic, clients will then connect to the database in the new pool.
Browse to Pool2 (on the tenants1-dpt-<user> server) to open the pool and monitor its performance. If you
don't see it, wait for provisioning of the new pool to complete.
You now see that resource usage on Pool1 has dropped and that Pool2 is now similarly loaded.
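The same rebalancing can be scripted. As a hedged sketch (the tenant database name is illustrative), creating the second pool and moving a database into it looks like this:

# Sketch: create a second 50-eDTU Standard pool on the same server
New-AzSqlElasticPool `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" `
    -ElasticPoolName "Pool2" `
    -Edition "Standard" `
    -Dtu 50 `
    -DatabaseDtuMin 0 `
    -DatabaseDtuMax 50

# Sketch: move one tenant database from Pool1 into Pool2
Set-AzSqlDatabase `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" `
    -DatabaseName "salixsalsa" `
    -ElasticPoolName "Pool2"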

Manage performance of an individual database


If an individual database in a pool experiences a sustained high load, depending on the pool configuration, it
may tend to dominate the resources in the pool and impact other databases. If the activity is likely to continue
for some time, the database can be temporarily moved out of the pool. This allows the database to have the
extra resources it needs, and isolates it from the other databases.
This exercise simulates the effect of Contoso Concert Hall experiencing a high load when tickets go on sale for a
popular concert.
1. In the PowerShell ISE , open the …\Demo-PerformanceMonitoringAndManagement.ps1 script.
2. Set $DemoScenario = 5, Generate a normal load plus a high load on a single tenant
(approximately 95 DTU).
3. Set $SingleTenantDatabaseName = contosoconcerthall
4. Execute the script using F5 .
5. In the Azure portal, browse to the list of databases on the tenants1-dpt-<user> server.
6. Click on the contosoconcerthall database.
7. Click on the pool that contosoconcerthall is in. Locate the pool in the Elastic pool section.
8. Inspect the Elastic pool monitoring chart and look for the increased pool eDTU usage. After a minute
or two, the higher load should start to kick in, and you should quickly see that the pool hits 100%
utilization.
9. Inspect the Elastic database monitoring display, which shows the hottest databases in the past hour.
The contosoconcerthall database should soon appear as one of the five hottest databases.
10. Click on the Elastic database monitoring chart and it opens the Database Resource Utilization
page where you can monitor any of the databases. This lets you isolate the display for the
contosoconcerthall database.
11. From the list of databases, click contosoconcerthall.
12. Click Pricing Tier (scale DTUs) to open the Configure performance page where you can set a stand-
alone compute size for the database.
13. Click on the Standard tab to open the scale options in the Standard tier.
14. Slide the DTU slider to right to select 100 DTUs. Note this corresponds to the service objective, S3 .
15. Click Apply to move the database out of the pool and make it a Standard S3 database.
16. Once scaling is complete, monitor the effect on the contosoconcerthall database and Pool1 on the elastic
pool and database blades.
Once the high load on the contosoconcerthall database subsides, you should promptly return it to the pool to
reduce its cost. If it's unclear when that will happen, you could set an alert on the database that triggers when
its DTU usage drops below the per-database max on the pool. Moving a database into a pool is described in
exercise 5.
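The same move can be scripted, which is useful if you automate it with an alert or an Azure Automation runbook. A minimal sketch of moving the database out to a standalone S3 and later returning it to the pool:

# Sketch: move contosoconcerthall out of the pool to a standalone Standard S3 (100 DTU) database
Set-AzSqlDatabase `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" `
    -DatabaseName "contosoconcerthall" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S3"

# Sketch: when the high load subsides, return the database to Pool1
Set-AzSqlDatabase `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" `
    -DatabaseName "contosoconcerthall" `
    -ElasticPoolName "Pool1"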

Other performance management patterns


Pre-emptive scaling: In the exercise above where you explored how to scale an isolated database, you knew
which database to look for. If the management of Contoso Concert Hall had informed Wingtips of the
impending ticket sale, the database could have been moved out of the pool preemptively. Otherwise, it would
likely have required an alert on the pool or the database to spot what was happening. You wouldn't want to
learn about this from the other tenants in the pool complaining of degraded performance. And if the tenant can
predict how long they will need additional resources, you can set up an Azure Automation runbook to move the
database out of the pool and then back in again on a defined schedule.
Tenant self-service scaling: Because scaling is a task easily called via the management API, you can easily
build the ability to scale tenant databases into your tenant-facing application, and offer it as a feature of your
SaaS service. For example, let tenants self-administer scaling up and down, perhaps linked directly to their
billing!
Scaling a pool up and down on a schedule to match usage patterns
Where aggregate tenant usage follows predictable usage patterns, you can use Azure Automation to scale a
pool up and down on a schedule. For example, scale a pool down after 6pm and up again before 6am on
weekdays when you know there is a drop in resource requirements.

Next steps
In this tutorial you learned how to:
Simulate usage on the tenant databases by running a provided load generator
Monitor the tenant databases as they respond to the increase in load
Scale up the Elastic pool in response to the increased database load
Provision a second Elastic pool to load balance the database activity
Restore a single tenant tutorial

Additional resources
Additional tutorials that build upon the Wingtip Tickets SaaS Database Per Tenant application deployment
SQL Elastic pools
Azure automation
Azure Monitor logs - Setting up and using Azure Monitor logs tutorial
Set up and use Azure Monitor logs with a
multitenant Azure SQL Database SaaS app
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, you set up and use Azure Monitor logs to monitor elastic pools and databases. This tutorial builds
on the Performance monitoring and management tutorial. It shows how to use Azure Monitor logs to augment
the monitoring and alerting provided in the Azure portal. Azure Monitor logs supports monitoring thousands of
elastic pools and hundreds of thousands of databases. Azure Monitor logs provides a single monitoring
solution, which can integrate monitoring of different applications and Azure services across multiple Azure
subscriptions.

NOTE
This article was recently updated to use the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a
Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the
terminology to better reflect the role of logs in Azure Monitor. See Azure Monitor terminology changes for details.

In this tutorial you learn how to:


Install and configure Azure Monitor logs.
Use Azure Monitor logs to monitor pools and databases.
To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip Tickets SaaS database-per-tenant app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS database-per-tenant application.
Azure PowerShell is installed. For more information, see Get started with Azure PowerShell.
See the Performance monitoring and management tutorial for a discussion of SaaS scenarios and patterns and
how they affect the requirements on a monitoring solution.

Monitor and manage database and elastic pool performance with


Azure Monitor logs
For Azure SQL Database, monitoring and alerting is available on databases and pools in the Azure portal. This
built-in monitoring and alerting is convenient, but it's also resource-specific. That means it's less well suited to
monitor large installations or provide a unified view across resources and subscriptions.
For high-volume scenarios, you can use Azure Monitor logs for monitoring and alerting. Azure Monitor is a
separate Azure service that enables analytics over logs gathered in a workspace from potentially many services.
Azure Monitor logs provides a built-in query language and data visualization tools that allow operational data
analytics. The SQL Analytics solution provides several predefined elastic pool and database monitoring and
alerting views and queries. Azure Monitor logs also provides a custom view designer.
OMS workspaces are now referred to as Log Analytics workspaces. Log Analytics workspaces and analytics
solutions open in the Azure portal. The Azure portal is the newer access point, but it might be what's behind the
Operations Management Suite portal in some areas.
Create performance diagnostic data by simulating a workload on your tenants
1. In the PowerShell ISE, open ..\WingtipTicketsSaaS-DbPerTenant-master\Learning Modules\Performance
Monitoring and Management\Demo-PerformanceMonitoringAndManagement.ps1. Keep this script open
because you might want to run several of the load generation scenarios during this tutorial.
2. If you haven't done so already, provision a batch of tenants to make the monitoring context more
interesting. This process takes a few minutes.
a. Set $DemoScenario = 1 , Provision a batch of tenants.
b. To run the script and deploy an additional 17 tenants, press F5.
3. Now start the load generator to run a simulated load on all the tenants.
a. Set $DemoScenario = 2 , Generate normal intensity load (approximately 30 DTU).
b. To run the script, press F5.

Get the Wingtip Tickets SaaS database-per-tenant application scripts


The Wingtip Tickets SaaS database-per-tenant scripts and application source code are available in the
WingtipTicketsSaaS-DbPerTenant GitHub repo. For steps to download and unblock the Wingtip Tickets
PowerShell scripts, see the general guidance.

Install and configure Log Analytics workspace and the Azure SQL
Analytics solution
Azure Monitor is a separate service that must be configured. Azure Monitor logs collects log data, telemetry, and
metrics in a Log Analytics workspace. Just like other resources in Azure, a Log Analytics workspace must be
created. The workspace doesn't need to be created in the same resource group as the applications it monitors.
Doing so often makes the most sense though. For the Wingtip Tickets app, use a single resource group to make
sure the workspace is deleted with the application.
1. In the PowerShell ISE, open ..\WingtipTicketsSaaS-DbPerTenant-master\Learning Modules\Performance
Monitoring and Management\Log Analytics\Demo-LogAnalytics.ps1.
2. To run the script, press F5.
Now you can open Azure Monitor logs in the Azure portal. It takes a few minutes to collect telemetry in the Log
Analytics workspace and to make it visible. The longer you leave the system gathering diagnostic data, the more
interesting the experience is.
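Under the covers, Demo-LogAnalytics.ps1 creates the workspace and turns on diagnostics for the deployed resources. A minimal, hedged sketch of the same idea is shown below; the workspace name, SKU, and location are illustrative, and the diagnostic-settings cmdlet surface has changed across Az.Monitor versions, so treat this as a sketch rather than the script's exact implementation:

# Sketch: create a Log Analytics workspace
$ws = New-AzOperationalInsightsWorkspace `
    -ResourceGroupName "wingtip-dpt-<user>" `
    -Name "wtploganalytics-<user>" `
    -Location "West US 2" `
    -Sku "PerGB2018"

# Sketch: send elastic pool metrics to the workspace
$pool = Get-AzSqlElasticPool -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" -ElasticPoolName "Pool1"
Set-AzDiagnosticSetting -ResourceId $pool.ResourceId -WorkspaceId $ws.ResourceId -Enabled $true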

Use Log Analytics workspace and the SQL Analytics solution to


monitor pools and databases
In this exercise, open Log Analytics workspace in the Azure portal to look at the telemetry gathered for the
databases and pools.
1. Browse to the Azure portal. Select All services to open Log Analytics workspace. Then, search for Log
Analytics.
2. Select the workspace named wtploganalytics-<user>.
3. Select Overview to open the log analytics solution in the Azure portal.

IMPORTANT
It might take a couple of minutes before the solution is active.

4. Select the Azure SQL Analytics tile to open it.


5. The views in the solution scroll sideways, with their own inner scroll bar at the bottom. Refresh the page if
necessary.
6. To explore the summary page, select the tiles or individual databases to open a drill-down explorer.

7. Change the filter setting to modify the time range. For this tutorial, select Last 1 hour .
8. Select an individual database to explore the query usage and metrics for that database.

9. To see usage metrics, scroll the analytics page to the right.


10. Scroll the analytics page to the left, and select the server tile in the Resource Info list.

A page opens that shows the pools and databases on the server.
11. Select a pool. On the pool page that opens, scroll to the right to see the pool metrics.

12. Back in the Log Analytics workspace, select OMS Portal to open the workspace there.
In the Log Analytics workspace, you can explore the log and metric data further.
Monitoring and alerting in Azure Monitor logs are based on queries over the data in the workspace, unlike the
alerting defined on each resource in the Azure portal. By basing alerts on queries, you can define a single alert
that looks over all databases, rather than defining one per database. Queries are limited only by the data
available in the workspace.
For more information on how to use Azure Monitor logs to query and set alerts, see Work with alert rules in
Azure Monitor logs.
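For example, a single query over the workspace can return recent DTU consumption for every monitored database, regardless of which server or pool it is in. The following is a hedged sketch; the metric and column names depend on the diagnostic categories you enabled:

# Sketch: average DTU consumption per resource over the last hour, across the whole workspace
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "wingtip-dpt-<user>" -Name "wtploganalytics-<user>"
$query = @"
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL" and MetricName == "dtu_consumption_percent"
| where TimeGenerated > ago(1h)
| summarize avg(Average) by Resource, bin(TimeGenerated, 5m)
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspace.CustomerId -Query $query).Results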
Azure Monitor logs for SQL Database is charged based on the data volume in the workspace. In this tutorial, you
created a free workspace, which is limited to 500 MB per day. After that limit is reached, data is no longer added
to the workspace.

Next steps
In this tutorial you learned how to:
Install and configure Azure Monitor logs.
Use Azure Monitor logs to monitor pools and databases.
Try the Tenant analytics tutorial.

Additional resources
Additional tutorials that build on the initial Wingtip Tickets SaaS database-per-tenant application deployment
Azure Monitor logs
Restore a single tenant with a database-per-tenant
SaaS application
7/12/2022 • 6 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


The database-per-tenant model makes it easy to restore a single tenant to a prior point in time without affecting
other tenants.
In this tutorial, you learn two data recovery patterns:
Restore a database into a parallel database (side by side).
Restore a database in place, replacing the existing database.

PATTERN: DESCRIPTION

Restore into a parallel database: This pattern can be used for tasks such as review, auditing, and compliance to
allow a tenant to inspect their data from an earlier point. The tenant's current database remains online and
unchanged.

Restore in place: This pattern is typically used to recover a tenant to an earlier point, after a tenant accidentally
deletes or corrupts data. The original database is taken offline and replaced with the restored database.

To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip SaaS app is deployed. To deploy in less than five minutes, see Deploy and explore the Wingtip
SaaS application.
Azure PowerShell is installed. For details, see Get started with Azure PowerShell.

Introduction to the SaaS tenant restore patterns


There are two simple patterns for restoring an individual tenant's data. Because tenant databases are isolated
from each other, restoring one tenant has no impact on any other tenant's data. The Azure SQL Database point-
in-time-restore (PITR) feature is used in both patterns. PITR always creates a new database.
Restore in parallel: In the first pattern, a new parallel database is created alongside the tenant's current
database. The tenant is then given read-only access to the restored database. The restored data can be
reviewed and potentially used to overwrite current data values. It's up to the app designer to determine
how the tenant accesses the restored database and what options for recovery are provided. Simply
allowing the tenant to review their data at an earlier point might be all that's required in some scenarios.
Restore in place: The second pattern is useful if data was lost or corrupted and the tenant wants to
revert to an earlier point. The tenant is taken offline while the database is restored. The original database
is deleted, and the restored database is renamed. The backup chain of the original database remains
accessible after the deletion, so you can restore the database to an earlier point in time, if necessary.
If the database uses active geo-replication and restoring in parallel, we recommend that you copy any required
data from the restored copy into the original database. If you replace the original database with the restored
database, you need to reconfigure and resynchronize geo-replication.
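For reference, the restore-into-a-parallel-database pattern maps to a single point-in-time restore call. The following is a hedged sketch only; the point in time and names are illustrative, and the Restore-TenantInParallel.ps1 script used later in this tutorial also re-registers the restored database in the catalog:

# Sketch: restore contosoconcerthall to 30 minutes ago, into a new parallel database in the same pool
$source = Get-AzSqlDatabase -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" -DatabaseName "contosoconcerthall"

Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).ToUniversalTime().AddMinutes(-30) `
    -ResourceGroupName $source.ResourceGroupName `
    -ServerName $source.ServerName `
    -ResourceId $source.ResourceId `
    -TargetDatabaseName "contosoconcerthall_old" `
    -ElasticPoolName "Pool1"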
Get the Wingtip Tickets SaaS database-per-tenant application scripts
The Wingtip Tickets SaaS database-per-tenant scripts and application source code are available in the
WingtipTicketsSaaS-DbPerTenant GitHub repo. For steps to download and unblock the Wingtip Tickets SaaS
scripts, see the general guidance.

Before you start


When a database is created, it can take 10 to 15 minutes before the first full backup is available to restore from.
If you just installed the application, you might need to wait for a few minutes before you try this scenario.

Simulate a tenant accidentally deleting data


To demonstrate these recovery scenarios, first "accidentally" delete an event in one of the tenant databases.
Open the Events app to review the current events
1. Open the Events Hub (http://events.wingtip-dpt.<user>.trafficmanager.net), and select Contoso Concert Hall.

2. Scroll the list of events, and make a note of the last event in the list.
"Accidentally" delete the last event
1. In the PowerShell ISE, open ...\Learning Modules\Business Continuity and Disaster
Recovery\RestoreTenant\Demo-RestoreTenant.ps1, and set the following value:
$DemoScenario = 1 , Delete last event (with no ticket sales).
2. Press F5 to run the script and delete the last event. The following confirmation message appears:

Deleting last unsold event from Contoso Concert Hall ...


Deleted event 'Seriously Strauss' from Contoso Concert Hall venue.

3. The Contoso events page opens. Scroll down and verify that the event is gone. If the event is still in the
list, select Refresh and verify that it's gone.
Restore a tenant database in parallel with the production database
This exercise restores the Contoso Concert Hall database to a point in time before the event was deleted. This
scenario assumes that you want to review the deleted data in a parallel database.
The Restore-TenantInParallel.ps1 script creates a parallel tenant database named ContosoConcertHall_old, with a
parallel catalog entry. This pattern of restore is best suited for recovering from a minor data loss. You also can
use this pattern if you need to review data for compliance or auditing purposes. It's the recommended approach
when you use active geo-replication.
1. Complete the Simulate a tenant accidentally deleting data section.
2. In the PowerShell ISE, open ...\Learning Modules\Business Continuity and Disaster
Recovery\RestoreTenant\Demo-RestoreTenant.ps1.
3. Set $DemoScenario = 2 , Restore tenant in parallel.
4. To run the script, press F5.
The script restores the tenant database to a point in time before you deleted the event. The database is restored
to a new database named ContosoConcertHall_old. The catalog metadata that exists in this restored database is
deleted, and then the database is added to the catalog by using a key constructed from the
ContosoConcertHall_old name.
The demo script opens the events page for this new tenant database in your browser. Note from the URL
http://events.wingtip-dpt.<user>.trafficmanager.net/contosoconcerthall_old that this page shows data
from the restored database, where _old is added to the name.
Scroll the events listed in the browser to confirm that the event deleted in the previous section was restored.
Exposing the restored tenant as an additional tenant, with its own Events app, is unlikely to be how you provide
a tenant access to restored data. It serves to illustrate the restore pattern. Typically, you give read-only access to
the old data and retain the restored database for a defined period. In the sample, you can delete the restored
tenant entry after you're finished by running the Remove restored tenant scenario.
1. Set $DemoScenario = 4 , Remove restored tenant.
2. To run the script, press F5.
3. The ContosoConcertHall_old entry is now deleted from the catalog. Close the events page for this tenant in
your browser.

Restore a tenant in place, replacing the existing tenant database


This exercise restores the Contoso Concert Hall tenant to a point before the event was deleted. The Restore-
TenantInPlace script restores a tenant database to a new database and deletes the original. This restore pattern is
best suited to recovering from serious data corruption, and the tenant might have to accommodate significant
data loss.
1. In the PowerShell ISE, open the Demo-RestoreTenant.ps1 file.
2. Set $DemoScenario = 5 , Restore tenant in place.
3. To run the script, press F5.
The script restores the tenant database to a point before the event was deleted. It first takes the Contoso Concert
Hall tenant offline to prevent further updates. Then, a parallel database is created by restoring from the restore
point. The restored database is named with a time stamp to make sure the database name doesn't conflict with
the existing tenant database name. Next, the old tenant database is deleted, and the restored database is
renamed to the original database name. Finally, Contoso Concert Hall is brought online to allow the app access
to the restored database.
You successfully restored the database to a point in time before the event was deleted. When the Events page
opens, confirm that the last event was restored.
After you restore the database, it takes another 10 to 15 minutes before the first full backup is available to
restore from again.

Next steps
In this tutorial, you learned how to:
Restore a database into a parallel database (side by side).
Restore a database in place.
Try the Manage tenant database schema tutorial.

Additional resources
Additional tutorials that build on the Wingtip SaaS application
Overview of business continuity with Azure SQL Database
Learn about SQL Database backups
Manage schema in a SaaS application using the
database-per-tenant pattern with Azure SQL
Database

APPLIES TO: Azure SQL Database


As a database application evolves, changes inevitably need to be made to the database schema or reference
data. Database maintenance tasks are also needed periodically. Managing an application that uses the database
per tenant pattern requires that you apply these changes or maintenance tasks across a fleet of tenant
databases.
This tutorial explores two scenarios - deploying reference data updates for all tenants, and rebuilding an index
on the table containing the reference data. The Elastic jobs feature is used to execute these actions on all tenant
databases, and on the template database used to create new tenant databases.
In this tutorial you learn how to:
Create a job agent
Cause T-SQL jobs to be run on all tenant databases
Update reference data in all tenant databases
Create an index on a table in all tenant databases
To complete this tutorial, make sure the following prerequisites are met:
The Wingtip Tickets SaaS Database Per Tenant app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS database per tenant application
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell
The latest version of SQL Server Management Studio (SSMS) is installed. Download and Install SSMS

Introduction to SaaS schema management patterns


The database per tenant pattern isolates tenant data effectively, but increases the number of databases to
manage and maintain. Elastic Jobs facilitates administration and management of multiple databases. Jobs
enable you to securely and reliably run tasks (T-SQL scripts) against a group of databases. Jobs can deploy
schema and common reference data changes across all tenant databases in an application. Elastic Jobs can also
be used to maintain a template database used to create new tenants, ensuring it always has the latest schema
and reference data.
Elastic Jobs public preview
There's a new version of Elastic Jobs that is now an integrated feature of Azure SQL Database. This new version
of Elastic Jobs is currently in public preview. This public preview currently supports using PowerShell to create a
job agent, and T-SQL to create and manage jobs. For more information, see Elastic Database Jobs.

Get the Wingtip Tickets SaaS database per tenant application scripts
The application source code and management scripts are available in the WingtipTicketsSaaS-DbPerTenant
GitHub repo. Check out the general guidance for steps to download and unblock the Wingtip Tickets SaaS
scripts.

Create a job agent database and new job agent


This tutorial requires that you use PowerShell to create a job agent and its backing job agent database. The job agent
database holds job definitions, job status, and history. Once the job agent and its database are created, you can
create and monitor jobs immediately.
1. In PowerShell ISE , open …\Learning Modules\Schema Management\Demo-SchemaManagement.ps1.
2. Press F5 to run the script.
The Demo-SchemaManagement.ps1 script calls the Deploy-SchemaManagement.ps1 script to create a database
named jobagent on the catalog server. It then creates the job agent, using the database as a parameter.

Create a job to deploy new reference data to all tenants


In the Wingtip Tickets app, each tenant database includes a set of supported venue types. Each venue is of a
specific venue type, which defines the kind of events that can be hosted, and determines the background image
used in the app. For the application to support new kinds of events, this reference data must be updated and
new venue types added. In this exercise, you deploy an update to all the tenant databases to add two additional
venue types: Motorcycle Racing and Swimming Club.
First, review the venue types included in each tenant database. Connect to one of the tenant databases in SQL
Server Management Studio (SSMS) and inspect the VenueTypes table. You can also query this table in the Query
editor in the Azure portal, accessed from the database page.
1. Open SSMS and connect to the tenant server: tenants1-dpt-<user>.database.windows.net
2. To confirm that Motorcycle Racing and Swimming Club are not currently included, browse to the
contosoconcerthall database on the tenants1-dpt-<user> server and query the VenueTypes table.
Now let’s create a job to update the VenueTypes table in all the tenant databases to add the new venue types.
To create a new job, you use a set of jobs system stored procedures created in the jobagent database when the
job agent was created.
1. In SSMS, connect to the catalog server: catalog-dpt-<user>.database.windows.net
2. In SSMS, open the file …\Learning Modules\Schema Management\DeployReferenceData.sql
3. Modify the statement: SET @wtpUser = <user> and substitute the User value used when you deployed the
Wingtip Tickets SaaS Database Per Tenant app
4. Ensure you are connected to the jobagent database and press F5 to run the script
Observe the following elements in the DeployReferenceData.sql script (a minimal sketch of these calls follows the list):
sp_add_target_group creates the target group name DemoServerGroup.
sp_add_target_group_member is used to define the set of target databases. First the tenants1-dpt-
<user> server is added. Adding the server as a target causes the databases in that server at the time of job
execution to be included in the job. Then the basetenantdb database and the adhocreporting database (used
in a later tutorial) are added as targets.
sp_add_job creates a job named Reference Data Deployment.
sp_add_jobstep creates the job step containing T-SQL command text to update the reference table,
VenueTypes.
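As a rough orientation, a job built with these stored procedures follows the pattern sketched below. This is not the literal content of DeployReferenceData.sql; the credential names and the reference-data T-SQL in the job step are illustrative assumptions.

EXEC jobs.sp_add_target_group @target_group_name = N'DemoServerGroup';

EXEC jobs.sp_add_target_group_member
    @target_group_name = N'DemoServerGroup',
    @target_type = N'SqlServer',
    @refresh_credential_name = N'myrefreshcred',   -- credential used to enumerate databases on the server
    @server_name = N'tenants1-dpt-<user>.database.windows.net';

EXEC jobs.sp_add_job
    @job_name = N'Reference Data Deployment',
    @description = N'Deploy new venue types to all tenant databases';

EXEC jobs.sp_add_jobstep
    @job_name = N'Reference Data Deployment',
    @credential_name = N'myjobcred',               -- credential used to connect to each target database
    @target_group_name = N'DemoServerGroup',
    @command = N'
        -- Illustrative reference-data update; the real script''s T-SQL and column names may differ.
        INSERT INTO dbo.VenueTypes (VenueType, VenueTypeName)
        SELECT v.VenueType, v.VenueTypeName
        FROM (VALUES (N''motorcycleracing'', N''Motorcycle Racing''),
                     (N''swimmingclub'',     N''Swimming Club'')) AS v(VenueType, VenueTypeName)
        WHERE NOT EXISTS (SELECT 1 FROM dbo.VenueTypes t WHERE t.VenueType = v.VenueType);';

The actual script also adds the basetenantdb and adhocreporting databases as additional targets with further sp_add_target_group_member calls, as described above.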
The remaining views in the script display the existence of the objects and monitor job execution. Use these
queries to review the status value in the lifecycle column to determine when the job has finished on all the
target databases.
Once the script has completed, you can verify the reference data has been updated. In SSMS, browse to the
contosoconcerthall database on the tenants1-dpt-<user> server and query the VenueTypes table. Check that
Motorcycle Racing and Swimming Club are now present.

Create a job to manage the reference table index


This exercise uses a job to rebuild the index on the reference table primary key. This is a typical database
maintenance operation that might be done after loading large amounts of data.
Create a job using the same jobs 'system' stored procedures.
1. Open SSMS and connect to the catalog-dpt-<user>.database.windows.net server
2. Open the file …\Learning Modules\Schema Management\OnlineReindex.sql
3. Right click, select Connection, and connect to the catalog-dpt-<user>.database.windows.net server, if not
already connected
4. Ensure you are connected to the jobagent database and press F5 to run the script
Observe the following elements in the OnlineReindex.sql script:
sp_add_job creates a new job called “Online Reindex PK__VenueTyp__265E44FD7FD4C885”
sp_add_jobstep creates the job step containing T-SQL command text to update the index
The remaining views in the script monitor job execution. Use these queries to review the status value in the
lifecycle column to determine when the job has successfully finished on all target group members.
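The job step's command is essentially an online index rebuild. A minimal sketch of such a command is shown below; the index name is taken from the job name above, and the exact T-SQL in OnlineReindex.sql may differ.

ALTER INDEX [PK__VenueTyp__265E44FD7FD4C885] ON [dbo].[VenueTypes]
REBUILD WITH (ONLINE = ON);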

Next steps
In this tutorial you learned how to:
Create a job agent to run T-SQL jobs across multiple databases
Update reference data in all tenant databases
Create an index on a table in all tenant databases
Next, try the Ad hoc reporting tutorial to explore running distributed queries across tenant databases.

Additional resources
Additional tutorials that build upon the Wingtip Tickets SaaS Database Per Tenant application deployment
Managing scaled-out cloud databases
Cross-tenant reporting using distributed queries

APPLIES TO: Azure SQL Database


In this tutorial, you run distributed queries across the entire set of tenant databases for reporting. These queries
can extract insights buried in the day-to-day operational data of the Wingtip Tickets SaaS tenants. To do this, you
deploy an additional reporting database to the catalog server and use Elastic Query to enable distributed
queries.
In this tutorial you learn:
How to deploy a reporting database
How to run distributed queries across all tenant databases
How global views in each database can enable efficient querying across tenants
To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip Tickets SaaS Database Per Tenant app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS Database Per Tenant application
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell
SQL Server Management Studio (SSMS) is installed. To download and install SSMS, see Download SQL
Server Management Studio (SSMS).

Cross-tenant reporting pattern

One opportunity with SaaS applications is to use the vast amount of tenant data stored in the cloud to gain
insights into the operation and usage of your application. These insights can guide feature development,
usability improvements, and other investments in your apps and services.
Accessing this data in a single multi-tenant database is easy, but not so easy when distributed at scale across
potentially thousands of databases. One approach is to use Elastic Query, which enables querying across a
distributed set of databases with common schema. These databases can be distributed across different resource
groups and subscriptions, but need to share a common login. Elastic Query uses a single head database in
which external tables are defined that mirror tables or views in the distributed (tenant) databases. Queries
submitted to this head database are compiled to produce a distributed query plan, with portions of the query
pushed down to the tenant databases as needed. Elastic Query uses the shard map in the catalog database to
determine the location of all tenant databases. Setup and query of the head database are straightforward using
standard Transact-SQL, and the head database supports querying from tools like Power BI and Excel.
By distributing queries across the tenant databases, Elastic Query provides immediate insight into live
production data. As Elastic Query pulls data from potentially many databases, query latency can be higher than
equivalent queries submitted to a single multi-tenant database. Design queries to minimize the data that is
returned to the head database. Elastic Query is often best suited for querying small amounts of real-time data,
as opposed to building frequently used or complex analytics queries or reports. If queries don't perform well,
look at the execution plan to see what part of the query is pushed down to the remote database and how much
data is being returned. Queries that require complex aggregation or analytical processing may be better handled
by extracting tenant data into a database or data warehouse optimized for analytics queries. This pattern is
explained in the tenant analytics tutorial.

Get the Wingtip Tickets SaaS Database Per Tenant application scripts
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are available in the
WingtipTicketsSaaS-DbPerTenant GitHub repo. Check out the general guidance for steps to download and
unblock the Wingtip Tickets SaaS scripts.

Create ticket sales data


To run queries against a more interesting data set, create ticket sales data by running the ticket-generator.
1. In the PowerShell ISE, open the ...\Learning Modules\Operational Analytics\Adhoc Reporting\Demo-
AdhocReporting.ps1 script and set the following value:
$DemoScenario = 1, Purchase tickets for events at all venues .
2. Press F5 to run the script and generate ticket sales. While the script is running, continue the steps in this
tutorial. The ticket data is queried in the Run ad hoc distributed queries section, so wait for the ticket
generator to complete.

Explore the global views


In the Wingtip Tickets SaaS Database Per Tenant application, each tenant is given a database. Thus, the data
contained in the database tables is scoped to the perspective of a single tenant. However, when querying across
all databases, it's important that Elastic Query can treat the data as if it is part of a single logical database
sharded by tenant.
To simulate this pattern, a set of 'global' views are added to the tenant database that project a tenant ID into
each of the tables that are queried globally. For example, the VenueEvents view adds a computed VenueId to the
columns projected from the Events table. Similarly, the VenueTicketPurchases and VenueTickets views add a
computed VenueId column projected from their respective tables. These views are used by Elastic Query to
parallelize queries and push them down to the appropriate remote tenant database when a VenueId column is
present. This dramatically reduces the amount of data that is returned and results in a substantial increase in
performance for many queries. These global views have been pre-created in all tenant databases.
1. Open SSMS and connect to the tenants1-<USER> server.
2. Expand Databases, right-click contosoconcerthall, and select New Query.
3. Run the following queries to explore the difference between the single-tenant tables and the global views:

-- The base Venue table, that has no VenueId associated.
SELECT * FROM Venue

-- Notice the plural name 'Venues'. This view projects a VenueId column.
SELECT * FROM Venues

-- The base Events table, which has no VenueId column.
SELECT * FROM Events

-- This view projects the VenueId retrieved from the Venues table.
SELECT * FROM VenueEvents

In these views, the VenueId is computed as a hash of the Venue name, but any approach could be used to
introduce a unique value. This approach is similar to the way the tenant key is computed for use in the catalog.
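Conceptually, each global view looks something like the following sketch. The hash expression and column list here are illustrative assumptions, not the sample's literal view definition.

CREATE VIEW dbo.Venues AS
SELECT CONVERT(int, HASHBYTES('md5', VenueName)) AS VenueId,   -- computed venue/tenant key
       VenueName, VenueType, PostalCode, CountryCode
FROM   dbo.Venue;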
To examine the definition of the Venues view:
1. In Object Explorer, expand contosoconcerthall > Views:

2. Right-click dbo.Venues .
3. Select Script View as > CREATE To > New Query Editor Window
Script any of the other Venue views to see how they add the VenueId.

Deploy the database used for distributed queries


This exercise deploys the adhocreporting database. This is the head database that contains the schema used for
querying across all tenant databases. The database is deployed to the existing catalog server, which is the server
used for all management-related databases in the sample app.
1. In PowerShell ISE, open ...\Learning Modules\Operational Analytics\Adhoc Reporting\Demo-
AdhocReporting.ps1.
2. Set $DemoScenario = 2 , Deploy Ad hoc reporting database.
3. Press F5 to run the script and create the adhocreporting database.
In the next section, you add schema to the database so it can be used to run distributed queries.

Configure the 'head' database for running distributed queries


This exercise adds schema (the external data source and external table definitions) to the adhocreporting
database to enable querying across all tenant databases.
1. Open SQL Server Management Studio, and connect to the Adhoc Reporting database you created in the
previous step. The name of the database is adhocreporting.
2. Open ...\Learning Modules\Operational Analytics\Adhoc Reporting\ Initialize-AdhocReportingDB.sql in
SSMS.
3. Review the SQL script and note the following points (a minimal sketch of the kind of objects the script creates appears after this procedure):
Elastic Query uses a database-scoped credential to access each of the tenant databases. This credential
needs to be available in all the databases and should normally be granted the minimum rights required
to enable these queries.

With the catalog database as the external data source, queries are distributed to all databases registered
in the catalog at the time the query runs. As server names are different for each deployment, this script
gets the location of the catalog database from the current server (@@servername) where the script is
executed.

The external tables that reference the global views described in the previous section, and defined with
DISTRIBUTION = SHARDED(VenueId) . Because each VenueId maps to an individual database, this
improves performance for many scenarios as shown in the next section.
The local table VenueTypes that is created and populated. This reference data table is common in all
tenant databases, so it can be represented here as a local table and populated with the common data. For
some queries, having this table defined in the head database can reduce the amount of data that needs to
be moved to the head database.

If you include reference tables in this manner, be sure to update the table schema and data whenever you
update the tenant databases.
4. Press F5 to run the script and initialize the adhocreporting database.
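For orientation, the objects the initialization script creates look roughly like the sketch below. The credential names, catalog database name, shard map name, and column list are illustrative assumptions; check Initialize-AdhocReportingDB.sql for the actual definitions.

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';

CREATE DATABASE SCOPED CREDENTIAL TenantDbCred
WITH IDENTITY = 'developer', SECRET = '<password>';

-- External data source pointing at the catalog's shard map, so queries fan out to all registered tenant databases.
CREATE EXTERNAL DATA SOURCE WtpTenantDbs
WITH ( TYPE = SHARD_MAP_MANAGER,
       LOCATION = 'catalog-dpt-<user>.database.windows.net',
       DATABASE_NAME = 'tenantcatalog',
       SHARD_MAP_NAME = 'tenantcatalog',
       CREDENTIAL = TenantDbCred );

-- External table mirroring the VenueEvents global view. SHARDED(VenueId) lets filters on VenueId
-- be pushed down to a single remote tenant database.
CREATE EXTERNAL TABLE dbo.VenueEvents
( VenueId int, EventId int, EventName nvarchar(50), Date datetime )
WITH ( DATA_SOURCE = WtpTenantDbs, DISTRIBUTION = SHARDED(VenueId) );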
Now you can run distributed queries, and gather insights across all tenants!

Run distributed queries


Now that the adhocreporting database is set up, go ahead and run some distributed queries. Include the
execution plan for a better understanding of where the query processing is happening.
When inspecting the execution plan, hover over the plan icons for details.
It's important to note that setting DISTRIBUTION = SHARDED(VenueId) when the external tables are
defined improves performance for many scenarios. As each VenueId maps to an individual database, filtering is
easily done remotely, returning only the data needed.
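For example, a hypothetical query like the following (against the external table sketched earlier) lets the VenueId filter be pushed down so only one tenant database is touched:

SELECT EventName, Date
FROM   dbo.VenueEvents
WHERE  VenueId = 1234567;   -- illustrative VenueId value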
1. Open ...\Learning Modules\Operational Analytics\Adhoc Reporting\Demo-AdhocReportingQueries.sql in
SSMS.
2. Ensure you are connected to the adhocreporting database.
3. Select the Query menu and click Include Actual Execution Plan
4. Highlight the Which venues are currently registered? query, and press F5 .
The query returns the entire venue list, illustrating how quick, and easy it is to query across all tenants
and return data from each tenant.
Inspect the plan and see that the entire cost is in the remote query. Each tenant database executes the
query remotely and returns its venue information to the head database.

5. Select the next query, and press F5 .


This query joins data from the tenant databases and the local VenueTypes table (local, as it's a table in the
adhocreporting database).
Inspect the plan and see that the majority of cost is the remote query. Each tenant database returns its
venue info and performs a local join with the local VenueTypes table to display the friendly name.

6. Now select the On which day were the most tickets sold? query, and press F5 .
This query does a bit more complex joining and aggregation. Most of the processing occurs remotely.
Only single rows, containing each venue's daily ticket sale count per day, are returned to the head
database.

Next steps
In this tutorial you learned how to:
Run distributed queries across all tenant databases
Deploy a reporting database and define the schema required to run distributed queries.
Now try the Tenant Analytics tutorial to explore extracting data to a separate analytics database for more
complex analytics processing.

Additional resources
Additional tutorials that build upon the Wingtip Tickets SaaS Database Per Tenant application
Elastic Query
Cross-tenant analytics using extracted data - single-
tenant app

APPLIES TO: Azure SQL Database


In this tutorial, you walk through a complete analytics scenario for a single tenant implementation. The scenario
demonstrates how analytics can enable businesses to make smart decisions. Using data extracted from each
tenant database, you use analytics to gain insights into tenant behavior, including their use of the sample
Wingtip Tickets SaaS application. This scenario involves three steps:
1. Extract data from each tenant database and Load into an analytics store.
2. Transform the extracted data for analytics processing.
3. Use business intelligence tools to draw out useful insights, which can guide decision making.
In this tutorial you learn how to:
Create the tenant analytics store to extract the data into.
Use elastic jobs to extract data from each tenant database into the analytics store.
Optimize the extracted data (reorganize into a star-schema).
Query the analytics database.
Use Power BI for data visualization to highlight trends in tenant data and make recommendation for
improvements.

Offline tenant analytics pattern


Multi-tenant SaaS applications typically have a vast amount of tenant data stored in the cloud. This data
provides a rich source of insights about the operation and usage of your application, and the behavior of your
tenants. These insights can guide feature development, usability improvements, and other investments in the
app and platform.
Accessing data for all tenants is simple when all the data is in just one multi-tenant database. But the access is
more complex when distributed at scale across potentially thousands of databases. One way to tame the
complexity and to minimize the impact of analytics queries on transactional data is to extract data into a purpose
designed analytics database or data warehouse.
This tutorial presents a complete analytics scenario for Wingtip Tickets SaaS application. First, Elastic Jobs is
used to extract data from each tenant database and load it into staging tables in an analytics store. The analytics
store could either be an SQL Database or a dedicated SQL pool. For large-scale data extraction, Azure Data
Factory is recommended.
Next, the aggregated data is transformed into a set of star-schema tables. The tables consist of a central fact
table plus related dimension tables. For Wingtip Tickets:
The central fact table in the star-schema contains ticket data.
The dimension tables describe venues, events, customers, and purchase dates.
Together the central fact and dimension tables enable efficient analytical processing. The star-schema used in
this tutorial is shown in the following image:

Finally, the analytics store is queried using Power BI to highlight insights into tenant behavior and their use of
the Wingtip Tickets application. You run queries that:
Show the relative popularity of each venue
Highlight patterns in ticket sales for different events
Show the relative success of different venues in selling out their event
Understanding how each tenant is using the service is used to explore options for monetizing the service and
improving the service to help tenants be more successful. This tutorial provides basic examples of the kinds of
insights that can be gleaned from tenant data.

Setup
Prerequisites
To complete this tutorial, make sure the following prerequisites are met:
The Wingtip Tickets SaaS Database Per Tenant application is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip SaaS application
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are downloaded from
GitHub. See download instructions. Be sure to unblock the zip file before extracting its contents. Check out
the general guidance for steps to download and unblock the Wingtip Tickets SaaS scripts.
Power BI Desktop is installed. Download Power BI Desktop
The batch of additional tenants has been provisioned, see the Provision tenants tutorial .
A job account and job account database have been created. See the appropriate steps in the Schema
management tutorial .
Create data for the demo
In this tutorial, analysis is performed on ticket sales data. In the current step, you generate ticket data for all the
tenants. Later this data is extracted for analysis. Ensure you have provisioned the batch of tenants as described
earlier, so that you have a meaningful amount of data. A sufficiently large amount of data can expose a range of
different ticket purchasing patterns.
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1, and set the following value:
$DemoScenario = 1 Purchase tickets for events at all venues
2. Press F5 to run the script and create ticket purchasing history for every event in each venue. The script runs
for several minutes to generate tens of thousands of tickets.
Deploy the analytics store
Often there are numerous transactional databases that together hold all tenant data. You must aggregate the
tenant data from the many transactional databases into one analytics store. The aggregation enables efficient
query of the data. In this tutorial, an Azure SQL Database is used to store the aggregated data.
In the following steps, you deploy the analytics store, which is called tenantanalytics . You also deploy
predefined tables that are populated later in the tutorial:
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1
2. Set the $DemoScenario variable in the script to match your choice of analytics store:
To use SQL Database without column store, set $DemoScenario = 2
To use SQL Database with column store, set $DemoScenario = 3
3. Press F5 to run the demo script (that calls the Deploy-TenantAnalytics<XX>.ps1 script) which creates the
tenant analytics store.
Now that you have deployed the application and filled it with interesting tenant data, use SQL Server
Management Studio (SSMS) to connect to the tenants1-dpt-<User> and catalog-dpt-<User> servers using Login
= developer, Password = P@ssword1. See the introductory tutorial for more guidance.
In the Object Explorer, perform the following steps:
1. Expand the tenants1-dpt-<User> server.
2. Expand the Databases node, and see the list of tenant databases.
3. Expand the catalog-dpt-<User> server.
4. Verify that you see the analytics store and the jobaccount database.
See the following database items in the SSMS Object Explorer by expanding the analytics store node:
Tables TicketsRawData and EventsRawData hold raw extracted data from the tenant databases.
The star-schema tables are fact_Tickets , dim_Customers , dim_Venues , dim_Events , and dim_Dates .
The stored procedure sp_ShredRawExtractedData is used to populate the star-schema tables from the raw data tables.
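The table definitions below are a hypothetical sketch of this star-schema layout; the column names and types are illustrative, so inspect the deployed tables in SSMS for the actual definitions.

CREATE TABLE dbo.dim_Venues
( SK_VenueId  int IDENTITY(1,1) NOT NULL,   -- surrogate key
  VenueId     int NOT NULL,                 -- business key from the tenant database
  VenueName   nvarchar(50) NOT NULL,
  VenueType   nvarchar(30) NOT NULL );

CREATE TABLE dbo.fact_Tickets
( TicketPurchaseId int NOT NULL,
  SK_VenueId       int NOT NULL,            -- references dim_Venues
  SK_EventId       int NOT NULL,            -- references dim_Events
  SK_CustomerId    int NOT NULL,            -- references dim_Customers
  SK_DateId        int NOT NULL,            -- references dim_Dates
  PurchasePrice    money NOT NULL );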

Data extraction
Create target groups
Before proceeding, ensure you have deployed the job account and jobaccount database. In the next set of steps,
Elastic Jobs is used to extract data from each tenant database, and to store the data in the analytics store. Then
the second job shreds the data and stores it into tables in the star-schema. These two jobs run against two
different target groups, namely TenantGroup and AnalyticsGroup . The extract job runs against the
TenantGroup, which contains all the tenant databases. The shredding job runs against the AnalyticsGroup, which
contains just the analytics store. Create the target groups by using the following steps:
1. In SSMS, connect to the jobaccount database in catalog-dpt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ TargetGroups.sql
3. Modify the @User variable at the top of the script, replacing <User> with the user value used when you
deployed the Wingtip SaaS app.
4. Press F5 to run the script that creates the two target groups.
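If the job database uses the Elastic Jobs stored procedures, TargetGroups.sql does something along these lines. This is a hedged sketch; the credential names are assumptions and the actual script may differ.

-- All tenant databases on the tenants server are targets of the extract job.
EXEC jobs.sp_add_target_group @target_group_name = N'TenantGroup';
EXEC jobs.sp_add_target_group_member
    @target_group_name = N'TenantGroup',
    @target_type = N'SqlServer',
    @refresh_credential_name = N'myrefreshcred',
    @server_name = N'tenants1-dpt-<User>.database.windows.net';

-- The analytics store is the only target of the shredding job.
EXEC jobs.sp_add_target_group @target_group_name = N'AnalyticsGroup';
EXEC jobs.sp_add_target_group_member
    @target_group_name = N'AnalyticsGroup',
    @target_type = N'SqlDatabase',
    @server_name = N'catalog-dpt-<User>.database.windows.net',
    @database_name = N'tenantanalytics';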
Extract raw data from all tenants
Extensive data modifications might occur more frequently for ticket and customer data than for event and venue
data. Therefore, consider extracting ticket and customer data separately and more frequently than you extract
event and venue data. In this section, you define and schedule two separate jobs:
Extract ticket and customer data.
Extract event and venue data.
Each job extracts its data, and posts it into the analytics store. There a separate job shreds the extracted data into
the analytics star-schema.
1. In SSMS, connect to the jobaccount database in catalog-dpt-<User> server.
2. In SSMS, open ...\Learning Modules\Operational Analytics\Tenant Analytics\ExtractTickets.sql.
3. Modify @User at the top of the script, and replace <User> with the user name used when you deployed the
Wingtip SaaS app
4. Press F5 to run the script that creates and runs the job that extracts tickets and customers data from each
tenant database. The job saves the data into the analytics store.
5. Query the TicketsRawData table in the tenantanalytics database, to ensure that the table is populated with
tickets information from all tenants.

Repeat the preceding steps, except this time replace \ExtractTickets.sql with \ExtractVenuesEvents.sql in
step 2.
Successfully running the job populates the EventsRawData table in the analytics store with new events and
venues information from all tenants.
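A couple of simple checks, run against the tenantanalytics database, are enough to confirm that both staging tables received rows (the table names come from this tutorial; anything you project beyond a row count is an assumption):

SELECT COUNT(*) AS TicketRows FROM dbo.TicketsRawData;
SELECT COUNT(*) AS EventRows  FROM dbo.EventsRawData;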

Data reorganization
Shred extracted data to populate star-schema tables
The next step is to shred the extracted raw data into a set of tables that are optimized for analytics queries. A
star-schema is used. A central fact table holds individual ticket sales records. Other tables are populated with
related data about venues, events, and customers. And there are time dimension tables.
In this section of the tutorial, you define and run a job that merges the extracted raw data with the data in the
star-schema tables. After the merge job is finished, the raw data is deleted, leaving the tables ready to be
populated by the next tenant data extract job.
1. In SSMS, connect to the jobaccount database in catalog-dpt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ShredRawExtractedData.sql.
3. Press F5 to run the script to define a job that calls the sp_ShredRawExtractedData stored procedure in the
analytics store.
4. Allow enough time for the job to run successfully.
Check the Lifecycle column of the jobs.jobs_execution table for the status of the job. Ensure that the job
Succeeded before proceeding. A successful run displays data similar to the following chart:
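If the job database exposes the integrated Elastic Jobs views, a monitoring query along these lines shows the lifecycle of recent executions (the view and column names here are the integrated feature's; the sample's job database may differ):

SELECT job_name, lifecycle, start_time, end_time
FROM   jobs.job_executions
ORDER  BY start_time DESC;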

Data exploration
Visualize tenant data
The data in the star-schema table provides all the ticket sales data needed for your analysis. To make it easier to
see trends in large data sets, you need to visualize it graphically. In this section, you learn how to use Power BI
to manipulate and visualize the tenant data you have extracted and organized.
Use the following steps to connect to Power BI, and to import the views you created earlier:
1. Launch Power BI desktop.
2. From the Home ribbon, select Get Data , and select More… from the menu.
3. In the Get Data window, select Azure SQL Database.
4. In the database login window, enter your server name (catalog-dpt-<User>.database.windows.net). Select
Import for Data Connectivity Mode, and then click OK.
5. Select Database in the left pane, then enter user name = developer, and enter password = P@ssword1.
Click Connect .

6. In the Navigator pane, under the analytics database, select the star-schema tables: fact_Tickets,
dim_Events, dim_Venues, dim_Customers and dim_Dates. Then select Load .
Congratulations! You have successfully loaded the data into Power BI. Now you can start exploring interesting
visualizations to help gain insights into your tenants. Next you walk through how analytics can enable you to
provide data-driven recommendations to the Wingtip Tickets business team. The recommendations can help to
optimize the business model and customer experience.
You start by analyzing ticket sales data to see the variation in usage across the venues. Select the following
options in Power BI to plot a bar chart of the total number of tickets sold by each venue. Due to random
variation in the ticket generator, your results may be different.
The preceding plot confirms that the number of tickets sold by each venue varies. Venues that sell more tickets
are using your service more heavily than venues that sell fewer tickets. There may be an opportunity here to
tailor resource allocation according to different tenant needs.
You can further analyze the data to see how ticket sales vary over time. Select the following options in Power BI
to plot the total number of tickets sold each day for a period of 60 days.

The preceding chart displays that ticket sales spike for some venues. These spikes reinforce the idea that some
venues might be consuming system resources disproportionately. So far there is no obvious pattern in when the
spikes occur.
Next you want to further investigate the significance of these peak sale days. When do these peaks occur after
tickets go on sale? To plot tickets sold per day, select the following options in Power BI.
The preceding plot shows that some venues sell a lot of tickets on the first day of sale. As soon as tickets go on
sale at these venues, there seems to be a mad rush. This burst of activity by a few venues might impact the
service for other tenants.
You can drill into the data again to see if this mad rush is true for all events hosted by these venues. In previous
plots, you observed that Contoso Concert Hall sells a lot of tickets, and that Contoso also has a spike in ticket
sales on certain days. Play around with Power BI options to plot cumulative ticket sales for Contoso Concert Hall,
focusing on sale trends for each of its events. Do all events follow the same sale pattern?

The preceding plot for Contoso Concert Hall shows that the mad rush does not happen for all events. Play
around with the filter options to see sale trends for other venues.
The insights into ticket selling patterns might lead Wingtip Tickets to optimize their business model. Instead of
charging all tenants equally, perhaps Wingtip should introduce service tiers with different compute sizes. Larger
venues that need to sell more tickets per day could be offered a higher tier with a higher service level
agreement (SLA). Those venues could have their databases placed in a pool with higher per-database resource
limits. Each service tier could have an hourly sales allocation, with additional fees charged for exceeding the
allocation. Larger venues that have periodic bursts of sales would benefit from the higher tiers, and Wingtip
Tickets can monetize their service more efficiently.
Meanwhile, some Wingtip Tickets customers complain that they struggle to sell enough tickets to justify the
service cost. Perhaps in these insights there is an opportunity to boost ticket sales for underperforming venues.
Higher sales would increase the perceived value of the service. Right click fact_Tickets and select New
measure . Enter the following expression for the new measure called AverageTicketsSold :

AverageTicketsSold = AVERAGEX( SUMMARIZE( TableName, TableName[Venue Name] ), CALCULATE( SUM( TableName[Tickets Sold] ) ) )

Select the following visualization options to plot the percentage tickets sold by each venue to determine their
relative success.

The preceding plot shows that even though most venues sell more than 80% of their tickets, some are
struggling to fill more than half the seats. Play around with the Values Well to select maximum or minimum
percentage of tickets sold for each venue.
Earlier you deepened your analysis to discover that ticket sales tend to follow predictable patterns. This
discovery might let Wingtip Tickets help underperforming venues boost ticket sales by recommending dynamic
pricing. This discovery could reveal an opportunity to employ machine learning techniques to predict ticket sales
for each event. Predictions could also be made for the impact on revenue of offering discounts on ticket sales.
Power BI Embedded could be integrated into an event management application. The integration could help
visualize predicted sales and the effect of different discounts. The application could help devise an optimum
discount to be applied directly from the analytics display.
You have observed trends in tenant data from the WingTip application. You can contemplate other ways the app
can inform business decisions for SaaS application vendors. Vendors can better cater to the needs of their
tenants. Hopefully this tutorial has equipped you with the tools necessary to perform analytics on tenant data to
empower your business to make data-driven decisions.

Next steps
In this tutorial, you learned how to:
Deploy a tenant analytics database with pre-defined star-schema tables
Use elastic jobs to extract data from all the tenant databases
Merge the extracted data into tables in a star-schema designed for analytics
Query an analytics database
Use Power BI for data visualization to observe trends in tenant data
Congratulations!

Additional resources
Additional tutorials that build upon the Wingtip SaaS application.
Elastic Jobs.
Cross-tenant analytics using extracted data - multi-tenant app
Explore SaaS analytics with Azure SQL Database,
Azure Synapse Analytics, Data Factory, and Power
BI

APPLIES TO: Azure SQL Database


In this tutorial, you walk through an end-to-end analytics scenario. The scenario demonstrates how analytics
over tenant data can empower software vendors to make smart decisions. Using data extracted from each
tenant database, you use analytics to gain insights into tenant behavior, including their use of the sample
Wingtip Tickets SaaS application. This scenario involves three steps:
1. Extract data from each tenant database into an analytics store, in this case, a dedicated SQL pool.
2. Optimize the extracted data for analytics processing.
3. Use Business Intelligence tools to draw out useful insights, which can guide decision making.
In this tutorial you learn how to:
Create the tenant analytics store for loading.
Use Azure Data Factory (ADF) to extract data from each tenant database into the analytics data warehouse.
Optimize the extracted data (reorganize into a star-schema).
Query the analytics data warehouse.
Use Power BI for data visualization to highlight trends in tenant data and make recommendation for
improvements.

Analytics over extracted tenant data


SaaS applications hold a potentially vast amount of tenant data in the cloud. This data can provide a rich source
of insights about the operation and usage of your application, and the behavior of your tenants. These insights
can guide feature development, usability improvements, and other investments in the apps and platform.
Accessing the data for all tenants is simple when all the data is in just one multi-tenant database. But access is
more complex when distributed at scale across thousands of databases. One way to tame the complexity is to
extract the data to an analytics database or a data warehouse for query.
This tutorial presents an end-to-end analytics scenario for the Wingtip Tickets application. First, Azure Data
Factory (ADF) is used as the orchestration tool to extract tickets sales and related data from each tenant
database. This data is loaded into staging tables in an analytics store. The analytics store could either be a SQL
Database or a dedicated SQL pool. This tutorial uses Azure Synapse Analytics as the analytics store.
Next, the extracted data is transformed and loaded into a set of star-schema tables. The tables consist of a
central fact table plus related dimension tables:
The central fact table in the star-schema contains ticket data.
The dimension tables contain data about venues, events, customers, and purchase dates.
Together the central and dimension tables enable efficient analytical processing. The star-schema used in this
tutorial is displayed in the following image:

Finally, the star-schema tables are queried. Query results are displayed visually using Power BI to highlight
insights into tenant behavior and their use of the application. With this star-schema, you run queries that expose:
Who is buying tickets and from which venue.
Patterns and trends in the sale of tickets.
The relative popularity of each venue.
This tutorial provides basic examples of insights that can be gleaned from the Wingtip Tickets data.
Understanding how each venue uses the service might cause the Wingtip Tickets vendor to think about different
service plans targeted at more or less active venues, for example.

Setup
Prerequisites
To complete this tutorial, make sure the following prerequisites are met:
The Wingtip Tickets SaaS Database Per Tenant application is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip SaaS application.
The Wingtip Tickets SaaS Database Per Tenant scripts and application source code are downloaded from
GitHub. See download instructions. Be sure to unblock the zip file before extracting its contents.
Power BI Desktop is installed. Download Power BI Desktop.
The batch of additional tenants has been provisioned, see the Provision tenants tutorial .
Create data for the demo
This tutorial explores analytics over ticket sales data. In this step, you generate ticket data for all the tenants. In a
later step, this data is extracted for analysis. Ensure you provisioned the batch of tenants (as described earlier) so
that you have enough data to expose a range of different ticket purchasing patterns.
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics DW\Demo-
TenantAnalyticsDW.ps1, and set the following value:
$DemoScenario = 1 Purchase tickets for events at all venues
2. Press F5 to run the script and create ticket purchasing history for all the venues. With 20 tenants, the script
generates tens of thousands of tickets and may take 10 minutes or more.
Deploy Azure Synapse Analytics, Data Factory, and Blob Storage
In the Wingtip Tickets app, the tenants' transactional data is distributed over many databases. Azure Data
Factory (ADF) is used to orchestrate the Extract, Load, and Transform (ELT) of this data into the data warehouse.
To load data into Azure Synapse Analytics most efficiently, ADF extracts data into intermediate blob files and
then uses PolyBase to load the data into the data warehouse.
In this step, you deploy the additional resources used in the tutorial: a dedicated SQL pool called tenantanalytics,
an Azure Data Factory called dbtodwload-<user>, and an Azure storage account called wingtipstaging<user>.
The storage account is used to temporarily hold extracted data files as blobs before they are loaded into the data
warehouse. This step also deploys the data warehouse schema and defines the ADF pipelines that orchestrate
the ELT process.
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics DW\Demo-
TenantAnalyticsDW.ps1 and set:
$DemoScenario = 2 Deploy tenant analytics data warehouse, blob storage, and data factory
2. Press F5 to run the demo script and deploy the Azure resources.
Now review the Azure resources you deployed:
Tenant databases and analytics store
Use SQL Server Management Studio (SSMS) to connect to tenants1-dpt-<user> and catalog-dpt-<user>
servers. Replace <user> with the value used when you deployed the app. Use Login = developer and Password
= P@ssword1. See the introductory tutorial for more guidance.

In the Object Explorer:


1. Expand the tenants1-dpt-<user> server.
2. Expand the Databases node, and see the list of tenant databases.
3. Expand the catalog-dpt-<user> server.
4. Verify that you see the analytics store containing the following objects:
a. Tables raw_Tickets , raw_Customers , raw_Events and raw_Venues hold raw extracted data from
the tenant databases.
b. The star-schema tables are fact_Tickets , dim_Customers , dim_Venues , dim_Events , and
dim_Dates .
c. The stored procedure, sp_transformExtractedData is used to transform the data and load it into the
star-schema tables.

Blob storage
1. In the Azure portal, navigate to the resource group that you used for deploying the application. Verify that
a storage account called wingtipstaging<user> has been added.

2. Click wingtipstaging<user> storage account to explore the objects present.


3. Click Blobs tile
4. Click the container configfile
5. Verify that configfile contains a JSON file called TableConfig.json . This file contains the source and
destination table names, column names, and tracker column name.
Azure Data Factory (ADF)
In the Azure portal in the resource group, verify that an Azure Data Factory called dbtodwload-<user> has been
added.

This section explores the data factory created. Follow the steps below to launch the data factory:
1. In the portal, click the data factory called dbtodwload-<user> .
2. Click Author & Monitor tile to launch the Data Factory designer in a separate tab.

Extract, Load, and Transform data


Azure Data Factory is used for orchestrating extraction, loading, and transformation of data. In this tutorial, you
extract data from four different SQL views from each of the tenant databases: rawTickets , rawCustomers ,
rawEvents , and rawVenues . These views include venue ID, so you can discriminate data from each venue in
the data warehouse. The data is loaded into corresponding staging tables in the data warehouse: raw_Tickets ,
raw_Customers, raw_Events, and raw_Venues. A stored procedure then transforms the raw data and
populates the star-schema tables: fact_Tickets , dim_Customers , dim_Venues , dim_Events , and dim_Dates .
In the previous section, you deployed and initialized the necessary Azure resources, including the data factory.
The deployed data factory includes the pipelines, datasets, linked services, etc., required to extract, load, and
transform the tenant data. Let's explore these objects further and then trigger the pipeline to move data from
tenant databases to the data warehouse.
Data factory pipeline overview
This section explores the objects created in the data factory. The following figure describes the overall workflow
of the ADF pipeline used in this tutorial. If you prefer to explore the pipeline later and see the results first, skip to
the next section Trigger the pipeline run .

In the overview page, switch to Author tab on the left panel and observe that there are three pipelines and
three datasets created.

The three nested pipelines are: SQLDBToDW, DBCopy, and TableCopy.


Pipeline 1 - SQLDBToDW looks up the names of the tenant databases stored in the Catalog database (table
name: [__ShardManagement].[ShardsGlobal]) and for each tenant database, executes the DBCopy pipeline.
Upon completion, the provided sp_TransformExtractedData stored procedure is executed. This
stored procedure transforms the loaded data in the staging tables and populates the star-schema tables.
Pipeline 2 - DBCopy looks up the names of the source tables and columns from a configuration file stored in
blob storage. The TableCopy pipeline is then run for each of the four tables: TicketFacts, CustomerFacts,
EventFacts, and VenueFacts. The Foreach activity executes in parallel for all 20 databases. ADF allows a
maximum of 20 loop iterations to be run in parallel. Consider creating multiple pipelines for more databases.
Pipeline 3 - TableCopy uses row version numbers in SQL Database (rowversion) to identify rows that have
been changed or updated. This activity looks up the start and the end row version for extracting rows from the
source tables. The CopyTracker table stored in each tenant database tracks the last row extracted from each
source table in each run. New or changed rows are copied to the corresponding staging tables in the data
warehouse: raw_Tickets , raw_Customers , raw_Venues , and raw_Events . Finally the last row version is
saved in the CopyTracker table to be used as the initial row version for the next extraction.
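The extraction range lookup amounts to T-SQL along the lines of the following sketch. The CopyTracker table and source view names come from this tutorial, but the column names are assumptions, and in the real pipeline the query is parameterized from the configuration file.

DECLARE @startVersion binary(8), @endVersion binary(8) = @@DBTS;  -- @@DBTS returns the database's highest rowversion

SELECT @startVersion = LastRowVersionExtracted            -- hypothetical column name
FROM   dbo.CopyTracker
WHERE  TableName = 'rawTickets';

SELECT *
FROM   dbo.rawTickets                                     -- source view exposed by each tenant database
WHERE  RowVersion >  @startVersion
  AND  RowVersion <= @endVersion;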
There are also three parameterized linked services that link the data factory to the source SQL Databases, the
target dedicated SQL pool, and the intermediate Blob storage. In the Author tab, click on Connections to
explore the linked services, as shown in the following image:

Corresponding to the three linked services, there are three datasets that refer to the data you use in the pipeline
activities as inputs or outputs. Explore each of the datasets to observe connections and parameters used.
AzureBlob points to the configuration file containing source and target tables and columns, as well as the tracker
column in each source.
Data warehouse pattern overview
Azure Synapse is used as the analytics store to perform aggregation on the tenant data. In this sample, PolyBase
is used to load data into the data warehouse. Raw data is loaded into staging tables that have an identity column
to keep track of rows that have been transformed into the star-schema tables. The following image shows the
loading pattern:
Slowly Changing Dimension (SCD) type 1 dimension tables are used in this example. Each dimension has a
surrogate key defined using an identity column. As a best practice, the date dimension table is pre-populated to
save time. For the other dimension tables, a CREATE TABLE AS SELECT... (CTAS) statement is used to create a
temporary table containing the existing modified and non-modified rows, along with the surrogate keys. This is
done with IDENTITY_INSERT=ON. New rows are then inserted into the table with IDENTITY_INSERT=OFF. For
easy roll-back, the existing dimension table is renamed and the temporary table is renamed to become the new
dimension table. Before each run, the old dimension table is deleted.
Dimension tables are loaded before the fact table. This sequencing ensures that for each arriving fact, all
referenced dimensions already exist. As the facts are loaded, the business key for each corresponding dimension
is matched and the corresponding surrogate keys are added to each fact.
The final step of the transform deletes the staging data ready for the next execution of the pipeline.
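The rename-and-swap part of that pattern looks roughly like the following dedicated SQL pool sketch; the table and column names are illustrative, and the logic that merges staged rows into the CTAS result is omitted.

-- Build the replacement dimension table with CTAS (existing rows, plus merge logic for newly staged rows).
CREATE TABLE dbo.dim_Venues_upsert
WITH ( DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX )
AS
SELECT SK_VenueId, VenueId, VenueName, VenueType
FROM   dbo.dim_Venues;

-- Swap the tables. Keeping the old table around briefly makes roll-back easy.
RENAME OBJECT dbo.dim_Venues        TO dim_Venues_old;
RENAME OBJECT dbo.dim_Venues_upsert TO dim_Venues;
DROP TABLE dbo.dim_Venues_old;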
Trigger the pipeline run
Follow the steps below to run the complete extract, load, and transform pipeline for all the tenant databases:
1. In the Author tab of the ADF user interface, select SQLDBToDW pipeline from the left pane.
2. Click Trigger and from the pulled down menu click Trigger Now . This action runs the pipeline immediately.
In a production scenario, you would define a timetable for running the pipeline to refresh the data on a
schedule.

3. On the Pipeline Run page, click Finish.


Monitor the pipeline run
1. In the ADF user interface, switch to the Monitor tab from the menu on the left.
2. Click Refresh until SQLDBToDW pipeline's status is Succeeded .

3. Connect to the data warehouse with SSMS and query the star-schema tables to verify that data was loaded in
these tables.
Once the pipeline has completed, the fact table holds ticket sales data for all venues and the dimension tables
are populated with the corresponding venues, events, and customers.

Data Exploration
Visualize tenant data
The data in the star-schema provides all the ticket sales data needed for your analysis. Visualizing data
graphically makes it easier to see trends in large data sets. In this section, you use Power BI to manipulate and
visualize the tenant data in the data warehouse.
Use the following steps to connect to Power BI, and to import the views you created earlier:
1. Launch Power BI desktop.
2. From the Home ribbon, select Get Data , and select More… from the menu.
3. In the Get Data window, select Azure SQL Database .
4. In the database login window, enter your server name (catalog-dpt-<User>.database.windows.net ).
Select Import for Data Connectivity Mode, and then click OK.

5. Select Database in the left pane, then enter user name = developer, and enter password = P@ssword1.
Click Connect .

6. In the Navigator pane, under the analytics database, select the star-schema tables: fact_Tickets ,
dim_Events , dim_Venues , dim_Customers and dim_Dates . Then select Load .
Congratulations! You successfully loaded the data into Power BI. Now explore interesting visualizations to gain
insights into your tenants. Let's walk through how analytics can provide some data-driven recommendations to
the Wingtip Tickets business team. The recommendations can help to optimize the business model and
customer experience.
Start by analyzing ticket sales data to see the variation in usage across the venues. Select the options shown in
Power BI to plot a bar chart of the total number of tickets sold by each venue. (Due to random variation in the
ticket generator, your results may be different.)
The preceding plot confirms that the number of tickets sold by each venue varies. Venues that sell more tickets
are using your service more heavily than venues that sell fewer tickets. There may be an opportunity here to
tailor resource allocation according to different tenant needs.
You can further analyze the data to see how ticket sales vary over time. Select the options shown in the
following image in Power BI to plot the total number of tickets sold each day for a period of 60 days.

The preceding chart shows that ticket sales spike for some venues. These spikes reinforce the idea that some
venues might be consuming system resources disproportionately. So far there is no obvious pattern in when the
spikes occur.
Next let's investigate the significance of these peak sale days. When do these peaks occur after tickets go on
sale? To plot tickets sold per day, select the options shown in the following image in Power BI.
This plot shows that some venues sell large numbers of tickets on the first day of sale. As soon as tickets go on
sale at these venues, there seems to be a mad rush. This burst of activity by a few venues might impact the
service for other tenants.
You can drill into the data again to see if this mad rush is true for all events hosted by these venues. In previous
plots, you saw that Contoso Concert Hall sells many tickets, and that Contoso also has a spike in ticket sales on
certain days. Play around with Power BI options to plot cumulative ticket sales for Contoso Concert Hall,
focusing on sale trends for each of its events. Do all events follow the same sale pattern? Try to produce a plot
like the one below.

This plot of cumulative ticket sales over time for Contoso Concert Hall for each event shows that the mad rush
does not happen for all events. Play around with the filter options to explore sale trends for other venues.
The insights into ticket selling patterns might lead Wingtip Tickets to optimize their business model. Instead of
charging all tenants equally, perhaps Wingtip should introduce service tiers with different compute sizes. Larger
venues that need to sell more tickets per day could be offered a higher tier with a higher service level
agreement (SLA). Those venues could have their databases placed in a pool with higher per-database resource
limits. Each service tier could have an hourly sales allocation, with additional fees charged for exceeding the
allocation. Larger venues that have periodic bursts of sales would benefit from the higher tiers, and Wingtip
Tickets can monetize their service more efficiently.
Meanwhile, some Wingtip Tickets customers complain that they struggle to sell enough tickets to justify the
service cost. Perhaps in these insights there is an opportunity to boost ticket sales for underperforming venues.
Higher sales would increase the perceived value of the service. Right-click fact_Tickets and select New
measure. Enter the following expression for the new measure, called AverageTicketsSold:

AverageTicketsSold = DIVIDE(DIVIDE(COUNTROWS(fact_Tickets),DISTINCT(dim_Venues[VenueCapacity]))*100,
COUNTROWS(dim_Events))

Select the following visualization options to plot the percentage of tickets sold by each venue to determine their
relative success.

The plot above shows that even though most venues sell more than 80% of their tickets, some are struggling to
fill more than half their seats. Play around with the Values well to select the maximum or minimum percentage of
tickets sold for each venue.

Embedding analytics in your apps


This tutorial has focused on cross-tenant analytics used to improve the software vendor's understanding of their
tenants. Analytics can also provide insights to the tenants, to help them manage their business more effectively
themselves.
In the Wingtip Tickets example, you earlier discovered that ticket sales tend to follow predictable patterns. This
insight might be used to help underperforming venues boost ticket sales. Perhaps there is an opportunity to
employ machine learning techniques to predict ticket sales for events. The effects of price changes could also be
modeled, to allow the impact of offering discounts to be predicted. Power BI Embedded could be integrated into
an event management application to visualize predicted sales, including the impact of discounts on total seats
sold and revenue on low-selling events. With Power BI Embedded, you can even integrate actually applying the
discount to the ticket prices, right in the visualization experience.

Next steps
In this tutorial, you learned how to:
Create the tenant analytics store for loading.
Use Azure Data Factory (ADF) to extract data from each tenant database into the analytics data warehouse.
Optimize the extracted data (reorganize into a star-schema).
Query the analytics data warehouse.
Use Power BI for data visualization to highlight trends in tenant data and make recommendations for
improvements.
Congratulations!

Additional resources
Additional tutorials that build upon the Wingtip SaaS application.
Use geo-restore to recover a multitenant SaaS
application from database backups
7/12/2022 • 18 minutes to read

APPLIES TO: Azure SQL Database


This tutorial explores a full disaster recovery scenario for a multitenant SaaS application implemented with the
database per tenant model. You use geo-restore to recover the catalog and tenant databases from automatically
maintained geo-redundant backups into an alternate recovery region. After the outage is resolved, you use geo-
replication to repatriate changed databases to their original region.

Geo-restore is the lowest-cost disaster recovery solution for Azure SQL Database. However, restoring from geo-
redundant backups can result in data loss of up to one hour. It can take considerable time, depending on the size
of each database.

NOTE
Recover applications with the lowest possible RPO and RTO by using geo-replication instead of geo-restore.

This tutorial explores both restore and repatriation workflows. You learn how to:
Sync database and elastic pool configuration info into the tenant catalog.
Set up a mirror image environment in a recovery region that includes application, servers, and pools.
Recover catalog and tenant databases by using geo-restore.
Use geo-replication to repatriate the tenant catalog and changed tenant databases after the outage is
resolved.
Update the catalog as each database is restored (or repatriated) to track the current location of the active
copy of each tenant's database.
Ensure that the application and tenant database are always co-located in the same Azure region to reduce
latency.
Before you start this tutorial, complete the following prerequisites:
Deploy the Wingtip Tickets SaaS database per tenant app. To deploy in less than five minutes, see Deploy and
explore the Wingtip Tickets SaaS database per tenant application.
Install Azure PowerShell. For details, see Getting started with Azure PowerShell.

Introduction to the geo-restore recovery pattern


Disaster recovery (DR) is an important consideration for many applications, whether for compliance reasons or
business continuity. If there's a prolonged service outage, a well-prepared DR plan can minimize business
disruption. A DR plan based on geo-restore must accomplish several goals:
Reserve all needed capacity in the chosen recovery region as quickly as possible to ensure that it's available
to restore tenant databases.
Establish a mirror image recovery environment that reflects the original pool and database configuration.
Allow cancellation of the restore process in mid-flight if the original region comes back online.
Enable tenant provisioning quickly so new tenant onboarding can restart as soon as possible.
Be optimized to restore tenants in priority order.
Be optimized to get tenants online as soon as possible by doing steps in parallel where practical.
Be resilient to failure, restartable, and idempotent.
Repatriate databases to their original region with minimal impact to tenants when the outage is resolved.

NOTE
The application is recovered into the paired region of the region in which the application is deployed. For more
information, see Azure paired regions.

This tutorial uses features of Azure SQL Database and the Azure platform to address these challenges:
Azure Resource Manager templates, to reserve all needed capacity as quickly as possible. Azure Resource
Manager templates are used to provision a mirror image of the original servers and elastic pools in the
recovery region. A separate server and pool are also created for provisioning new tenants.
Elastic Database Client Library (EDCL), to create and maintain a tenant database catalog. The extended
catalog includes periodically refreshed pool and database configuration information.
Shard management recovery features of the EDCL, to maintain database location entries in the catalog
during recovery and repatriation.
Geo-restore, to recover the catalog and tenant databases from automatically maintained geo-redundant
backups.
Asynchronous restore operations, sent in tenant-priority order, are queued for each pool by the system and
processed in batches so the pool isn't overloaded. These operations can be canceled before or during
execution if necessary.
Geo-replication, to repatriate databases to the original region after the outage. There is no data loss and
minimal impact on the tenant when you use geo-replication.
SQL server DNS aliases, to allow the catalog sync process to connect to the active catalog regardless of its
location.

Get the disaster recovery scripts


The DR scripts used in this tutorial are available in the Wingtip Tickets SaaS database per tenant GitHub
repository. Check out the general guidance for steps to download and unblock the Wingtip Tickets management
scripts.
IMPORTANT
Like all the Wingtip Tickets management scripts, the DR scripts are sample quality and are not to be used in production.

Review the healthy state of the application


Before you start the recovery process, review the normal healthy state of the application.
1. In your web browser, open the Wingtip Tickets events hub (http://events.wingtip-dpt.
<user>.trafficmanager.net, replace <user> with your deployment's user value).
Scroll to the bottom of the page and notice the catalog server name and location in the footer. The
location is the region in which you deployed the app.

TIP
Hover the mouse over the location to enlarge the display.

2. Select the Contoso Concert Hall tenant and open its event page.
In the footer, notice the tenant's server name. The location is the same as the catalog server's location.

3. In the Azure portal, review and open the resource group in which you deployed the app.
Notice the resources and the region in which the app service components and SQL Database are deployed.

Sync the tenant configuration into the catalog


In this task, you start a process to sync the configuration of the servers, elastic pools, and databases into the
tenant catalog. This information is used later to configure a mirror image environment in the recovery region.

IMPORTANT
For simplicity, the sync process and other long-running recovery and repatriation processes are implemented in these
samples as local PowerShell jobs or sessions that run under your client user login. The authentication tokens issued when
you log in expire after several hours, and the jobs will then fail. In a production scenario, long-running processes should
be implemented as reliable Azure services of some kind, running under a service principal. See Use Azure PowerShell to
create a service principal with a certificate.

1. In the PowerShell ISE, open the ...\Learning Modules\UserConfig.psm1 file. Replace <resourcegroup> and
<user> on lines 10 and 11 with the values you used when you deployed the app. Save the file.

2. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script.
In this tutorial, you run each of the scenarios in this PowerShell script, so keep this file open.
3. Set the following:
$DemoScenario = 1: Start a background job that syncs tenant server and pool configuration info into the
catalog.
4. To run the sync script, select F5.
This information is used later to ensure that recovery creates a mirror image of the servers, pools, and
databases in the recovery region.

Leave the PowerShell window running in the background and continue with the rest of this tutorial.
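If you later want to confirm that the sync is still running, the standard job cmdlets are enough. A minimal sketch,
assuming the sync was started as a background job in your current PowerShell session:

# Check the state of background jobs and peek at their recent output without consuming it.
Get-Job | Format-Table Id, Name, State
Get-Job | Receive-Job -Keep | Select-Object -Last 20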

NOTE
The sync process connects to the catalog via a DNS alias. The alias is modified during restore and repatriation to point to
the active catalog. The sync process keeps the catalog up to date with any database or pool configuration changes made
in the recovery region. During repatriation, these changes are applied to the equivalent resources in the original region.

Geo-restore recovery process overview


The geo-restore recovery process deploys the application and restores databases from backups into the
recovery region.
The recovery process does the following:
1. Disables the Azure Traffic Manager endpoint for the web app in the original region. Disabling the
endpoint prevents users from connecting to the app in an invalid state should the original region come
online during recovery.
2. Provisions a recovery catalog server in the recovery region, geo-restores the catalog database, and
updates the activecatalog alias to point to the restored catalog server. Changing the catalog alias ensures
that the catalog sync process always syncs to the active catalog.
3. Marks all existing tenants in the recovery catalog as offline to prevent access to tenant databases before
they are restored.
4. Provisions an instance of the app in the recovery region and configures it to use the restored catalog in
that region. To keep latency to a minimum, the sample app is designed to always connect to a tenant
database in the same region.
5. Provisions a server and elastic pool in which new tenants are provisioned. Creating these resources
ensures that provisioning new tenants doesn't interfere with the recovery of existing tenants.
6. Updates the new tenant alias to point to the server for new tenant databases in the recovery region.
Changing this alias ensures that databases for any new tenants are provisioned in the recovery region.
7. Provisions servers and elastic pools in the recovery region for restoring tenant databases. These servers
and pools are a mirror image of the configuration in the original region. Provisioning pools up front
reserves the capacity needed to restore all the databases.
An outage in a region might place significant pressure on the resources available in the paired region. If
you rely on geo-restore for DR, then reserving resources quickly is recommended. Consider geo-
replication if it's critical that an application is recovered in a specific region.
8. Enables the Traffic Manager endpoint for the web app in the recovery region. Enabling this endpoint
allows the application to provision new tenants. At this stage, existing tenants are still offline.
9. Submits batches of requests to restore databases in priority order (a single-database sketch follows this list).
Batches are organized so that databases are restored in parallel across all pools.
Restore requests are submitted asynchronously so they are submitted quickly and queued for
execution in each pool.
Because restore requests are processed in parallel across all pools, it's better to distribute
important tenants across many pools.
10. Monitors the service to determine when databases are restored. After a tenant database is restored, it's
marked online in the catalog, and a rowversion sum for the tenant database is recorded.
Tenant databases can be accessed by the application as soon as they're marked online in the
catalog.
A sum of rowversion values in the tenant database is stored in the catalog. This sum acts as a
fingerprint that allows the repatriation process to determine if the database was updated in the
recovery region.
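The recovery jobs wrap these steps in catalog bookkeeping and retry logic, but the core operation for each tenant
database is a geo-restore from its latest geo-redundant backup into a recovery elastic pool. A simplified sketch of
that single operation with the Az.Sql cmdlets follows; the resource group, server, pool, and database names are
placeholders, not necessarily the exact names the sample scripts use:

# Find the latest geo-redundant backup for one tenant database in the original region.
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" -DatabaseName "contosoconcerthall"

# Geo-restore it into the mirror-image elastic pool on the recovery server.
Restore-AzSqlDatabase -FromGeoBackup `
    -ResourceGroupName "wingtip-dpt-<user>-recovery" `
    -ServerName "tenants1-dpt-<user>-recovery" `
    -TargetDatabaseName "contosoconcerthall" `
    -ResourceId $geoBackup.ResourceID `
    -ElasticPoolName "Pool1"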

Run the recovery script


IMPORTANT
This tutorial restores databases from geo-redundant backups. Although these backups are typically available within 10
minutes, it can take up to an hour. The script pauses until they're available.
Imagine there's an outage in the region in which the application is deployed, and run the recovery script:
1. In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set the following value:
$DemoScenario = 2: Recover the app into a recovery region by restoring from geo-redundant backups.
2. To run the script, select F5.
The script opens in a new PowerShell window and then starts a set of PowerShell jobs that run in
parallel. These jobs restore servers, pools, and databases to the recovery region.
The recovery region is the paired region associated with the Azure region in which you deployed
the application. For more information, see Azure paired regions.
3. Monitor the status of the recovery process in the PowerShell window.

NOTE
To explore the code for the recovery jobs, review the PowerShell scripts in the ...\Learning Modules\Business Continuity
and Disaster Recovery\DR-RestoreFromBackup\RecoveryJobs folder.

Review the application state during recovery


While the application endpoint is disabled in Traffic Manager, the application is unavailable. The catalog is
restored, and all the tenants are marked offline. The application endpoint in the recovery region is then enabled,
and the application is back online. Although the application is available, tenants appear offline in the events hub
until their databases are restored. It's important to design your application to handle offline tenant databases.
After the catalog database has been recovered but before the tenants are back online, refresh the Wingtip
Tickets events hub in your web browser.
In the footer, notice that the catalog server name now has a -recovery suffix and is located in the
recovery region.
Notice that tenants that are not yet restored are marked as offline and are not selectable.
If you open a tenant's events page directly while the tenant is offline, the page displays a tenant
offline notification. For example, if Contoso Concert Hall is offline, try to open
http://events.wingtip-dpt.<user>.trafficmanager.net/contosoconcerthall.

Provision a new tenant in the recovery region


Even before tenant databases are restored, you can provision new tenants in the recovery region. New tenant
databases provisioned in the recovery region are repatriated with the recovered databases later.
1. In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set the following property:
$DemoScenario = 3: Provision a new tenant in the recovery region.
2. To run the script, select F5.
3. The Hawthorn Hall events page opens in the browser when provisioning finishes.
Notice that the Hawthorn Hall database is located in the recovery region.
4. In the browser, refresh the Wingtip Tickets events hub page to see Hawthorn Hall included.
If you provisioned Hawthorn Hall without waiting for the other tenants to restore, other tenants might
still be offline.

Review the recovered state of the application


When the recovery process finishes, the application and all tenants are fully functional in the recovery region.
1. After the display in the PowerShell console window indicates all the tenants are recovered, refresh the
events hub.
The tenants all appear online, including the new tenant, Hawthorn Hall.
2. Click on Contoso Concert Hall and open its events page.
In the footer, notice that the database is located on the recovery server located in the recovery region.

3. In the Azure portal, open the list of resource groups.


Notice the resource group that you deployed, plus the recovery resource group, with the -recovery suffix.
The recovery resource group contains all the resources created during the recovery process, plus new
resources created during the outage.
4. Open the recovery resource group and notice the following items:
The recovery versions of the catalog and tenants1 servers, with the -recovery suffix. The restored
catalog and tenant databases on these servers all have the names used in the original region.
The tenants2-dpt-<user>-recovery SQL server. This server is used for provisioning new tenants
during the outage.
The app service named events-wingtip-dpt-<recoveryregion>-<user>, which is the recovery
instance of the events app.
5. Open the tenants2-dpt-<user>-recovery SQL server. Notice that it contains the database hawthornhall
and the elastic pool Pool1. The hawthornhall database is configured as an elastic database in the Pool1
elastic pool.

Change the tenant data


In this task, you update one of the restored tenant databases. The repatriation process copies restored databases
that have been changed to the original region.
1. In your browser, find the events list for the Contoso Concert Hall, scroll through the events, and notice the
last event, Seriously Strauss.
2. In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set the following value:
$DemoScenario = 4: Delete an event from a tenant in the recovery region.
3. To execute the script, select F5.
4. Refresh the Contoso Concert Hall events page (http://events.wingtip-dpt.
<user>.trafficmanager.net/contosoconcerthall), and notice that the event Seriously Strauss is missing.
At this point in the tutorial, you have recovered the application, which is now running in the recovery region. You
have provisioned a new tenant in the recovery region and modified data of one of the restored tenants.

NOTE
Other tutorials in the sample are not designed to run with the app in the recovery state. If you want to explore other
tutorials, be sure to repatriate the application first.

Repatriation process overview


The repatriation process reverts the application and its databases to its original region after an outage is
resolved.
The process:
1. Stops any ongoing restore activity and cancels any outstanding or in-flight database restore requests.
2. Reactivates, in the original region, tenant databases that have not been changed since the outage. These
databases include those not yet recovered and those recovered but not changed afterward. The
reactivated databases are exactly as last accessed by their tenants.
3. Provisions a mirror image of the new tenant's server and elastic pool in the original region. After this
action is complete, the new tenant alias is updated to point to this server. Updating the alias causes new
tenant onboarding to occur in the original region instead of the recovery region.
4. Uses geo-replication to move the catalog to the original region from the recovery region.
5. Updates pool configuration in the original region so it's consistent with changes that were made in the
recovery region during the outage.
6. Creates the required servers and pools to host any new databases created during the outage.
7. Uses geo-replication to repatriate restored tenant databases that have been updated post-restore and all
new tenant databases provisioned during the outage.
8. Cleans up resources created in the recovery region during the restore process.
To limit the number of tenant databases that need to be repatriated, steps 1 to 3 are done promptly.
Step 4 is only done if the catalog in the recovery region has been modified during the outage. The catalog is
updated if new tenants are created or if any database or pool configuration is changed in the recovery region.
It's important that step 7 causes minimal disruption to tenants and no data is lost. To achieve this goal, the
process uses geo-replication.
Before each database is geo-replicated, the corresponding database in the original region is deleted. The
database in the recovery region is then geo-replicated, creating a secondary replica in the original region. After
replication is complete, the tenant is marked offline in the catalog, which breaks any connections to the database
in the recovery region. The database is then failed over, causing any pending transactions to process on the
secondary so no data is lost.
On failover, the database roles are reversed. The secondary in the original region becomes the primary read-
write database, and the database in the recovery region becomes a read-only secondary. The tenant entry in the
catalog is updated to reference the database in the original region, and the tenant is marked online. At this point,
repatriation of the database is complete.
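At the database level, the repatriation jobs drive this sequence with the standard geo-replication cmdlets. A
minimal sketch with placeholder names (the sample scripts add catalog updates and retry handling around these
calls):

# From the recovery server, create a secondary replica of the tenant database back on the
# original server (the original copy of the database has already been deleted).
New-AzSqlDatabaseSecondary -ResourceGroupName "wingtip-dpt-<user>-recovery" `
    -ServerName "tenants1-dpt-<user>-recovery" -DatabaseName "contosoconcerthall" `
    -PartnerResourceGroupName "wingtip-dpt-<user>" -PartnerServerName "tenants1-dpt-<user>" `
    -AllowConnections "No"

# After seeding completes, fail over on the secondary (original region) so it becomes the
# read-write primary. A planned failover processes pending transactions, so no data is lost.
Set-AzSqlDatabaseSecondary -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" -DatabaseName "contosoconcerthall" `
    -PartnerResourceGroupName "wingtip-dpt-<user>-recovery" -Failover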
Applications should be written with retry logic to ensure that they reconnect automatically when connections
are broken. When they use the catalog to broker the reconnection, they connect to the repatriated database in
the original region. Although the brief disconnect is often not noticed, you might choose to repatriate databases
out of business hours.
After a database is repatriated, the secondary database in the recovery region can be deleted. The database in
the original region then relies again on geo-restore for DR protection.
In step 8, resources in the recovery region, including the recovery servers and pools, are deleted.

Run the repatriation script


Let's imagine the outage is resolved and run the repatriation script.
If you've followed the tutorial, the script immediately reactivates Fabrikam Jazz Club and Dogwood Dojo in the
original region because they're unchanged. It then repatriates the new tenant, Hawthorn Hall, and Contoso
Concert Hall, which was modified. The script also repatriates the catalog, which was updated when
Hawthorn Hall was provisioned.
1. In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, verify that the Catalog Sync process is still
running in its PowerShell instance. If necessary, restart it by setting:
$DemoScenario = 1: Start synchronizing tenant server, pool, and database configuration info into the
catalog.
To run the script, select F5.
2. Then to start the repatriation process, set:
$DemoScenario = 5: Repatriate the app into its original region.
To run the recovery script in a new PowerShell window, select F5. Repatriation takes several minutes and
can be monitored in the PowerShell window.
3. While the script is running, refresh the events hub page (http://events.wingtip-dpt.
<user>.trafficmanager.net).
Notice that all the tenants are online and accessible throughout this process.
4. Select the Fabrikam Jazz Club to open it. If you didn't modify this tenant, notice from the footer that the
server is already reverted to the original server.
5. Open or refresh the Contoso Concert Hall events page. Notice from the footer that, initially, the database
is still on the -recovery server.
6. Refresh the Contoso Concert Hall events page when the repatriation process finishes, and notice that the
database is now in your original region.
7. Refresh the events hub again and open Hawthorn Hall. Notice that its database is also located in the
original region.

Clean up recovery region resources after repatriation


After repatriation is complete, it's safe to delete the resources in the recovery region.
IMPORTANT
Delete these resources promptly to stop all billing for them.

The restore process creates all the recovery resources in a recovery resource group. The cleanup process deletes
this resource group and removes all references to the resources from the catalog.
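Under the hood, the bulk of this step is simply deleting the recovery resource group; the script then removes the
recovery entries from the catalog. A hedged sketch of the deletion (the resource group name is a placeholder):

# Delete the recovery resource group and everything it contains.
# -Force suppresses the confirmation prompt, so use it with care.
Remove-AzResourceGroup -Name "wingtip-dpt-<user>-recovery" -Force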
1. In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
RestoreFromBackup\Demo-RestoreFromBackup.ps1 script, set:
$DemoScenario = 6: Delete obsolete resources from the recovery region.
2. To run the script, select F5.
After the cleanup script finishes, the application is back where it started. At this point, you can run the script again or
try out other tutorials.

Designing the application to ensure that the app and the database
are co-located
The application is designed to always connect from an instance in the same region as the tenant's database. This
design reduces latency between the application and the database. This optimization assumes the app-to-
database interaction is chattier than the user-to-app interaction.
Tenant databases might be spread across recovery and original regions for some time during repatriation. For
each database, the app looks up the region in which the database is located by doing a DNS lookup on the
tenant server name. The server name is an alias. The aliased server name contains the region name. If the
application isn't in the same region as the database, it redirects to the instance in the same region as the server.
Redirecting to the instance in the same region as the database minimizes latency between the app and the
database.
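The region lookup relies on the server name being a DNS alias whose CNAME chain resolves to a region-specific
endpoint. A minimal sketch of that lookup in PowerShell on Windows (the tenant server name is a placeholder; the
sample app does the equivalent lookup in C#):

# Follow the CNAME chain for the tenant server alias; the resolved names include the
# region of the server that currently hosts the database.
Resolve-DnsName -Name "tenants1-dpt-<user>.database.windows.net" -Type CNAME |
    Select-Object Name, NameHost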

Next steps
In this tutorial, you learned how to:
Use the tenant catalog to hold periodically refreshed configuration information, which allows a mirror image
recovery environment to be created in another region.
Recover databases into the recovery region by using geo-restore.
Update the tenant catalog to reflect restored tenant database locations.
Use a DNS alias to enable an application to connect to the tenant catalog throughout without
reconfiguration.
Use geo-replication to repatriate recovered databases to their original region after an outage is resolved.
Try the Disaster recovery for a multitenant SaaS application using database geo-replication tutorial to learn how
to use geo-replication to dramatically reduce the time needed to recover a large-scale multitenant application.

Additional resources
Additional tutorials that build upon the Wingtip SaaS application
Disaster recovery for a multi-tenant SaaS
application using database geo-replication
7/12/2022 • 17 minutes to read

APPLIES TO: Azure SQL Database


In this tutorial, you explore a full disaster recovery scenario for a multi-tenant SaaS application implemented
using the database-per-tenant model. To protect the app from an outage, you use geo-replication to create
replicas for the catalog and tenant databases in an alternate recovery region. If an outage occurs, you quickly fail
over to these replicas to resume normal business operations. On failover, the databases in the original region
become secondary replicas of the databases in the recovery region. Once these replicas come back online they
automatically catch up to the state of the databases in the recovery region. After the outage is resolved, you fail
back to the databases in the original production region.
This tutorial explores both the failover and failback workflows. You'll learn how to:
Sync database and elastic pool configuration info into the tenant catalog
Set up a recovery environment in an alternate region, comprising application, servers, and pools
Use geo-replication to replicate the catalog and tenant databases to the recovery region
Fail over the application and catalog and tenant databases to the recovery region
Later, fail over the application, catalog and tenant databases back to the original region after the outage is
resolved
Update the catalog as each tenant database is failed over to track the primary location of each tenant's
database
Ensure the application and primary tenant database are always colocated in the same Azure region to reduce
latency
Before starting this tutorial, make sure the following prerequisites are completed:
The Wingtip Tickets SaaS database per tenant app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS database per tenant application
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell

Introduction to the geo-replication recovery pattern


Disaster recovery (DR) is an important consideration for many applications, whether for compliance reasons or
business continuity. Should there be a prolonged service outage, a well-prepared DR plan can minimize
business disruption. Using geo-replication provides the lowest RPO and RTO by maintaining database replicas in
a recovery region that can be failed over to at short notice.
A DR plan based on geo-replication comprises three distinct parts:
Set-up - creation and maintenance of the recovery environment
Recovery - failover of the app and databases to the recovery environment if an outage occurs,
Repatriation - failover of the app and databases back to the original region once the outage is resolved
All parts have to be considered carefully, especially if operating at scale. Overall, the plan must accomplish
several goals:
Setup
Establish and maintain a mirror-image environment in the recovery region. Creating the elastic pools
and replicating any databases in this recovery environment reserves capacity in the recovery region.
Maintaining this environment includes replicating new tenant databases as they are provisioned.
Recovery
Where a scaled-down recovery environment is used to minimize day-to-day costs, pools and
databases must be scaled up to acquire full operational capacity in the recovery region
Enable new tenant provisioning in the recovery region as soon as possible
Be optimized for restoring tenants in priority order
Be optimized for getting tenants online as fast as possible by doing steps in parallel where practical
Be resilient to failure, restartable, and idempotent
Allow the process to be canceled in mid-flight if the original region comes back online.
Repatriation
Fail over databases from the recovery region to replicas in the original region with minimal impact to
tenants: no data loss and minimum period off-line per tenant.
In this tutorial, these challenges are addressed using features of Azure SQL Database and the Azure platform:
Azure Resource Manager templates, to reserve all needed capacity as quickly as possible. Azure Resource
Manager templates are used to provision a mirror image of the production servers and elastic pools in the
recovery region.
Geo-replication, to create asynchronously replicated read-only secondaries for all databases. During an
outage, you fail over to the replicas in the recovery region. After the outage is resolved, you fail back to the
databases in the original region with no data loss.
Asynchronous failover operations sent in tenant-priority order, to minimize failover time for large numbers
of databases.
Shard management recovery features, to change database entries in the catalog during recovery and
repatriation. These features allow the app to connect to tenant databases regardless of location without
reconfiguring the app.
SQL server DNS aliases, to enable seamless provisioning of new tenants regardless of which region the app
is operating in. DNS aliases are also used to allow the catalog sync process to connect to the active catalog
regardless of its location.

Get the disaster recovery scripts


IMPORTANT
Like all the Wingtip Tickets management scripts, the DR scripts are sample quality and are not to be used in production.

The recovery scripts used in this tutorial and Wingtip application source code are available in the Wingtip
Tickets SaaS database per tenant GitHub repository. Check out the general guidance for steps to download and
unblock the Wingtip Tickets management scripts.

Tutorial overview
In this tutorial, you first use geo-replication to create replicas of the Wingtip Tickets application and its databases
in a different region. Then, you fail over to this region to simulate recovering from an outage. When complete,
the application is fully functional in the recovery region.
Later, in a separate repatriation step, you fail over the catalog and tenant databases in the recovery region to the
original region. The application and databases stay available throughout repatriation. When complete, the
application is fully functional in the original region.

NOTE
The application is recovered into the paired region of the region in which the application is deployed. For more
information, see Azure paired regions.

Review the healthy state of the application


Before you start the recovery process, review the normal healthy state of the application.
1. In your web browser, open the Wingtip Tickets Events Hub (http://events.wingtip-dpt.
<user>.trafficmanager.net - replace <user> with your deployment's user value).
Scroll to the bottom of the page and notice the catalog server name and location in the footer. The
location is the region in which you deployed the app. TIP: Hover the mouse over the location to
enlarge the display.
2. Click on the Contoso Concert Hall tenant and open its event page.
In the footer, notice the tenant server name. The location will be the same as the catalog server's
location.
3. In the Azure portal, open the resource group in which the app is deployed
Notice the region in which the servers are deployed.

Sync tenant configuration into catalog


In this task, you start a process that syncs the configuration of the servers, elastic pools, and databases into the
tenant catalog. The process keeps this information up-to-date in the catalog. The process works with the active
catalog, whether in the original region or in the recovery region. The configuration information is used as part
of the recovery process to ensure the recovery environment is consistent with the original environment, and
then later during repatriation to ensure the original region is made consistent with any changes made in the
recovery environment. The catalog is also used to keep track of the recovery state of tenant resources.

IMPORTANT
For simplicity, the sync process and other long running recovery and repatriation processes are implemented in these
tutorials as local PowerShell jobs or sessions that run under your client user login. The authentication tokens issued when
you login will expire after several hours and the jobs will then fail. In a production scenario, long-running processes should
be implemented as reliable Azure services of some kind, running under a service principal. See Use Azure PowerShell to
create a service principal with a certificate.

1. In the PowerShell ISE, open the ...\Learning Modules\UserConfig.psm1 file. Replace <resourcegroup> and
<user> on lines 10 and 11 with the value used when you deployed the app. Save the file!

2. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set:
$DemoScenario = 1, Start a background job that syncs tenant server and pool configuration info
into the catalog
3. Press F5 to run the sync script. A new PowerShell session is opened to sync the configuration of tenant
resources.

Leave the PowerShell window running in the background and continue with the rest of the tutorial.

NOTE
The sync process connects to the catalog via a DNS alias. This alias is modified during restore and repatriation to point to
the active catalog. The sync process keeps the catalog up-to-date with any database or pool configuration changes made
in the recovery region. During repatriation, these changes are applied to the equivalent resources in the original region.

Create secondary database replicas in the recovery region


In this task, you start a process that deploys a duplicate app instance and replicates the catalog and all tenant
databases to a recovery region.

NOTE
This tutorial adds geo-replication protection to the Wingtip Tickets sample application. In a production scenario for an
application that uses geo-replication, each tenant would be provisioned with a geo-replicated database from the outset.
See Designing highly available services using Azure SQL Database

1. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set the following values:
$DemoScenario = 2 , Create mirror image recovery environment and replicate catalog and tenant
databases
2. Press F5 to run the script. A new PowerShell session is opened to create the replicas.

Review the normal application state


At this point, the application is running normally in the original region and is now protected by geo-replication.
Read-only secondary replicas exist in the recovery region for all databases.
1. In the Azure portal, look at your resource groups and note that a resource group has been created with -
recovery suffix in the recovery region.
2. Explore the resources in the recovery resource group.
3. Click the Contoso Concert Hall database on the tenants1-dpt-<user>-recovery server. Click Geo-Replication
on the left side.

In the Azure regions map, note the geo-replication link between the primary in the original region and the
secondary in the recovery region.
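You can also confirm the replication links from PowerShell rather than the portal. A minimal sketch, assuming
placeholder resource group and server names:

# List the geo-replication link for the Contoso Concert Hall database, including the
# partner server and the replication role of each end.
Get-AzSqlDatabaseReplicationLink -ResourceGroupName "wingtip-dpt-<user>" `
    -ServerName "tenants1-dpt-<user>" -DatabaseName "contosoconcerthall" `
    -PartnerResourceGroupName "wingtip-dpt-<user>-recovery"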

Fail over the application into the recovery region


Geo-replication recovery process overview
The recovery script performs the following tasks:
1. Disables the Traffic Manager endpoint for the web app in the original region. Disabling the endpoint
prevents users from connecting to the app in an invalid state should the original region come online
during recovery.
2. Uses a force failover of the catalog database in the recovery region to make it the primary database, and
updates the activecatalog alias to point to the recovery catalog server.
3. Updates the newtenant alias to point to the tenant server in the recovery region. Changing this alias
ensures that the databases for any new tenants are provisioned in the recovery region.
4. Marks all existing tenants in the recovery catalog as offline to prevent access to tenant databases before
they are failed over.
5. Updates the configuration of all elastic pools and replicated single databases in the recovery region to
mirror their configuration in the original region. (This task is only needed if pools or replicated databases
in the recovery environment are scaled down during normal operations to reduce costs).
6. Enables the Traffic Manager endpoint for the web app in the recovery region. Enabling this endpoint
allows the application to provision new tenants. At this stage, existing tenants are still offline.
7. Submits batches of requests to force fail over databases in priority order (a single-database sketch follows this list).
Batches are organized so that databases are failed over in parallel across all pools.
Failover requests are submitted using asynchronous operations so they are submitted quickly and
multiple requests can be processed concurrently.

NOTE
In an outage scenario, the primary databases in the original region are offline. Force fail over on the secondary
breaks the connection to the primary without trying to apply any residual queued transactions. In a DR drill
scenario like this tutorial, if there is any update activity at the time of failover there could be some data loss. Later,
during repatriation, when you fail over databases in the recovery region back to the original region, a normal
failover is used to ensure there is no data loss.

8. Monitors the service to determine when databases have been failed over. Once a tenant database is failed
over, it updates the catalog to record the recovery state of the tenant database and mark the tenant as
online.
Tenant databases can be accessed by the application as soon as they're marked online in the catalog.
A sum of rowversion values in the tenant database is stored in the catalog. This value acts as a
fingerprint that allows the repatriation process to determine if the database has been updated in the
recovery region.
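The core call the failover jobs make for each database is a forced failover on the secondary in the recovery region.
A simplified sketch with placeholder names (the scripts batch these calls and update the catalog afterward):

# Run against the secondary in the recovery region. -AllowDataLoss performs a forced
# failover: the secondary becomes the primary without synchronizing with the unreachable
# primary, so recent transactions may be lost.
Set-AzSqlDatabaseSecondary -ResourceGroupName "wingtip-dpt-<user>-recovery" `
    -ServerName "tenants1-dpt-<user>-recovery" -DatabaseName "contosoconcerthall" `
    -PartnerResourceGroupName "wingtip-dpt-<user>" -Failover -AllowDataLoss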
Run the script to fail over to the recovery region
Now imagine there is an outage in the region in which the application is deployed and run the recovery script:
1. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set the following values:
$DemoScenario = 3 , Recover the app into a recovery region by failing over to replicas
2. Press F5 to run the script.
The script opens in a new PowerShell window and then starts a series of PowerShell jobs that run in
parallel. These jobs fail over tenant databases to the recovery region.
The recovery region is the paired region associated with the Azure region in which you deployed the
application. For more information, see Azure paired regions.
3. Monitor the status of the recovery process in the PowerShell window.
NOTE
To explore the code for the recovery jobs, review the PowerShell scripts in the ...\Learning Modules\Business Continuity
and Disaster Recovery\DR-FailoverToReplica\RecoveryJobs folder.

Review the application state during recovery


While the application endpoint is disabled in Traffic Manager, the application is unavailable. After the catalog is
failed over to the recovery region and all the tenants marked offline, the application is brought back online.
Although the application is available, each tenant appears offline in the events hub until its database is failed
over. It's important to design your application to handle offline tenant databases.
1. Promptly after the catalog database has been recovered, refresh the Wingtip Tickets Events Hub in your web
browser.
In the footer, notice that the catalog server name now has a -recovery suffix and is located in the
recovery region.
Notice that tenants that are not yet restored are marked as offline and are not selectable.

NOTE
With only a few databases to recover, you may not be able to refresh the browser before recovery has
completed, so you may not see the tenants while they are offline.

If you open an offline tenant's Events page directly, it displays a 'tenant offline' notification. For
example, if Contoso Concert Hall is offline, try to open http://events.wingtip-dpt.
<user>.trafficmanager.net/contosoconcerthall
Provision a new tenant in the recovery region
Even before all the existing tenant databases have failed over, you can provision new tenants in the recovery
region.
1. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script and set the following property:
$DemoScenario = 4 , Provision a new tenant in the recovery region
2. Press F5 to run the script and provision the new tenant.
3. The Hawthorn Hall events page opens in the browser when it completes. Note from the footer that the
Hawthorn Hall database is provisioned in the recovery region.

4. In the browser, refresh the Wingtip Tickets Events Hub page to see Hawthorn Hall included.
If you provisioned Hawthorn Hall without waiting for the other tenants to restore, other tenants may
still be offline.

Review the recovered state of the application


When the recovery process completes, the application and all tenants are fully functional in the recovery region.
1. Once the display in the PowerShell console window indicates all the tenants are recovered, refresh the
Events Hub. The tenants will all appear online, including the new tenant, Hawthorn Hall.

2. In the Azure portal, open the list of resource groups.


Notice the resource group that you deployed, plus the recovery resource group, with the -recovery
suffix. The recovery resource group contains all the resources created during the recovery process,
plus new resources created during the outage.
3. Open the recovery resource group and notice the following items:
The recovery versions of the catalog and tenants1 servers, with -recovery suffix. The restored
catalog and tenant databases on these servers all have the names used in the original region.
The tenants2-dpt-<user>-recovery SQL server. This server is used for provisioning new tenants
during the outage.
The App Service named events-wingtip-dpt-<recoveryregion>-<user>, which is the recovery
instance of the Events app.
4. Open the tenants2-dpt-<user>-recovery SQL server. Notice that it contains the database hawthornhall and
the elastic pool Pool1. The hawthornhall database is configured as an elastic database in the Pool1 elastic
pool.
5. Navigate back to the resource group and click on the Contoso Concert Hall database on the tenants1-dpt-
<user>-recovery server. Click on Geo-Replication on the left side.

Change tenant data


In this task, you update one of the tenant databases.
1. In your browser, find the events list for the Contoso Concert Hall and note the last event name.
2. In the PowerShell ISE, in the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script, set the following value:
$DemoScenario = 5, Delete an event from a tenant in the recovery region
3. Press F5 to execute the script
4. Refresh the Contoso Concert Hall events page (http://events.wingtip-dpt.
<user>.trafficmanager.net/contosoconcerthall - substitute <user> with your deployment's user value) and
notice that the last event has been deleted.

Repatriate the application to its original production region


This task repatriates the application to its original region. In a real scenario, you would initiate repatriation when
the outage is resolved.
Repatriation process overview
The repatriation process:
1. Cancels any outstanding or in-flight database restore requests.
2. Updates the newtenant alias to point to the tenants' server in the original region. Changing this alias ensures
that the databases for any new tenants will now be provisioned in the original region.
3. Seeds any changed tenant data to the original region
4. Fails over tenant databases in priority order.
Failover effectively moves the database to the original region. When the database fails over, any open
connections are dropped and the database is unavailable for a few seconds. Applications should be written with
retry logic to ensure they connect again. Although this brief disconnect is often not noticed, you may choose to
repatriate databases out of business hours.
Run the repatriation script
Now let's imagine the outage is resolved and run the repatriation script.
1. In the PowerShell ISE, open the ...\Learning Modules\Business Continuity and Disaster Recovery\DR-
FailoverToReplica\Demo-FailoverToReplica.ps1 script.
2. Verify that the Catalog Sync process is still running in its PowerShell instance. If necessary, restart it by
setting:
$DemoScenario = 1 , Start synchronizing tenant server, pool, and database configuration info into
the catalog
Press F5 to run the script.
3. Then to start the repatriation process, set:
$DemoScenario = 6 , Repatriate the app into its original region
Press F5 to run the recovery script in a new PowerShell window. Repatriation will take several minutes
and can be monitored in the PowerShell window.
4. While the script is running, refresh the Events Hub page (http://events.wingtip-dpt.
<user>.trafficmanager.net)
Notice that all the tenants are online and accessible throughout this process.
5. After the repatriation is complete, refresh the Events hub and open the events page for Hawthorn Hall.
Notice that this database has been repatriated to the original region.

Designing the application to ensure app and database are colocated


The application is designed so that it always connects from an instance in the same region as the tenant
database. This design reduces latency between the application and the database. This optimization assumes the
app-to-database interaction is chattier than the user-to-app interaction.
Tenant databases may be spread across recovery and original regions for some time during repatriation. For
each database, the app looks up the region in which the database is located by doing a DNS lookup on the
tenant server name. In SQL Database, the server name is an alias. The aliased server name contains the region
name. If the application isn't in the same region as the database, it redirects to the instance in the same region as
the server. Redirecting to the instance in the same region as the database minimizes latency between the app
and the database.

Next steps
In this tutorial you learned how to:
Sync database and elastic pool configuration info into the tenant catalog
Set up a recovery environment in an alternate region, comprising application, servers, and pools
Use geo-replication to replicate the catalog and tenant databases to the recovery region
Fail over the application and catalog and tenant databases to the recovery region
Fail back the application, catalog and tenant databases to the original region after the outage is resolved
You can learn more about the technologies Azure SQL Database provides to enable business continuity in the
Business Continuity Overview documentation.

Additional resources
Additional tutorials that build upon the Wingtip SaaS application
Deploy and explore a sharded multi-tenant
application
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Database


In this tutorial, you deploy and explore a sample multi-tenant SaaS application that is named Wingtip Tickets.
The Wingtip Tickets app is designed to showcase features of Azure SQL Database that simplify the
implementation of SaaS scenarios.
This implementation of the Wingtip Tickets app uses a sharded multi-tenant database pattern. The sharding is
by tenant identifier. Tenant data is distributed to a particular database according to the tenant identifier values.
This database pattern allows you to store one or more tenants in each shard or database. You can optimize for
lowest cost by having each database be shared by multiple tenants. Or you can optimize for isolation by having
each database store only one tenant. Your optimization choice can be made independently for each specific
tenant. Your choice can be made when the tenant is first stored, or you can change your mind later. The
application is designed to work well either way.

App deploys quickly


The app runs in the Azure cloud and uses Azure SQL Database. The deployment section that follows provides
the blue Deploy to Azure button. When the button is pressed, the app is fully deployed to your Azure
subscription within five minutes. You have full access to work with the individual application components.
The application is deployed with data for three sample tenants. The tenants are stored together in one multi-
tenant database.
Anyone can download the C# and PowerShell source code for Wingtip Tickets from its GitHub repository.

Learn in this tutorial


How to deploy the Wingtip Tickets SaaS application.
Where to get the application source code, and management scripts.
About the servers and databases that make up the app.
How tenants are mapped to their data with the catalog.
How to provision a new tenant.
How to monitor tenant activity in the app.
A series of related tutorials is available that build upon this initial deployment. The tutorials explore a range of
SaaS design and management patterns. When you work through the tutorials, you are encouraged to step
through the provided scripts to see how the different SaaS patterns are implemented.

Prerequisites
To complete this tutorial, make sure the following prerequisites are completed:
The latest Azure PowerShell is installed. For details, see Getting started with Azure PowerShell.

Deploy the Wingtip Tickets app


Plan the names
In the steps of this section, you provide a user value that is used to ensure resource names are globally unique,
and a name for the resource group which contains all the resources created by a deployment of the app. For a
person named Ann Finley, we suggest:
User: af1 (Their initials, plus a digit. Use a different value (e.g. af2) if you deploy the app a second time.)
Resource group: wingtip-mt-af1 (wingtip-mt indicates this is the sharded multi-tenant app. Appending the
user name af1 correlates the resource group name with the names of the resources it contains.)
Choose your names now, and write them down.
Steps
1. Click the following blue Deploy to Azure button.
It opens the Azure portal with the Wingtip Tickets SaaS deployment template.

2. Enter the required parameter values for the deployment.

IMPORTANT
For this demonstration, do not use any pre-existing resource groups, servers, or pools. Instead, choose Create a
new resource group . Delete this resource group when you are finished with the application to stop related
billing. Do not use this application, or any resources it creates, for production. Some aspects of authentication, and
the server firewall settings, are intentionally insecure in the app to facilitate the demonstration.

For Resource group - Select Create new , and then provide a Name for the resource group (case
sensitive).
Select a Location from the drop-down list.
For User - We recommend that you choose a short User value.
3. Deploy the application .
Click to agree to the terms and conditions.
Click Purchase .
4. Monitor deployment status by clicking Notifications , which is the bell icon to the right of the search box.
Deploying the Wingtip app takes approximately five minutes.

Download and unblock the management scripts


While the application is deploying, download the application source code and management scripts.
NOTE
Executable contents (scripts, DLLs) may be blocked by Windows when zip files are downloaded from an external source
and extracted. When extracting the scripts from a zip file, use the following steps to unblock the .zip file before extracting.
By unblocking the .zip file, you ensure the scripts are allowed to run.

1. Browse to the WingtipTicketsSaaS-MultiTenantDb GitHub repo.


2. Click Clone or download .
3. Click Download ZIP and save the file.
4. Right-click the WingtipTicketsSaaS-MultiTenantDb-master.zip file and select Properties.
5. On the General tab, select Unblock , and click Apply .
6. Click OK .
7. Extract the files.
The scripts are located in the ..\WingtipTicketsSaaS-MultiTenantDb-master\Learning Modules\ folder.

Update the configuration file for this deployment


Before running any scripts, set the resource group and user values in UserConfig.psm1 . Set these variables to
the same values you set during deployment.
1. Open ...\Learning Modules\UserConfig.psm1 in the PowerShell ISE.
2. Update ResourceGroupName and Name with the specific values for your deployment (on lines 10 and 11
only).
3. Save the changes.
The values set in this file are used by all the scripts, so it is important they are accurate. If you redeploy the app,
you must choose different values for User and Resource Group. Then update the UserConfig.psm1 file again
with the new values.
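
If you want to sanity-check the file, the two values look similar to the following sketch. This is illustrative only;
the exact layout of UserConfig.psm1 may differ, and the names shown are just the example values suggested in the
deployment section.

# Illustrative only - use the names you chose when you deployed the app.
$ResourceGroupName = "wingtip-mt-af1"   # resource group created by the deployment
$Name = "af1"                           # the User value entered in the deployment template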

Run the application


In the Wingtip app, the tenants are venues. A venue can be a concert hall, a sports club, or any other location that
hosts events. The venues register in Wingtip as customers, and a tenant identifier is generated for each venue.
Each venue lists its upcoming events in Wingtip, so the public can buy tickets to the events.
Each venue gets a personalized web app to list their events and sell tickets. Each web app is independent and
isolated from other tenants. Internally in Azure SQL Database, the data for each tenant is stored in a
sharded multi-tenant database by default. All data is tagged with the tenant identifier.
A central Events Hub webpage provides a list of links to the tenants in your particular deployment. Use the
following steps to experience the Events Hub webpage and an individual web app:
1. Open the Events Hub in your web browser:
http://events.wingtip-mt.<user>.trafficmanager.net (Replace <user> with your deployment's user
value.)
2. Click Fabrikam Jazz Club in the Events Hub .

Azure Traffic Manager


To control the distribution of incoming requests, the Wingtip app uses Azure Traffic Manager. The events page
for each tenant includes the tenant name in its URL. Each URL also includes your specific User value, and
follows this format:
http://events.wingtip-mt.<user>.trafficmanager.net/fabrikamjazzclub
A request to an events page is resolved by using the following steps:
1. The events app parses the tenant name from the URL. The tenant name is fabrikamjazzclub in the preceding
example URL.
2. The app then hashes the tenant name to create a key to access a catalog using shard map management.
3. The app finds the key in the catalog, and obtains the corresponding location of the tenant's database.
4. The app uses the location info to find and access the one database that contains all the data for the tenant.
Events Hub
1. The Events Hub lists all the tenants that are registered in the catalog, and their venues.
2. The Events Hub uses extended metadata in the catalog to retrieve the tenant's name associated with each
mapping to construct the URLs.
In a production environment, you typically create a CNAME DNS record to point a company internet domain to
the traffic manager profile.

Start generating load on the tenant databases


Now that the app is deployed, let's put it to work! The Demo-LoadGenerator PowerShell script starts a workload
running for each tenant. The real-world load on many SaaS apps is typically sporadic and unpredictable. To
simulate this type of load, the generator produces a load distributed across all tenants. The load includes
randomized bursts on each tenant occurring at randomized intervals. It takes several minutes for the load
pattern to emerge, so it's best to let the generator run for at least three or four minutes before monitoring the
load.
1. In the PowerShell ISE, open the ...\Learning Modules\Utilities\Demo-LoadGenerator.ps1 script.
2. Press F5 to run the script and start the load generator (leave the default parameter values for now).
The Demo-LoadGenerator.ps1 script opens another PowerShell session where the load generator runs. The load
generator runs in this session as a foreground task that invokes background load-generation jobs, one for each
tenant.
After the foreground task starts, it remains in a job-invoking state. The task starts additional background jobs
for any new tenants that are subsequently provisioned.
Closing the PowerShell session stops all jobs.
You might want to restart the load generator session to use different parameter values. If so, close the
PowerShell generation session, and then rerun the Demo-LoadGenerator.ps1.

Provision a new tenant into the sharded database


The initial deployment includes three sample tenants in the Tenants1 database. Let's create another tenant and
observe its effects on the deployed application. In this step, you press one key to create a new tenant:
1. Open ...\Learning Modules\Provision and Catalog\Demo-ProvisionTenants.ps1 in the PowerShell ISE.
2. Press F5 (not F8 ) to run the script (leave the default values for now).

NOTE
You must run the PowerShell scripts only by pressing the F5 key, not by pressing F8 to run a selected part of the
script. The problem with F8 is that the $PSScriptRoot variable is not evaluated. This variable is needed by many
scripts to navigate folders, invoke other scripts, or import modules.
The new Red Maple Racing tenant is added to the Tenants1 database and registered in the catalog. The new
tenant's ticket-selling Events site opens in your browser:

Refresh the Events Hub , and the new tenant now appears in the list.

Provision a new tenant in its own database


The sharded multi-tenant model allows you to choose whether to provision a new tenant into a database that
contains other tenants, or into a database of its own. A tenant isolated in its own database enjoys the following
benefits:
The performance of the tenant's database can be managed without having to compromise with the needs
of other tenants.
If necessary, the database can be restored to an earlier point in time, because no other tenants would be
affected.
You might choose to put free-trial customers, or economy customers, into multi-tenant databases. You could put
each premium tenant into its own dedicated database. If you create lots of databases that contain only one
tenant, you can manage them all collectively in an elastic pool to optimize resource costs.
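
If you later decide to host many single-tenant databases this way, the pool and the database moves can be
scripted. The following Az PowerShell sketch creates a Standard pool and moves an existing single-tenant
database into it; the resource group, server, pool name, and DTU sizes are placeholders for your deployment.

# Sketch only: resource group, server, pool, and sizes are placeholders.
New-AzSqlElasticPool -ResourceGroupName "wingtip-mt-<user>" `
    -ServerName "tenants1-mt-<user>" `
    -ElasticPoolName "SingleTenantPool" `
    -Edition "Standard" -Dtu 200 -DatabaseDtuMin 0 -DatabaseDtuMax 100

# Move an existing single-tenant database (for example, salixsalsa) into the pool.
Set-AzSqlDatabase -ResourceGroupName "wingtip-mt-<user>" `
    -ServerName "tenants1-mt-<user>" `
    -DatabaseName "salixsalsa" `
    -ElasticPoolName "SingleTenantPool"

Because billing is for the pool rather than for each database, many lightly used single-tenant databases can
share the pool's resources at a predictable cost.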
Next, we provision another tenant, this time in its own database:
1. In ...\Learning Modules\Provision and Catalog\Demo-ProvisionTenants.ps1, modify $TenantName to
Salix Salsa , $VenueType to dance and $Scenario to 2 .
2. Press F5 to run the script again.
This F5 press provisions the new tenant in a separate database. The database and the tenant are
registered in the catalog. Then the browser opens to the Events page of the tenant.
Scroll to the bottom of the page. There in the banner you see the database name in which the tenant
data is stored.
3. Refresh the Events Hub and the two new tenants now appear in the list.

Explore the servers and tenant databases


Now we look at some of the resources that were deployed:
1. In the Azure portal, browse to the list of resource groups. Open the resource group you created when you
deployed the application.
2. Click the catalog-mt-<user> server. The catalog server contains two databases named tenantcatalog and
basetenantdb. The basetenantdb database is an empty template database. It is copied to create a new
tenant database, whether used for many tenants or just one tenant.

3. Go back to the resource group and select the tenants1-mt server that holds the tenant databases.
The tenants1 database is a multi-tenant database in which the original three tenants, plus the first
tenant you added, are stored. It is configured as a 50 DTU Standard database.
The salixsalsa database holds the Salix Salsa dance venue as its only tenant. It is configured as a
Standard edition database with 50 DTUs by default.
Monitor the performance of the database
If the load generator has been running for several minutes, enough telemetry is available to look at the database
monitoring capabilities built into the Azure portal.
1. Browse to the tenants1-mt-<user> server, and click tenants1 to view resource utilization for the
database that has four tenants in it. Each tenant is subject to a sporadic heavy load from the load
generator:

The DTU utilization chart nicely illustrates how a multi-tenant database can support an unpredictable
workload across many tenants. In this case, the load generator is applying a sporadic load of roughly 30
DTUs to each tenant. This load equates to 60% utilization of a 50 DTU database. Peaks that exceed 60%
are the result of load being applied to more than one tenant at the same time.
2. Browse to the tenants1-mt-<user> server, and click the salixsalsa database. You can see the resource
utilization on this database that contains only one tenant.

The load generator is applying a similar load to each tenant, regardless of which database each tenant is in. With
only one tenant in the salixsalsa database, you can see that the database could sustain a much higher load than
the database with several tenants.
Resource allocations vary by workload
Sometimes a multi-tenant database requires more resources for good performance than does a single-tenant
database, but not always. The optimal allocation of resources depends on the particular workload characteristics
for the tenants in your system.
The workloads generated by the load generator script are for illustration purposes only.

Additional resources
To learn about multi-tenant SaaS applications, see Design patterns for multi-tenant SaaS applications.
To learn about elastic pools, see:
Elastic pools help you manage and scale multiple databases in Azure SQL Database
Scaling out with Azure SQL Database

Next steps
In this tutorial you learned:
How to deploy the Wingtip Tickets SaaS Multi-tenant Database application.
About the servers and databases that make up the app.
How tenants are mapped to their data with the catalog.
How to provision new tenants into a multi-tenant database and into a single-tenant database.
How to view pool utilization to monitor tenant activity.
How to delete sample resources to stop related billing.
Now try the Provision and catalog tutorial.
Provision and catalog new tenants in a SaaS
application using a sharded multi-tenant Azure SQL
Database
7/12/2022 • 12 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This article covers the provisioning and cataloging of new tenants, in a multi-tenant sharded database model or
pattern.
This article has two major parts:
Conceptual discussion of the provisioning and cataloging of new tenants.
Tutorial that highlights the PowerShell script code that accomplishes the provisioning and cataloging.
The tutorial uses the Wingtip Tickets SaaS application, adapted to the multi-tenant sharded database
pattern.

Database pattern
This section, plus a few more that follow, discuss the concepts of the multi-tenant sharded database pattern.
In this multi-tenant sharded model, the table schemas inside each database include a tenant key in the primary
key of tables that store tenant data. The tenant key enables each individual database to store 0, 1, or many
tenants. The use of sharded databases makes it easy for the application system to support a very large number
of tenants. All the data for any one tenant is stored in one database. The large number of tenants are distributed
across the many sharded databases. A catalog database stores the mapping of each tenant to its database.
Isolation versus lower cost
A tenant that has a database all to itself enjoys the benefits of isolation. The tenant can have the database
restored to an earlier date without being restricted by the impact on other tenants. Database performance can
be tuned to optimize for the one tenant, again without having to compromise with other tenants. The problem is
that isolation costs more than it costs to share a database with other tenants.
When a new tenant is provisioned, it can share a database with other tenants, or it can be placed into its own
new database. Later you can change your mind and move the database to the other situation.
Databases with multiple tenants and single tenants are mixed in the same SaaS application, to optimize cost or
isolation for each tenant.
Tenant catalog pattern
When you have two or more databases that each contain at least one tenant, the application must have a way to
discover which database stores the tenant of current interest. A catalog database stores this mapping.
Tenant key
For each tenant, the Wingtip application can derive a unique key, which is the tenant key. The app extracts the
tenant name from the webpage URL. The app hashes the name to obtain the key. The app uses the key to access
the catalog. The catalog cross-references information about the database in which the tenant is stored. The app
uses the database info to connect. Other tenant key schemes can also be used.
Using a catalog allows the name or location of a tenant database to be changed after provisioning without
disrupting the application. In a multi-tenant database model, the catalog accommodates moving a tenant
between databases.
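
To make the idea concrete, the following PowerShell sketch derives an integer key by normalizing a venue name
and hashing it. This is an illustration only: the actual hash used by the Wingtip application and scripts may
differ, and any scheme that produces a stable, unique key per tenant works.

# Sketch only: the real Wingtip hash function may differ.
function Get-TenantKey {
    param ([string]$TenantName)
    # Normalize the name the same way the URL does: remove spaces and lowercase it.
    $normalized = $TenantName.Replace(" ", "").ToLower()   # "Fabrikam Jazz Club" -> "fabrikamjazzclub"
    $md5 = [System.Security.Cryptography.MD5]::Create()
    $bytes = $md5.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($normalized))
    # Use the first four bytes of the hash as a 32-bit integer tenant key.
    return [System.BitConverter]::ToInt32($bytes, 0)
}

Get-TenantKey -TenantName "Fabrikam Jazz Club"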
Tenant metadata beyond location
The catalog can also indicate whether a tenant is offline for maintenance or other actions. And the catalog can be
extended to store additional tenant or database metadata, such as the following items:
The service tier or edition of a database.
The version of the database schema.
The tenant name and its SLA (service level agreement).
Information to enable application management, customer support, or devops processes.
The catalog can also be used to enable cross-tenant reporting, schema management, and data extract for
analytics purposes.
Elastic Database Client Library
In Wingtip, the catalog is implemented in the tenantcatalog database. The tenantcatalog is created using the
Shard Management features of the Elastic Database Client Library (EDCL). The library enables an application to
create, manage, and use a shard map that is stored in a database. A shard map cross-references the tenant key
with its shard, meaning its sharded database.
During tenant provisioning, EDCL functions can be used from applications or PowerShell scripts to create the
entries in the shard map. Later the EDCL functions can be used to connect to the correct database. The EDCL
caches connection information to minimize the traffic on the catalog database and speed up the process of
connecting.
IMPORTANT
Do not edit the data in the catalog database through direct access! Direct updates are not supported due to the high risk
of data corruption. Instead, edit the mapping data by using EDCL APIs only.

Tenant provisioning pattern


Checklist
When you want to provision a new tenant into an existing shared database, you must ask the following
questions about the shared database:
Does it have enough space left for the new tenant?
Does it have tables with the necessary reference data for the new tenant, or can the data be added?
Does it have the appropriate variation of the base schema for the new tenant?
Is it in the appropriate geographic location close to the new tenant?
Is it at the right service tier for the new tenant?
When you want the new tenant to be isolated in its own database, you can create it to meet the specifications for
the tenant.
After the provisioning is complete, you must register the tenant in the catalog. Finally, the tenant mapping can
be added to reference the appropriate shard.
Template database
Provision the database by executing SQL scripts, deploying a bacpac, or copying a template database. The
Wingtip apps copy a template database to create new tenant databases.
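
As a simple illustration of the copy approach, a template database can be copied to a new tenant database with a
single Az PowerShell command. This is a sketch only; the Wingtip scripts drive the copy through a Resource
Manager template, and the resource group, server, and database names below are placeholders for your deployment.

# Sketch only: copy the basetenantdb template from the catalog server to the tenants server.
New-AzSqlDatabaseCopy -ResourceGroupName "wingtip-mt-<user>" `
    -ServerName "catalog-mt-<user>" -DatabaseName "basetenantdb" `
    -CopyResourceGroupName "wingtip-mt-<user>" `
    -CopyServerName "tenants1-mt-<user>" -CopyDatabaseName "salixsalsa"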
Like any application, Wingtip will evolve over time. At times, Wingtip will require changes to the database.
Changes may include the following items:
New or changed schema.
New or changed reference data.
Routine database maintenance tasks to ensure optimal app performance.
With a SaaS application, these changes need to be deployed in a coordinated manner across a potentially
massive fleet of tenant databases. For these changes to be in future tenant databases, they need to be
incorporated into the provisioning process. This challenge is explored further in the schema management
tutorial.
Scripts
The tenant provisioning scripts in this tutorial support both of the following scenarios:
Provisioning a tenant into an existing database shared with other tenants.
Provisioning a tenant into its own database.
Tenant data is then initialized and registered in the catalog shard map. In the sample app, databases that contain
multiple tenants are given a generic name, such as tenants1 or tenants2. Databases that contain a single tenant
are given the tenant's name. The specific naming conventions used in the sample are not a critical part of the
pattern, as the use of a catalog allows any name to be assigned to the database.

Tutorial begins
In this tutorial, you learn how to:
Provision a tenant into a multi-tenant database
Provision a tenant into a single-tenant database
Provision a batch of tenants into both multi-tenant and single-tenant databases
Register a database and tenant mapping in a catalog
Prerequisites
To complete this tutorial, make sure the following prerequisites are completed:
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell
The Wingtip Tickets SaaS Multi-tenant Database app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS Multi-tenant Database application
Get the Wingtip scripts and source code:
The Wingtip Tickets SaaS Multi-tenant Database scripts and application source code are available in
the WingtipTicketsSaaS-MultitenantDB GitHub repo.
See the general guidance for steps to download and unblock the Wingtip scripts.

Provision a tenant into a database shared with other tenants


In this section, you see a list of the major actions for provisioning that are taken by the PowerShell scripts. Then
you use the PowerShell ISE debugger to step through the scripts to see the actions in code.
Major actions of provisioning
The following are key elements of the provisioning workflow you step through:
Calculate the new tenant key : A hash function is used to create the tenant key from the tenant name.
Check if the tenant key already exists : The catalog is checked to ensure the key has not already been
registered.
Initialize tenant in the default tenant database : The tenant database is updated to add the new
tenant information.
Register tenant in the catalog : The mapping between the new tenant key and the existing tenants1
database is added to the catalog.
Add the tenant's name to a catalog extension table : The venue name is added to the Tenants table
in the catalog. This addition shows how the Catalog database can be extended to support additional
application-specific data.
Open Events page for the new tenant : The Bushwillow Blues events page is opened in the browser.
Debugger steps
To understand how the Wingtip app implements new tenant provisioning in a shared database, add a breakpoint
and step through the workflow:
1. In the PowerShell ISE, open ...\Learning Modules\ProvisionTenants\Demo-ProvisionTenants.ps1 and set
the following parameters:
$TenantName = Bushwillow Blues , the name of a new venue.
$VenueType = blues , one of the pre-defined venue types: blues, classicalmusic, dance, jazz, judo,
motorracing, multipurpose, opera, rockmusic, soccer (lowercase, no spaces).
$DemoScenario = 1 , to provision a tenant in a shared database with other tenants.
2. Add a breakpoint by putting your cursor anywhere on line 38, the line that says: New-Tenant `, and then
press F9 .

3. Run the script by pressing F5 .


4. After script execution stops at the breakpoint, press F11 to step into the code.
5. Trace the script's execution using the Debug menu options, F10 and F11 , to step over or into called
functions.
For more information about debugging PowerShell scripts, see Tips on working with and debugging PowerShell
scripts.

Provision a tenant in its own database


Major actions of provisioning
The following are key elements of the workflow you step through while tracing the script:
Calculate the new tenant key : A hash function is used to create the tenant key from the tenant name.
Check if the tenant key already exists : The catalog is checked to ensure the key has not already been
registered.
Create a new tenant database : The database is created by copying the basetenantdb database using a
Resource Manager template. The new database name is based on the tenant's name.
Add database to catalog : The new tenant database is registered as a shard in the catalog.
Initialize tenant in the default tenant database : The tenant database is updated to add the new
tenant information.
Register tenant in the catalog : The mapping between the new tenant key and the sequoiasoccer
database is added to the catalog.
Tenant name is added to the catalog : The venue name is added to the Tenants extension table in the
catalog.
Open Events page for the new tenant : The Sequoia Soccer Events page is opened in the browser.
Debugger steps
Now walk through the script process when creating a tenant in its own database:
1. Still in ...\Learning Modules\ProvisionTenants\Demo-ProvisionTenants.ps1 set the following parameters:
$TenantName = Sequoia Soccer , the name of a new venue.
$VenueType = soccer , one of the pre-defined venue types: blues, classicalmusic, dance, jazz, judo,
motorracing, multipurpose, opera, rockmusic, soccer (lower case, no spaces).
$DemoScenario = 2 , to provision a tenant into its own database.
2. Add a new breakpoint by putting your cursor anywhere on line 57, the line that says:
& $PSScriptRoot\New-TenantAndDatabase `, and press F9 .

3. Run the script by pressing F5 .


4. After the script execution stops at the breakpoint, press F11 to step into the code. Use F10 and F11 to
step over and step into functions to trace the execution.

Provision a batch of tenants


This exercise provisions a batch of 17 tenants. It’s recommended you provision this batch of tenants before
starting other Wingtip Tickets tutorials so there are more databases to work with.
1. In the PowerShell ISE, open ...\Learning Modules\ProvisionTenants\Demo-ProvisionTenants.ps1 and
change the $DemoScenario parameter to 4:
$DemoScenario = 4 , to provision a batch of tenants into a shared database.
2. Press F5 and run the script.
Verify the deployed set of tenants
At this stage, you have a mix of tenants deployed into a shared database and tenants deployed into their own
databases. The Azure portal can be used to inspect the databases created. In the Azure portal, open the
tenants1-mt-<USER> server by browsing to the list of SQL servers. The SQL databases list should include
the shared tenants1 database and the databases for the tenants that are in their own database:

While the Azure portal shows the tenant databases, it doesn't let you see the tenants inside the shared database.
The full list of tenants can be seen in the Events Hub webpage of Wingtip, and by browsing the catalog.
Using Wingtip Tickets events hub page
Open the Events Hub page in the browser (http://events.wingtip-mt.<USER>.trafficmanager.net)
Using catalog database
The full list of tenants and the corresponding database for each is available in the catalog. A SQL view is
provided that joins the tenant name to the database name. The view nicely demonstrates the value of extending
the metadata that is stored in the catalog.
The SQL view is available in the tenantcatalog database.
The tenant name is stored in the Tenants table.
The database name is stored in the Shard Management tables.
1. In SQL Server Management Studio (SSMS), connect to the catalog server at
catalog-mt-<USER>.database.windows.net, with Login = developer, and Password = P@ssword1
2. In the SSMS Object Explorer, browse to the views in the tenantcatalog database.
3. Right click on the view TenantsExtended and choose Select Top 1000 Rows . Note the mapping between
tenant name and database for the different tenants.

Other provisioning patterns


This section discusses other interesting provisioning patterns.
Pre-provisioning databases in elastic pools
The pre-provisioning pattern exploits the fact that when using elastic pools, billing is for the pool not the
databases. Thus databases can be added to an elastic pool before they are needed without incurring extra cost.
This pre-provisioning significantly reduces the time taken to provision a tenant into a database. The number of
databases pre-provisioned can be adjusted as needed to keep a buffer suitable for the anticipated provisioning
rate.
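
A minimal sketch of the idea: pre-create a small buffer of empty databases inside an existing pool so that
provisioning a tenant only has to initialize and register one of them. The pool name, database naming
convention, and buffer size below are placeholders.

# Sketch only: pre-create five buffer databases in an existing elastic pool.
$rg = "wingtip-mt-<user>"; $server = "tenants1-mt-<user>"; $pool = "TenantPool1"
1..5 | ForEach-Object {
    New-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
        -DatabaseName ("pretenantdb{0:d3}" -f $_) -ElasticPoolName $pool
}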
Auto-provisioning
In the auto-provisioning pattern, a dedicated provisioning service is used to provision servers, pools, and
databases automatically as needed. This automation includes the pre-provisioning of databases in elastic pools.
And if databases are decommissioned and deleted, the gaps this creates in elastic pools can be filled by the
provisioning service as desired.
This type of automated service could be simple or complex. For example, the automation could handle
provisioning across multiple geographies, and could set up geo-replication for disaster recovery. With the auto-
provisioning pattern, a client application or script would submit a provisioning request to a queue to be
processed by a provisioning service. The script would then poll to detect completion. If pre-provisioning is used,
requests would be handled quickly, while a background service would manage the provisioning of a
replacement database.

Additional resources
Elastic database client library
How to Debug Scripts in Windows PowerShell ISE

Next steps
In this tutorial you learned how to:
Provision a single new tenant into a shared multi-tenant database and its own database
Provision a batch of additional tenants
Step through the details of provisioning tenants, and registering them into the catalog
Try the Performance monitoring tutorial.
Monitor and manage performance of sharded
multi-tenant Azure SQL Database in a multi-tenant
SaaS app
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, several key performance management scenarios used in SaaS applications are explored. Using a
load generator to simulate activity across sharded multi-tenant databases, the built-in monitoring and alerting
features of Azure SQL Database are demonstrated.
The Wingtip Tickets SaaS Multi-tenant Database app uses a sharded multi-tenant data model, where venue
(tenant) data is distributed by tenant ID across potentially multiple databases. Like many SaaS applications, the
anticipated tenant workload pattern is unpredictable and sporadic. In other words, ticket sales may occur at any
time. To take advantage of this typical database usage pattern, databases can be scaled up and down to optimize
the cost of a solution. With this type of pattern, it's important to monitor database resource usage to ensure that
loads are reasonably balanced across potentially multiple databases. You also need to ensure that individual
databases have adequate resources and are not hitting their DTU limits. This tutorial explores ways to monitor
and manage databases, and how to take corrective action in response to variations in workload.
In this tutorial you learn how to:
Simulate usage on a sharded multi-tenant database by running a provided load generator
Monitor the database as it responds to the increase in load
Scale up the database in response to the increased database load
Provision a tenant into a single-tenant database
To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip Tickets SaaS Multi-tenant Database app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS Multi-tenant Database application
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell

Introduction to SaaS performance management patterns


Managing database performance consists of compiling and analyzing performance data, and then reacting to
this data by adjusting parameters to maintain an acceptable response time for your application.
Performance management strategies
To avoid having to manually monitor performance, it's most effective to set alerts that trigger when
databases stray out of normal ranges.
To respond to short-term fluctuations in a database's load, the DTU level of the database can be scaled up
or down. If this fluctuation occurs on a regular or predictable basis, scaling the database can be
scheduled to occur automatically. For example, scale down when you know your workload is light,
maybe overnight, or during weekends.
To respond to longer-term fluctuations, or changes in the tenants, individual tenants can be moved into
another database.
To respond to short-term increases in individual tenant load, individual tenants can be taken out of a
database and assigned an individual compute size . Once the load is reduced, the tenant can then be
returned to the multi-tenant database. When this is known in advance, tenants can be moved preemptively to
ensure the database always has the resources it needs, and to avoid impact on other tenants in the multi-
tenant database. If this requirement is predictable, such as a venue experiencing a rush of ticket sales for a
popular event, then this management behavior can be integrated into the application.
The Azure portal provides built-in monitoring and alerting on most resources. For SQL Database, monitoring
and alerting is available on databases. This built-in monitoring and alerting is resource-specific, so it's
convenient to use for small numbers of resources, but is not convenient when working with many resources.
For high-volume scenarios, where you're working with many resources, Azure Monitor logs can be used. This is
a separate Azure service that provides analytics over emitted logs gathered in a Log Analytics workspace. Azure
Monitor logs can collect telemetry from many services and be used to query and set alerts.

Get the Wingtip Tickets SaaS Multi-tenant Database application


source code and scripts
The Wingtip Tickets SaaS Multi-tenant Database scripts and application source code are available in the
WingtipTicketsSaaS-MultitenantDB GitHub repo. Check out the general guidance for steps to download and
unblock the Wingtip Tickets SaaS scripts.

Provision additional tenants


For a good understanding of how performance monitoring and management works at scale, this tutorial
requires you to have multiple tenants in a sharded multi-tenant database.
If you have already provisioned a batch of tenants in a prior tutorial, skip to the Simulate usage on all tenant
databases section.
1. In the PowerShell ISE , open …\Learning Modules\Performance Monitoring and Management\Demo-
PerformanceMonitoringAndManagement.ps1. Keep this script open as you'll run several scenarios during
this tutorial.
2. Set $DemoScenario = 1 , Provision a batch of tenants
3. Press F5 to run the script.
The script deploys 17 tenants into the multi-tenant database in a few minutes.
The New-TenantBatch script creates new tenants with unique tenant keys within the sharded multi-tenant
database and initializes them with the tenant name and venue type. This is consistent with the way the app
provisions a new tenant.

Simulate usage on all tenant databases


The Demo-PerformanceMonitoringAndManagement.ps1 script is provided that simulates a workload running
against the multi-tenant database. The load is generated using one of the available load scenarios:

DEMO SCENARIO
2 - Generate normal intensity load (approximately 30 DTU)
3 - Generate load with longer bursts per tenant
4 - Generate load with higher DTU bursts per tenant (approximately 70 DTU)
5 - Generate a high intensity load (approximately 90 DTU) on a single tenant plus a normal intensity load on all other tenants

The load generator applies a synthetic CPU-only load to every tenant database. The generator starts a job for
each tenant database, which calls a stored procedure periodically that generates the load. The load levels (in
DTUs), duration, and intervals are varied across all databases, simulating unpredictable tenant activity.
1. In the PowerShell ISE , open …\Learning Modules\Performance Monitoring and Management\Demo-
PerformanceMonitoringAndManagement.ps1. Keep this script open as you'll run several scenarios during
this tutorial.
2. Set $DemoScenario = 2 , Generate normal intensity load
3. Press F5 to apply a load to all your tenants.
Wingtip Tickets SaaS Multi-tenant Database is a SaaS app, and the real-world load on a SaaS app is typically
sporadic and unpredictable. To simulate this, the load generator produces a randomized load distributed across
all tenants. Several minutes are needed for the load pattern to emerge, so run the load generator for 3-5
minutes before attempting to monitor the load in the following sections.

IMPORTANT
The load generator is running as a series of jobs in a new PowerShell window. If you close the session, the load generator
stops. The load generator remains in a job-invoking state where it generates load on any new tenants that are
provisioned after the generator is started. Use Ctrl-C to stop invoking new jobs and exit the script. The load generator will
continue to run, but only on existing tenants.

Monitor resource usage using the Azure portal


To monitor the resource usage that results from the load being applied, open the portal to the multi-tenant
database, tenants1 , containing the tenants:
1. Open the Azure portal and browse to the server tenants1-mt-<USER>.
2. Scroll down and locate databases and click tenants1 . This sharded multi-tenant database contains all the
tenants created so far.
Observe the DTU chart.

Set performance alerts on the database


Set an alert on the database that triggers on >75% utilization as follows:
1. Open the tenants1 database (on the tenants1-mt-<USER> server) in the Azure portal.
2. Click Alert Rules, and then click + Add alert:
3. Provide a name, such as High DTU ,
4. Set the following values:
Metric = DTU percentage
Condition = greater than
Threshold = 75 .
Period = Over the last 30 minutes
5. Add an email address to the Additional administrator email(s) box and click OK .
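
The same alert can be created from PowerShell. The following Az sketch adds a metric alert on the DTU
percentage of the tenants1 database; the metric name, evaluation frequency, severity, and action group resource
ID are assumptions that you should adjust for your environment.

# Sketch only: metric name, frequency, severity, and action group ID are assumptions.
$dbId = (Get-AzSqlDatabase -ResourceGroupName "wingtip-mt-<user>" `
    -ServerName "tenants1-mt-<user>" -DatabaseName "tenants1").ResourceId

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "dtu_consumption_percent" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 75

Add-AzMetricAlertRuleV2 -Name "High DTU" -ResourceGroupName "wingtip-mt-<user>" `
    -TargetResourceId $dbId -Condition $criteria -Severity 3 `
    -WindowSize (New-TimeSpan -Minutes 30) -Frequency (New-TimeSpan -Minutes 5) `
    -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/actionGroups/<action-group>"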
Scale up a busy database
If the load level increases on a database to the point that it maxes out the database and reaches 100% DTU
usage, then database performance is affected, potentially slowing query response times.
Short term, consider scaling up the database to provide additional resources, or removing tenants from the
multi-tenant database (moving them out of the multi-tenant database to a stand-alone database).
Longer term, consider optimizing queries or index usage to improve database performance. Depending on the
application's sensitivity to performance issues, it's best practice to scale a database up before it reaches 100%
DTU usage. Use an alert to warn you in advance.
You can simulate a busy database by increasing the load produced by the generator, causing the tenants to
burst more frequently and for longer. This increases the load on the multi-tenant database without changing the
requirements of the individual tenants. Scaling up the database is easily done in the portal or from PowerShell.
This exercise uses the portal.
1. Set $DemoScenario = 3 , Generate load with longer and more frequent bursts per database to increase the
intensity of the aggregate load on the database without changing the peak load required by each tenant.
2. Press F5 to apply a load to all your tenant databases.
3. Go to the tenants1 database in the Azure portal.
Monitor the increased database DTU usage on the upper chart. It takes a few minutes for the new higher load to
kick in, but you should quickly see the database start to hit max utilization, and as the load steadies into the new
pattern, it rapidly overloads the database.
1. To scale up the database, click Pricing tier (scale DTUs) in the settings blade.
2. Adjust the DTU setting to 100 .
3. Click Apply to submit the request to scale the database.
Go back to tenants1 > Overview to view the monitoring charts. Monitor the effect of providing the database
with more resources (although with few tenants and a randomized load, it's not always easy to see conclusively
until you run for some time). While you are looking at the charts, bear in mind that 100% on the upper chart
now represents 100 DTUs, while on the lower chart 100% is still 50 DTUs.
Databases remain online and fully available throughout the process. Application code should always be written
to retry dropped connections, and so will reconnect to the database.
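
If you prefer to script the scale operation instead of using the portal, a single Az PowerShell command does the
same thing. In this sketch, S3 is the Standard compute size that provides 100 DTUs; the resource group and
server names are placeholders for your deployment.

# Sketch only: scale the tenants1 database to 100 DTUs (Standard S3).
Set-AzSqlDatabase -ResourceGroupName "wingtip-mt-<user>" `
    -ServerName "tenants1-mt-<user>" -DatabaseName "tenants1" `
    -RequestedServiceObjectiveName "S3"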

Provision a new tenant in its own database


The sharded multi-tenant model allows you to choose whether to provision a new tenant in a multi-tenant
database alongside other tenants, or to provision the tenant in a database of its own. By provisioning a tenant in
its own database, it benefits from the isolation inherent in the separate database, allowing you to manage the
performance of that tenant independently of others, restore that tenant independently of others, etc. For
example, you might choose to put free-trial or regular customers in a multi-tenant database, and premium
customers in individual databases. If isolated single-tenant databases are created, they can still be managed
collectively in an elastic pool to optimize resource costs.
If you already provisioned a new tenant in its own database, skip the next few steps.
1. In the PowerShell ISE , open …\Learning Modules\ProvisionTenants\Demo-ProvisionTenants.ps1.
2. Modify $TenantName = "Salix Salsa" and $VenueType = "dance"
3. Set $Scenario = 2 , Provision a tenant in a new single-tenant database
4. Press F5 to run the script.
The script will provision this tenant in a separate database, register the database and the tenant with the catalog,
and then open the tenant’s Events page in the browser. Refresh the Events Hub page and you will see "Salix
Salsa" has been added as a venue.

Manage performance of an individual database


If a single tenant within a multi-tenant database experiences a sustained high load, it may tend to dominate the
database resources and impact other tenants in the same database. If the activity is likely to continue for some
time, the tenant can be temporarily moved out of the database and into its own single-tenant database. This
allows the tenant to have the extra resources it needs, and fully isolates it from the other tenants.
This exercise simulates the effect of Salix Salsa experiencing a high load when tickets go on sale for a popular
event.
1. Open the …\Demo-PerformanceMonitoringAndManagement.ps1 script.
2. Set $DemoScenario = 5 , Generate a normal load plus a high load on a single tenant (approximately 90
DTU).
3. Set $SingleTenantName = Salix Salsa
4. Execute the script using F5 .
Go to the portal and navigate to salixsalsa > Overview to view the monitoring charts.

Other performance management patterns


Tenant self-service scaling
Because scaling is a task easily called via the management API, you can easily build the ability to scale tenant
databases into your tenant-facing application, and offer it as a feature of your SaaS service. For example, let
tenants self-administer scaling up and down, perhaps linked directly to their billing!
Scaling a database up and down on a schedule to match usage patterns
Where aggregate tenant usage follows predictable usage patterns, you can use Azure Automation to scale a
database up and down on a schedule. For example, scale a database down after 6pm and up again before 6am
on weekdays when you know there is a drop in resource requirements.
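
A minimal sketch of such a runbook is shown below. It assumes the runbook is linked to two Azure Automation
schedules (for example, 6 PM and 6 AM on weekdays) that each pass a different $ServiceObjective value, and that
the Automation account has a managed identity with permission to scale the database.

# Sketch only: an Azure Automation runbook body that scales a database to the requested size.
param (
    [string]$ResourceGroupName = "wingtip-mt-<user>",
    [string]$ServerName = "tenants1-mt-<user>",
    [string]$DatabaseName = "tenants1",
    [string]$ServiceObjective = "S2"   # for example, pass S3 at 6 AM and S2 at 6 PM
)

# Authenticate with the Automation account's managed identity.
Connect-AzAccount -Identity

Set-AzSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
    -DatabaseName $DatabaseName -RequestedServiceObjectiveName $ServiceObjective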

Next steps
In this tutorial you learn how to:
Simulate usage on a sharded multi-tenant database by running a provided load generator
Monitor the database as it responds to the increase in load
Scale up the database in response to the increased database load
Provision a tenant into a single-tenant database

Additional resources
Azure automation
Run ad hoc analytics queries across multiple
databases (Azure SQL Database)
7/12/2022 • 7 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


In this tutorial, you run distributed queries across the entire set of tenant databases to enable ad hoc interactive
reporting. These queries can extract insights buried in the day-to-day operational data of the Wingtip Tickets
SaaS app. To do these extractions, you deploy an additional analytics database to the catalog server and use
Elastic Query to enable distributed queries.
In this tutorial you learn:
How to deploy an ad hoc reporting database
How to run distributed queries across all tenant databases
To complete this tutorial, make sure the following prerequisites are completed:
The Wingtip Tickets SaaS Multi-tenant Database app is deployed. To deploy in less than five minutes, see
Deploy and explore the Wingtip Tickets SaaS Multi-tenant Database application
Azure PowerShell is installed. For details, see Getting started with Azure PowerShell
SQL Server Management Studio (SSMS) is installed. To download and install SSMS, see Download SQL
Server Management Studio (SSMS).

Ad hoc reporting pattern

SaaS applications can analyze the vast amount of tenant data that is stored centrally in the cloud. The analyses
reveal insights into the operation and usage of your application. These insights can guide feature development,
usability improvements, and other investments in your apps and services.
Accessing this data in a single multi-tenant database is easy, but not so easy when distributed at scale across
potentially thousands of databases. One approach is to use Elastic Query, which enables querying across a
distributed set of databases with common schema. These databases can be distributed across different resource
groups and subscriptions. Yet one common login must have access to extract data from all the databases. Elastic
Query uses a single head database in which external tables are defined that mirror tables or views in the
distributed (tenant) databases. Queries submitted to this head database are compiled to produce a distributed
query plan, with portions of the query pushed down to the tenant databases as needed. Elastic Query uses the
shard map in the catalog database to determine the location of all tenant databases. Setup and query are
straightforward using standard Transact-SQL, and support ad hoc querying from tools like Power BI and Excel.
By distributing queries across the tenant databases, Elastic Query provides immediate insight into live
production data. However, as Elastic Query pulls data from potentially many databases, query latency can
sometimes be higher than for equivalent queries submitted to a single multi-tenant database. Be sure to design
queries to minimize the data that is returned. Elastic Query is often best suited for querying small amounts of
real-time data, as opposed to building frequently used or complex analytics queries or reports. If queries do not
perform well, look at the execution plan to see what part of the query has been pushed down to the remote
database. And assess how much data is being returned. Queries that require complex analytical processing
might be better served by saving the extracted tenant data into a database that is optimized for analytics
queries. SQL Database and Azure Synapse Analytics could host such an analytics database.
This pattern for analytics is explained in the tenant analytics tutorial.

Get the Wingtip Tickets SaaS Multi-tenant Database application


source code and scripts
The Wingtip Tickets SaaS Multi-tenant Database scripts and application source code are available in the
WingtipTicketsSaaS-MultitenantDB GitHub repo. Check out the general guidance for steps to download and
unblock the Wingtip Tickets SaaS scripts.

Create ticket sales data


To run queries against a more interesting data set, create ticket sales data by running the ticket-generator.
1. In the PowerShell ISE, open the ...\Learning Modules\Operational Analytics\Adhoc Reporting\Demo-
AdhocReporting.ps1 script and set the following values:
$DemoScenario = 1, Purchase tickets for events at all venues .
2. Press F5 to run the script and generate ticket sales. While the script is running, continue the steps in this
tutorial. The ticket data is queried in the Run ad hoc distributed queries section, so wait for the ticket
generator to complete.

Explore the tenant tables


In the Wingtip Tickets SaaS Multi-tenant Database application, tenants are stored in a hybrid tenant
management model - where tenant data is either stored in a multi-tenant database or a single tenant database
and can be moved between the two. When querying across all tenant databases, it's important that Elastic
Query can treat the data as if it is part of a single logical database sharded by tenant.
To achieve this pattern, all tenant tables include a VenueId column that identifies which tenant the data belongs
to. The VenueId is computed as a hash of the Venue name, but any approach could be used to introduce a
unique value for this column. This approach is similar to the way the tenant key is computed for use in the
catalog. Tables containing VenueId are used by Elastic Query to parallelize queries and push them down to the
appropriate remote tenant database. This dramatically reduces the amount of data that is returned and results in
an increase in performance especially when there are multiple tenants whose data is stored in single tenant
databases.

Deploy the database used for ad hoc distributed queries


This exercise deploys the adhocreporting database. This is the head database that contains the schema used for
querying across all tenant databases. The database is deployed to the existing catalog server, which is the server
used for all management-related databases in the sample app.
1. Open ...\Learning Modules\Operational Analytics\Adhoc Reporting\Demo-AdhocReporting.ps1 in the
PowerShell ISE and set the following values:
$DemoScenario = 2, Deploy Ad hoc analytics database .
2. Press F5 to run the script and create the adhocreporting database.
In the next section, you add schema to the database so it can be used to run distributed queries.

Configure the 'head' database for running distributed queries


This exercise adds schema (the external data source and external table definitions) to the ad hoc reporting
database that enables querying across all tenant databases.
1. Open SQL Server Management Studio, and connect to the Adhoc reporting database you created in the
previous step. The name of the database is adhocreporting.
2. Open ...\Learning Modules\Operational Analytics\Adhoc Reporting\ Initialize-AdhocReportingDB.sql in
SSMS.
3. Review the SQL script and note the following:
Elastic Query uses a database-scoped credential to access each of the tenant databases. This credential
needs to be available in all the databases and should normally be granted the minimum rights required
to enable these ad hoc queries.

By using the catalog database as the external data source, queries are distributed to all databases
registered in the catalog when the query is run. Because server names are different for each deployment,
this initialization script gets the location of the catalog database by retrieving the current server
(@@servername) where the script is executed.

The external tables that reference tenant tables are defined with DISTRIBUTION =
SHARDED(VenueId). This routes a query for a particular VenueId to the appropriate database and
improves performance for many scenarios, as shown in the next section.
A local table, VenueTypes, is created and populated. This reference data table is common to all
tenant databases, so it can be represented here as a local table and populated with the common data. For
some queries, this may reduce the amount of data moved between the tenant databases and the
adhocreporting database.

If you include reference tables in this manner, be sure to update the table schema and data whenever you
update the tenant databases.
4. Press F5 to run the script and initialize the adhocreporting database.
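
The following sketch shows the shape of the schema that the script adds. It is not the script itself: the
credential, shard map name, and column list are assumptions for illustration, and the
Initialize-AdhocReportingDB.sql file in the repo remains the authoritative version.

# Sketch only: run against the adhocreporting database (requires the SqlServer PowerShell module).
# The credential, shard map name, and columns below are illustrative assumptions.
$ddl = @'
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';

CREATE DATABASE SCOPED CREDENTIAL TenantDbCred
    WITH IDENTITY = 'developer', SECRET = '<TenantDbPassword>';

-- External data source that points at the shard map in the catalog database.
CREATE EXTERNAL DATA SOURCE WtpTenantDbs WITH
(
    TYPE = SHARD_MAP_MANAGER,
    LOCATION = 'catalog-mt-<user>.database.windows.net',
    DATABASE_NAME = 'tenantcatalog',
    CREDENTIAL = TenantDbCred,
    SHARD_MAP_NAME = 'tenantcatalog'
);

-- External table sharded on VenueId, so queries fan out to the right tenant databases.
CREATE EXTERNAL TABLE dbo.Venues
(
    VenueId INT NOT NULL,
    VenueName NVARCHAR(50) NOT NULL
)
WITH (DATA_SOURCE = WtpTenantDbs, DISTRIBUTION = SHARDED(VenueId));
'@

Invoke-Sqlcmd -ServerInstance "catalog-mt-<user>.database.windows.net" `
    -Database "adhocreporting" -Username "developer" -Password "<password>" -Query $ddl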
Now you can run distributed queries, and gather insights across all tenants!

Run ad hoc distributed queries


Now that the adhocreporting database is set up, go ahead and run some distributed queries. Include the
execution plan for a better understanding of where the query processing is happening.
When inspecting the execution plan, hover over the plan icons for details.
1. In SSMS , open ...\Learning Modules\Operational Analytics\Adhoc Reporting\Demo-
AdhocReportingQueries.sql.
2. Ensure you are connected to the adhocreporting database.
3. Select the Query menu and click Include Actual Execution Plan
4. Highlight the Which venues are currently registered? query, and press F5 .
The query returns the entire venue list, illustrating how quick and easy it is to query across all tenants
and return data from each tenant.
Inspect the plan and see that the entire cost is the remote query because we're simply going to each
tenant database and selecting the venue information.
5. Select the next query, and press F5 .
This query joins data from the tenant databases and the local VenueTypes table (local, as it's a table in the
adhocreporting database).
Inspect the plan and see that the majority of cost is the remote query because we query each tenant's
venue info (dbo.Venues), and then do a quick local join with the local VenueTypes table to display the
friendly name.

6. Now select the On which day were the most tickets sold? query, and press F5 .
This query does a bit more complex joining and aggregation. What's important to note is that most of the
processing is done remotely, and once again, we bring back only the rows we need, returning just a single
row for each venue's aggregate ticket sale count per day.
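
If you want to experiment beyond the provided queries, any T-SQL submitted to the head database fans out
automatically. The following sketch runs a cross-tenant aggregate from PowerShell; the table and column names
are assumptions for illustration, and the actual tutorial queries are in Demo-AdhocReportingQueries.sql.

# Sketch only: column names are assumptions; adapt them to the actual schema.
$query = @'
-- Count registered venues per venue type across all tenant databases,
-- joining the sharded external table to the local VenueTypes reference table.
SELECT vt.VenueTypeName, COUNT(*) AS VenueCount
FROM dbo.Venues AS v
JOIN dbo.VenueTypes AS vt ON v.VenueType = vt.VenueType
GROUP BY vt.VenueTypeName
ORDER BY VenueCount DESC;
'@

Invoke-Sqlcmd -ServerInstance "catalog-mt-<user>.database.windows.net" `
    -Database "adhocreporting" -Username "developer" -Password "<password>" -Query $query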
Next steps
In this tutorial you learned how to:
Run distributed queries across all tenant databases
Deploy an ad hoc reporting database and add schema to it to run distributed queries.
Now try the Tenant Analytics tutorial to explore extracting data to a separate analytics database for more
complex analytics processing.

Additional resources
Elastic Query
Manage schema in a SaaS application that uses
sharded multi-tenant databases
7/12/2022 • 7 minutes to read • Edit Online

APPLIES TO: Azure SQL Database


This tutorial examines the challenges in maintaining a fleet of databases in a Software as a Service (SaaS)
application. Solutions are demonstrated for fanning out schema changes across the fleet of databases.
Like any application, the Wingtip Tickets SaaS app will evolve over time, and will require changes to the
database. Changes may impact schema or reference data, or apply database maintenance tasks. With a SaaS
application using a database per tenant pattern, changes must be coordinated across a potentially massive fleet
of tenant databases. In addition, you must incorporate these changes into the database provisioning process to
ensure they are included in new databases as they are created.
Two scenarios
This tutorial explores the following two scenarios:
Deploy reference data updates for all tenants.
Rebuild an index on the table that contains the reference data.
The Elastic Jobs feature of Azure SQL Database is used to execute these operations across tenant databases. The
jobs also operate on the 'template' tenant database. In the Wingtip Tickets sample app, this template database is
copied to provision a new tenant database.
In this tutorial you learn how to:
Create a job agent.
Execute a T-SQL query on multiple tenant databases.
Update reference data in all tenant databases.
Create an index on a table in all tenant databases.

Prerequisites
The Wingtip Tickets multi-tenant database app must already be deployed:
For instructions, see the first tutorial, which introduces the Wingtip Tickets SaaS multi-tenant database
app:
Deploy and explore a sharded multi-tenant application that uses Azure SQL Database.
The deploy process runs for less than five minutes.
You must have the sharded multi-tenant version of Wingtip installed. The versions for Standalone and
Database per tenant do not support this tutorial.
The latest version of SQL Server Management Studio (SSMS) must be installed. Download and Install
SSMS.
Azure PowerShell must be installed. For details, see Getting started with Azure PowerShell.
NOTE
This tutorial uses features of the Azure SQL Database service that are in a limited preview (Elastic Database jobs). If you
wish to do this tutorial, provide your subscription ID to SaaSFeedback@microsoft.com with subject=Elastic Jobs Preview.
After you receive confirmation that your subscription has been enabled, download and install the latest pre-release jobs
cmdlets. This preview is limited, so contact SaaSFeedback@microsoft.com for related questions or support.

Introduction to SaaS schema management patterns


The sharded multi-tenant database model used in this sample enables a tenants database to contain one or
more tenants. This sample explores the potential to use a mix of a many-tenant and one-tenant databases,
enabling a hybrid tenant management model. Managing changes to these databases can be complicated. Elastic
Jobs facilitates administration and management of large numbers of databases. Jobs enable you to securely and
reliably run Transact-SQL scripts as tasks, against a group of tenant databases. The tasks are independent of user
interaction or input. This method can be used to deploy changes to schema or to common reference data, across
all tenants in an application. Elastic Jobs can also be used to maintain a golden template copy of the database.
The template is used to create new tenants, always ensuring the latest schema and reference data are in use.

Elastic Jobs limited preview


There is a new version of Elastic Jobs that is now an integrated feature of Azure SQL Database. This new version
of Elastic Jobs is currently in limited preview. The limited preview currently supports using PowerShell to create
a job agent, and T-SQL to create and manage jobs.

NOTE
This tutorial uses features of the SQL Database service that are in a limited preview (Elastic Database jobs). If you wish to
do this tutorial, provide your subscription ID to SaaSFeedback@microsoft.com with subject=Elastic Jobs Preview. After you
receive confirmation that your subscription has been enabled, download and install the latest pre-release jobs cmdlets.
This preview is limited, so contact SaaSFeedback@microsoft.com for related questions or support.

Get the Wingtip Tickets SaaS Multi-tenant Database application


source code and scripts
The Wingtip Tickets SaaS Multi-tenant Database scripts and application source code are available in the
WingtipTicketsSaaS-MultitenantDB repository on GitHub. See the general guidance for steps to download and
unblock the Wingtip Tickets SaaS scripts.

Create a job agent database and new job agent


This tutorial requires that you use PowerShell to create the job agent database and job agent. Like the MSDB
database used by SQL Agent, a job agent uses a database in Azure SQL Database to store job definitions, job
status, and history. After the job agent is created, you can create and monitor jobs immediately.
1. In PowerShell ISE , open ...\Learning Modules\Schema Management\Demo-SchemaManagement.ps1.
2. Press F5 to run the script.
The Demo-SchemaManagement.ps1 script calls the Deploy-SchemaManagement.ps1 script to create a database
named jobagent on the catalog server. The script then creates the job agent, passing the jobagent database as a
parameter.

Create a job to deploy new reference data to all tenants


Prepare
Each tenant's database includes a set of venue types in the VenueTypes table. Each venue type defines the kind
of events that can be hosted at a venue. These venue types correspond to the background images you see in the
tenant events app. In this exercise, you deploy an update to all databases to add two additional venue types:
Motorcycle Racing and Swimming Club.
First, review the venue types included in each tenant database. Connect to one of the tenant databases in SQL
Server Management Studio (SSMS) and inspect the VenueTypes table. You can also query this table in the Query
editor in the Azure portal, accessed from the database page.
1. Open SSMS and connect to the tenant server: tenants1-mt-<user>.database.windows.net
2. To confirm that Motorcycle Racing and Swimming Club are not currently included, browse to the
contosoconcerthall database on the tenants1-mt-<user> server and query the VenueTypes table (a sample query follows these steps).
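For example, a quick check like the following lists the venue types that are currently defined. This is only a minimal sketch; the dbo schema name is an assumption based on the sample schema.

-- Run in the tenant database you connected to in SSMS.
-- Motorcycle Racing and Swimming Club should not appear in the results yet.
SELECT *
FROM dbo.VenueTypes
ORDER BY 1;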
Steps
Now you create a job to update the VenueTypes table in each tenant database by adding the two new venue
types.
To create a new job, you use the set of jobs system stored procedures that were created in the jobagent
database. The stored procedures were created when the job agent was created.
1. In SSMS, connect to the tenant server: tenants1-mt-<user>.database.windows.net
2. Browse to the tenants1 database.
3. Query the VenueTypes table to confirm that Motorcycle Racing and Swimming Club are not yet in the
results list.
4. Connect to the catalog server, which is catalog-mt-<user>.database.windows.net.
5. Connect to the jobagent database in the catalog server.
6. In SSMS, open the file ...\Learning Modules\Schema Management\DeployReferenceData.sql.
7. Modify the statement: set @User = <user> and substitute the User value used when you deployed the
Wingtip Tickets SaaS Multi-tenant Database application.
8. Press F5 to run the script.
Observe
Observe the following items in the DeployReferenceData.sql script (a simplified T-SQL sketch of this pattern follows the list):
sp_add_target_group creates the target group named DemoServerGroup, and adds target members to the group.
sp_add_target_group_member adds the following items:
A server target member type. This is the tenants1-mt-<user> server that contains the tenants databases. Including the server includes the tenant databases that exist at the time the job executes.
A database target member type for the template database (basetenantdb) that resides on the catalog-mt-<user> server.
A database target member type to include the adhocreporting database that is used in a later tutorial.
sp_add_job creates a job called Reference Data Deployment.
sp_add_jobstep creates the job step containing T-SQL command text to update the reference table, VenueTypes.
The remaining queries in the script display the existence of the objects and monitor job execution. Use these queries to review the status value in the lifecycle column to determine when the job has finished.
The job updates the tenants database, and updates the two additional databases that contain the reference table.
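The following is a simplified sketch of that pattern, not the full script from the repository. The jobs schema prefix and parameter names follow the Elastic Jobs feature; the credential name (mydemocred), the placeholder command text, and the server names with <user> placeholders are illustrative assumptions that DeployReferenceData.sql supplies for real.

-- Run in the jobagent database. Names marked below are illustrative.
-- 1. Define the target group and its members.
EXEC jobs.sp_add_target_group @target_group_name = 'DemoServerGroup';

EXEC jobs.sp_add_target_group_member
    @target_group_name = 'DemoServerGroup',
    @target_type = 'SqlServer',
    @refresh_credential_name = 'mydemocred',   -- credential used to enumerate databases on the server
    @server_name = 'tenants1-mt-<user>.database.windows.net';

EXEC jobs.sp_add_target_group_member
    @target_group_name = 'DemoServerGroup',
    @target_type = 'SqlDatabase',
    @server_name = 'catalog-mt-<user>.database.windows.net',
    @database_name = 'basetenantdb';
-- (The script also adds the adhocreporting database as a third member.)

-- 2. Define the job and a job step that runs the reference-data update on every target.
EXEC jobs.sp_add_job
    @job_name = 'Reference Data Deployment',
    @description = 'Deploy new venue types to all tenant databases';

EXEC jobs.sp_add_jobstep
    @job_name = 'Reference Data Deployment',
    @command = N'/* T-SQL that inserts the Motorcycle Racing and Swimming Club venue types */',
    @credential_name = 'mydemocred',
    @target_group_name = 'DemoServerGroup';

-- 3. Start the job and monitor the lifecycle column until it reports Succeeded.
EXEC jobs.sp_start_job @job_name = 'Reference Data Deployment';
SELECT job_name, target_server_name, target_database_name, lifecycle
FROM jobs.job_executions
WHERE job_name = 'Reference Data Deployment';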
In SSMS, browse to the tenant database on the tenants1-mt-<user> server. Query the VenueTypes table to
confirm that Motorcycle Racing and Swimming Club are now added to the table. The total count of venue types
should have increased by two.

Create a job to manage the reference table index


This exercise creates a job to rebuild the index on the reference table primary key on all the tenant databases. An
index rebuild is a typical database management operation that an administrator might run after loading a large
amount of data, to improve performance.
1. In SSMS, connect to the jobagent database on the catalog-mt-<User>.database.windows.net server.
2. In SSMS, open ...\Learning Modules\Schema Management\OnlineReindex.sql.
3. Press F5 to run the script.
Observe
Observe the following items in the OnlineReindex.sql script (a simplified sketch of the pattern follows the list):
sp_add_job creates a new job called Online Reindex PK__VenueTyp__265E44FD7FD4C885.
sp_add_jobstep creates the job step containing T-SQL command text to rebuild the index.
The remaining queries in the script monitor job execution. Use these queries to review the status value in
the lifecycle column to determine when the job has successfully finished on all target group members.
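A simplified sketch of that pattern is shown below, reusing the target group and credential created earlier. The ALTER INDEX command text is illustrative (the auto-generated primary key index name differs per database, so rebuilding all indexes on the table is used here); the actual command text lives in OnlineReindex.sql.

-- Run in the jobagent database.
EXEC jobs.sp_add_job
    @job_name = 'Online Reindex PK__VenueTyp__265E44FD7FD4C885',
    @description = 'Rebuild the VenueTypes primary key index online on all targets';

EXEC jobs.sp_add_jobstep
    @job_name = 'Online Reindex PK__VenueTyp__265E44FD7FD4C885',
    @command = N'ALTER INDEX ALL ON dbo.VenueTypes REBUILD WITH (ONLINE = ON);',
    @credential_name = 'mydemocred',
    @target_group_name = 'DemoServerGroup';

EXEC jobs.sp_start_job @job_name = 'Online Reindex PK__VenueTyp__265E44FD7FD4C885';

-- Poll until every target group member reports a lifecycle of Succeeded.
SELECT target_server_name, target_database_name, lifecycle
FROM jobs.job_executions
WHERE job_name = 'Online Reindex PK__VenueTyp__265E44FD7FD4C885';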

Additional resources
Managing scaled-out cloud databases

Next steps
In this tutorial you learned how to:
Create a job agent to run T-SQL jobs across multiple databases
Update reference data in all tenant databases
Rebuild an index on a table in all tenant databases
Next, try the Ad hoc reporting tutorial to explore running distributed queries across tenant databases.
Cross-tenant analytics using extracted data - multi-
tenant app
7/12/2022 • 13 minutes to read

APPLIES TO: Azure SQL Database


In this tutorial, you walk through a complete analytics scenario for a multitenant implementation. The scenario
demonstrates how analytics can enable businesses to make smart decisions. Using data extracted from a sharded
database, you use analytics to gain insights into tenant behavior, including their use of the sample Wingtip
Tickets SaaS application. This scenario involves three steps:
1. Extract data from each tenant database into an analytics store.
2. Optimize the extracted data for analytics processing.
3. Use Business Intelligence tools to draw out useful insights, which can guide decision making.
In this tutorial you learn how to:
Create the tenant analytics store to extract the data into.
Use elastic jobs to extract data from each tenant database into the analytics store.
Optimize the extracted data (reorganize into a star-schema).
Query the analytics database.
Use Power BI for data visualization to highlight trends in tenant data and make recommendations for
improvements.

Offline tenant analytics pattern


SaaS applications you develop have access to a vast amount of tenant data stored in the cloud. The data
provides a rich source of insights about the operation and usage of your application, and about the behavior of
the tenants. These insights can guide feature development, usability improvements, and other investments in
the app and platform.
Accessing the data for all tenants is simple when all the data is in just one multi-tenant database. But access
is more complex when the data is distributed at scale across thousands of databases. One way to tame the complexity is to
extract the data to an analytics database or a data warehouse. You then query the data warehouse to gather
insights from the tickets data of all tenants.
This tutorial presents a complete analytics scenario for this sample SaaS application. First, elastic jobs are used
to schedule the extraction of data from each tenant database. The data is sent to an analytics store. The analytics
store could be either a database in Azure SQL Database or an Azure Synapse Analytics pool. For large-scale data extraction, Azure Data
Factory is recommended.
Next, the aggregated data is shredded into a set of star-schema tables. The tables consist of a central fact table
plus related dimension tables:
The central fact table in the star-schema contains ticket data.
The dimension tables contain data about venues, events, customers, and purchase dates.
Together the central and dimension tables enable efficient analytical processing. The star-schema used in this
tutorial is displayed in the following image:

Finally, the star-schema tables are queried. The query results are displayed visually to highlight insights into
tenant behavior and their use of the application. With this star-schema, you can run queries that help discover
items like the following:
Who is buying tickets and from which venue.
Hidden patterns and trends in the following areas:
The sales of tickets.
The relative popularity of each venue.
Understanding how consistently each tenant is using the service provides an opportunity to create service plans
to cater to their needs. This tutorial provides basic examples of insights that can be gleaned from tenant data.
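For example, a query along the following lines surfaces ticket sales per venue once the star-schema tables are populated later in this tutorial. It is only a sketch; the join column names (VenueId, VenueName) are assumptions about the sample schema.

-- Tickets sold per venue, run against the analytics store.
SELECT v.VenueName,
       COUNT(*) AS TicketsSold
FROM fact_Tickets AS t
JOIN dim_Venues AS v ON v.VenueId = t.VenueId   -- column names are illustrative
GROUP BY v.VenueName
ORDER BY TicketsSold DESC;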

Setup
Prerequisites
To complete this tutorial, make sure the following prerequisites are met:
The Wingtip Tickets SaaS Multi-tenant Database application is deployed. To deploy in less than five minutes,
see Deploy and explore the Wingtip Tickets SaaS Multi-tenant Database application
The Wingtip SaaS scripts and application source code are downloaded from GitHub. Be sure to unblock the
zip file before extracting its contents. Check out the general guidance for steps to download and unblock the
Wingtip Tickets SaaS scripts.
Power BI Desktop is installed. Download Power BI Desktop.
The batch of additional tenants has been provisioned. See the Provision tenants tutorial.
A job agent and job agent database have been created. See the appropriate steps in the Schema
management tutorial.
Create data for the demo
In this tutorial, analysis is performed on ticket sales data. In the current step, you generate ticket data for all the
tenants. Later this data is extracted for analysis. Ensure you have provisioned the batch of tenants as described
earlier, so that you have a meaningful amount of data. A sufficiently large amount of data can expose a range of
different ticket purchasing patterns.
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1, and set the following value:
$DemoScenario = 1 Purchase tickets for events at all venues
2. Press F5 to run the script and create ticket purchasing history for every event in each venue. The script runs
for several minutes to generate tens of thousands of tickets.
Deploy the analytics store
Often there are numerous transactional sharded databases that together hold all tenant data. You must
aggregate the tenant data from the sharded databases into one analytics store. The aggregation enables efficient
querying of the data. In this tutorial, a database in Azure SQL Database is used to store the aggregated data.
In the following steps, you deploy the analytics store, which is called tenantanalytics. You also deploy
predefined tables that are populated later in the tutorial:
1. In PowerShell ISE, open …\Learning Modules\Operational Analytics\Tenant Analytics\Demo-
TenantAnalytics.ps1
2. Set the $DemoScenario variable in the script to match your choice of analytics store. For learning purposes,
using the database without columnstore is recommended.
To use SQL Database without columnstore, set $DemoScenario = 2
To use SQL Database with columnstore, set $DemoScenario = 3
3. Press F5 to run the demo script (that calls the Deploy-TenantAnalytics<XX>.ps1 script) which creates the
tenant analytics store.
Now that you have deployed the application and filled it with interesting tenant data, use SQL Server
Management Studio (SSMS) to connect to the tenants1-mt-<User> and catalog-mt-<User> servers using Login
= developer, Password = P@ssword1.
In the Object Explorer, perform the following steps:
1. Expand the tenants1-mt-<User> server.
2. Expand the Databases node, and see tenants1 database containing multiple tenants.
3. Expand the catalog-mt-<User> server.
4. Verify that you see the analytics store and the jobaccount database.
See the following database items in the SSMS Object Explorer by expanding the analytics store node:
Tables TicketsRawData and EventsRawData hold raw extracted data from the tenant databases.
The star-schema tables are fact_Tickets, dim_Customers, dim_Venues, dim_Events, and dim_Dates.
The sp_ShredRawExtractedData stored procedure is used to populate the star-schema tables from the raw
data tables.

Data extraction
Create target groups
Before proceeding, ensure you have deployed the job account and jobaccount database. In the next set of steps,
Elastic Jobs is used to extract data from the sharded tenants database, and to store the data in the analytics
store. Then a second job shreds the data and stores it in tables in the star-schema. These two jobs run
against two different target groups, namely TenantGroup and AnalyticsGroup. The extract job runs against
the TenantGroup, which contains all the tenant databases. The shredding job runs against the AnalyticsGroup,
which contains just the analytics store. Create the target groups by using the following steps (a T-SQL sketch of the target group pattern follows the steps):
1. In SSMS, connect to the jobaccount database in catalog-mt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ TargetGroups.sql
3. Modify the @User variable at the top of the script, replacing <User> with the user value used when you
deployed the Wingtip Tickets SaaS Multi-tenant Database application.
4. Press F5 to run the script that creates the two target groups.
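The following sketch shows the shape of what TargetGroups.sql does. The credential name is an illustrative assumption and the script in the repository supplies the real values; the server names keep their <User> placeholders.

-- Run in the jobaccount database.
-- TenantGroup: every tenant database on the tenants server at the time a job runs.
EXEC jobs.sp_add_target_group @target_group_name = 'TenantGroup';
EXEC jobs.sp_add_target_group_member
    @target_group_name = 'TenantGroup',
    @target_type = 'SqlServer',
    @refresh_credential_name = 'mydemocred',   -- illustrative credential name
    @server_name = 'tenants1-mt-<User>.database.windows.net';

-- AnalyticsGroup: just the analytics store.
EXEC jobs.sp_add_target_group @target_group_name = 'AnalyticsGroup';
EXEC jobs.sp_add_target_group_member
    @target_group_name = 'AnalyticsGroup',
    @target_type = 'SqlDatabase',
    @server_name = 'catalog-mt-<User>.database.windows.net',
    @database_name = 'tenantanalytics';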
Extract raw data from all tenants
Transactions might occur more frequently for ticket and customer data than for event and venue data. Therefore,
consider extracting ticket and customer data separately and more frequently than you extract event and venue
data. In this section, you define and schedule two separate jobs:
Extract ticket and customer data.
Extract event and venue data.
Each job extracts its data, and posts it into the analytics store. There a separate job shreds the extracted data into
the analytics star-schema.
1. In SSMS, connect to the jobaccount database in catalog-mt-<User> server.
2. In SSMS, open ...\Learning Modules\Operational Analytics\Tenant Analytics\ExtractTickets.sql.
3. Modify @User at the top of the script, and replace <User> with the user name used when you deployed the
Wingtip Tickets SaaS Multi-tenant Database application.
4. Press F5 to run the script that creates and runs the job that extracts tickets and customers data from each
tenant database. The job saves the data into the analytics store.
5. Query the TicketsRawData table in the tenantanalytics database to ensure that the table is populated with
tickets information from all tenants (a sample query follows these steps).
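A sample check is shown below. It is only a sketch; the VenueId column name is an assumption about the raw table's schema, and any column that identifies the tenant works equally well.

-- Run against the tenantanalytics database after the extract job completes.
SELECT VenueId, COUNT(*) AS ExtractedTickets
FROM dbo.TicketsRawData
GROUP BY VenueId
ORDER BY ExtractedTickets DESC;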

Repeat the preceding steps, except this time replace \ExtractTickets.sql with \ExtractVenuesEvents.sql in
step 2.
Successfully running the job populates the EventsRawData table in the analytics store with new events and
venues information from all tenants.

Data reorganization
Shred extracted data to populate star-schema tables
The next step is to shred the extracted raw data into a set of tables that are optimized for analytics queries. A
star-schema is used. A central fact table holds individual ticket sales records. Dimension tables are populated
with data about venues, events, customers, and purchase dates.
In this section of the tutorial, you define and run a job that merges the extracted raw data with the data in the
star-schema tables. After the merge job is finished, the raw data is deleted, leaving the tables ready to be
populated by the next tenant data extract job.
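The stored procedure itself is defined by the sample; the following is only a simplified illustration of the merge-then-clean-up pattern it follows, with illustrative column names.

-- Simplified illustration of the shred pattern used by sp_ShredRawExtractedData.
MERGE dim_Venues AS target
USING (SELECT DISTINCT VenueId, VenueName FROM TicketsRawData) AS source
    ON target.VenueId = source.VenueId
WHEN NOT MATCHED THEN
    INSERT (VenueId, VenueName) VALUES (source.VenueId, source.VenueName);

-- Similar merges populate dim_Events, dim_Customers, and dim_Dates,
-- and the ticket rows are inserted into fact_Tickets.

-- Once shredding succeeds, the raw rows are removed so the next extract starts clean.
DELETE FROM TicketsRawData;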
1. In SSMS, connect to the jobaccount database in catalog-mt-<User>.
2. In SSMS, open …\Learning Modules\Operational Analytics\Tenant Analytics\ShredRawExtractedData.sql.
3. Press F5 to run the script to define a job that calls the sp_ShredRawExtractedData stored procedure in the
analytics store.
4. Allow enough time for the job to run successfully.
Check the Lifecycle column of the jobs.jobs_execution table for the status of the job (a sample monitoring query follows). Ensure that the job
has Succeeded before proceeding. A successful run displays data similar to the following chart:
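For example, a monitoring query along these lines can be run in the jobaccount database. This is a sketch; the view name follows the jobs schema of the current Elastic Jobs feature (jobs.job_executions) and may differ in the preview build used here.

-- Most recent job executions first; the latest shred run should show a lifecycle of Succeeded.
SELECT job_name, lifecycle, last_message
FROM jobs.job_executions
ORDER BY start_time DESC;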

Data exploration
Visualize tenant data
The data in the star-schema table provides all the ticket sales data needed for your analysis. To make it easier to
see trends in large data sets, you need to visualize it graphically. In this section, you learn how to use Power BI
to manipulate and visualize the tenant data you have extracted and organized.
Use the following steps to connect to Power BI, and to import the views you created earlier:
1. Launch Power BI Desktop.
2. From the Home ribbon, select Get Data, and select More… from the menu.
3. In the Get Data window, select Azure SQL Database.
4. In the database login window, enter your server name (catalog-mt-<User>.database.windows.net). Select
Import for Data Connectivity Mode, and then click OK.

5. Select Database in the left pane, then enter user name = developer, and enter password = P@ssword1.
Click Connect.

6. In the Navigator pane, under the analytics database, select the star-schema tables: fact_Tickets,
dim_Events, dim_Venues, dim_Customers, and dim_Dates. Then select Load.
Congratulations! You have successfully loaded the data into Power BI. Now you can start exploring interesting
visualizations to help gain insights into your tenants. Next you walk through how analytics can enable you to
provide data-driven recommendations to the Wingtip Tickets business team. The recommendations can help to
optimize the business model and customer experience.
You start by analyzing ticket sales data to see the variation in usage across the venues. Select the following
options in Power BI to plot a bar chart of the total number of tickets sold by each venue. Due to random
variation in the ticket generator, your results may be different.
The preceding plot confirms that the number of tickets sold by each venue varies. Venues that sell more tickets
are using your service more heavily than venues that sell fewer tickets. There may be an opportunity here to
tailor resource allocation according to different tenant needs.
You can further analyze the data to see how ticket sales vary over time. Select the following options in Power BI
to plot the total number of tickets sold each day for a period of 60 days.

The preceding chart shows that ticket sales spike for some venues. These spikes reinforce the idea that some
venues might be consuming system resources disproportionately. So far there is no obvious pattern in when the
spikes occur.
Next you want to further investigate the significance of these peak sale days. When do these peaks occur after
tickets go on sale? To plot tickets sold per day, select the following options in Power BI.
The preceding plot shows that some venues sell a lot of tickets on the first day of sale. As soon as tickets go on
sale at these venues, there seems to be a mad rush. This burst of activity by a few venues might impact the
service for other tenants.
You can drill into the data again to see if this mad rush is true for all events hosted by these venues. In previous
plots, you observed that Contoso Concert Hall sells a lot of tickets, and that Contoso also has a spike in ticket
sales on certain days. Play around with Power BI options to plot cumulative ticket sales for Contoso Concert Hall,
focusing on sale trends for each of its events. Do all events follow the same sale pattern?

The preceding plot for Contoso Concert Hall shows that the mad rush does not happen for all events. Play
around with the filter options to see sale trends for other venues.
The insights into ticket selling patterns might lead Wingtip Tickets to optimize their business model. Instead of
charging all tenants equally, perhaps Wingtip should introduce service tiers with different compute sizes. Larger
venues that need to sell more tickets per day could be offered a higher tier with a higher service level
agreement (SLA). Those venues could have their databases placed in pool with higher per-database resource
limits. Each service tier could have an hourly sales allocation, with additional fees charged for exceeding the
allocation. Larger venues that have periodic bursts of sales would benefit from the higher tiers, and Wingtip
Tickets can monetize their service more efficiently.
Meanwhile, some Wingtip Tickets customers complain that they struggle to sell enough tickets to justify the
service cost. Perhaps in these insights there is an opportunity to boost ticket sales for underperforming venues.
Higher sales would increase the perceived value of the service. Right-click fact_Tickets and select New
measure. Enter the following expression for the new measure called AverageTicketsSold:

AverageTicketsSold = DIVIDE(DIVIDE(COUNTROWS(fact_Tickets),DISTINCT(dim_Venues[VenueCapacity]))*100,
COUNTROWS(dim_Events))

Select the following visualization options to plot the percentage tickets sold by each venue to determine their
relative success.

The preceding plot shows that even though most venues sell more than 80% of their tickets, some are
struggling to fill more than half the seats. Play around with the Values Well to select maximum or minimum
percentage of tickets sold for each venue.
Earlier you deepened your analysis to discover that ticket sales tend to follow predictable patterns. This
discovery might let Wingtip Tickets help underperforming venues boost ticket sales by recommending dynamic
pricing. This discovery could reveal an opportunity to employ machine learning techniques to predict ticket
sales for each event. Predictions could also be made for the impact on revenue of offering discounts on ticket
sales. Power BI Embedded could be integrated into an event management application. The integration could help
visualize predicted sales and the effect of different discounts. The application could help devise an optimum
discount to be applied directly from the analytics display.
You have observed trends in tenant data from the Wingtip Tickets SaaS Multi-tenant Database application. You
can contemplate other ways the app can inform business decisions for SaaS application vendors. Vendors can
better cater to the needs of their tenants. Hopefully this tutorial has equipped you with tools necessary to
perform analytics on tenant data to empower your businesses to make data-driven decisions.

Next steps
In this tutorial, you learned how to:
Deploy a tenant analytics database with pre-defined star-schema tables
Use elastic jobs to extract data from each tenant database
Merge the extracted data into tables in a star-schema designed for analytics
Query an analytics database
Use Power BI for data visualization to observe trends in tenant data
Congratulations!

Additional resources
Additional tutorials that build upon the Wingtip SaaS application.
Elastic Jobs.
Cross-tenant analytics using extracted data - single-tenant app
Azure CLI samples for Azure SQL Database and
SQL Managed Instance
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


You can configure Azure SQL Database and SQL Managed Instance by using the Azure CLI.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Samples
Azure SQL Database
Azure SQL Managed Instance

The following table includes links to Azure CLI script examples to manage single and pooled databases in Azure
SQL Database.

AREA | DESCRIPTION

Create databases

Create a single database | Creates a database in SQL Database and configures a server-level firewall rule.
Create pooled databases | Creates elastic pools, moves pooled databases, and changes compute sizes.

Scale databases

Scale a single database | Scales a single database.
Scale pooled database | Scales a SQL elastic pool to a different compute size.

Configure geo-replication

Single database | Configures active geo-replication for a database in Azure SQL Database and fails it over to the secondary replica.
Pooled database | Configures active geo-replication for a database in an elastic pool, then fails it over to the secondary replica.

Configure failover group

Configure failover group | Configures a failover group for a group of databases and fails over databases to the secondary server.
Single database | Creates a database and a failover group, adds the database to the failover group, then tests failover to the secondary server.
Pooled database | Creates a database, adds it to an elastic pool, adds the elastic pool to the failover group, then tests failover to the secondary server.

Back up, restore, copy, and import a database

Back up a database | Backs up a database in SQL Database to an Azure storage backup.
Restore a database | Restores a database in SQL Database to a specific point in time.
Copy a database to a new server | Creates a copy of an existing database in SQL Database on a new server.
Import a database from a BACPAC file | Imports a database to SQL Database from a BACPAC file.

Learn more about the single-database Azure CLI API.


Create a single database and configure a firewall
rule using the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example creates a single database in Azure SQL Database and configures a server-level
firewall rule. After the script has been successfully run, the database can be accessed from all Azure services and
the allowed IP address range.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter
to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively


Run the script

# Create a single database and configure a firewall rule


# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="create-and-configure-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
# Specify appropriate IP address values for your environment
# to limit access to the SQL Database server
startIp=0.0.0.0
endIp=0.0.0.0

echo "Using resource group $resourceGroup with login: $login, password: $password..."
echo "Creating $resourceGroup in $location..."
az group create --name $resourceGroup --location "$location" --tags $tag
echo "Creating $server in $location..."
az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user
$login --admin-password $password
echo "Configuring firewall..."
az sql server firewall-rule create --resource-group $resourceGroup --server $server -n AllowYourIp --start-
ip-address $startIp --end-ip-address $endIp
echo "Creating $database on $server..."
az sql db create --resource-group $resourceGroup --server $server --name $database --sample-name
AdventureWorksLT --edition GeneralPurpose --family Gen5 --capacity 2 --zone-redundant true # zone redundancy
is only supported on premium and business critical service tiers

Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION

az sql server | Server commands.
az sql server firewall-rule | Server firewall commands.
az sql db | Database commands.


Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Move a database in SQL Database in a SQL elastic
pool using the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example creates two elastic pools, moves a pooled database in SQL Database from one
SQL elastic pool into another SQL elastic pool, and then moves the pooled database out of the SQL elastic pool
to be a single database in SQL Database.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter
to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively


Run the script

# Move a database in SQL Database in a SQL elastic pool

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="move-database-between-pools"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"

pool="msdocs-azuresql-pool-$randomIdentifier"
secondaryPool="msdocs-azuresql-secondary-pool-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $server in $location..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user
$login --admin-password $password

echo "Creating $pool and $secondaryPool..."


az sql elastic-pool create --resource-group $resourceGroup --server $server --name $pool --edition
GeneralPurpose --family Gen5 --capacity 2
az sql elastic-pool create --resource-group $resourceGroup --server $server --name $secondaryPool --edition
GeneralPurpose --family Gen5 --capacity 2

echo "Creating $database in $pool..."


az sql db create --resource-group $resourceGroup --server $server --name $database --elastic-pool $pool

echo "Moving $database to $secondaryPool..." # create command updates an existing datatabase


az sql db create --resource-group $resourceGroup --server $server --name $database --elastic-pool
$secondaryPool

echo "Upgrade $database tier..."


az sql db create --resource-group $resourceGroup --server $server --name $database --service-objective S0

Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
COMMAND | DESCRIPTION

az sql server | Server commands.
az sql elastic-pool | Elastic pool commands.
az sql db | Database commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Monitor and scale a single database in Azure SQL
Database using the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example scales a single database in Azure SQL Database to a different compute size after
querying the size information of the database.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter
to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively


Run the script

# Monitor and scale a single database in Azure SQL Database

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="monitor-and-scale-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $server on $resource..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user
$login --admin-password $password

echo "Creating $database on $server..."


az sql db create --resource-group $resourceGroup --server $server --name $database --edition GeneralPurpose
--family Gen5 --capacity 2

echo "Monitoring size of $database..."


az sql db list-usages --name $database --resource-group $resourceGroup --server $server

echo "Scaling up $database..." # create command executes update if database already exists
az sql db create --resource-group $resourceGroup --server $server --name $database --edition GeneralPurpose
--family Gen5 --capacity 4

TIP
Use az sql db op list to get a list of operations performed on the database, and use az sql db op cancel to cancel an
update operation on the database.

Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.
COMMAND | DESCRIPTION

az sql server | Server commands.
az sql db list-usages | Shows the size usage information for a database.

Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional CLI script samples can be found in Azure CLI sample scripts.
Scale an elastic pool in Azure SQL Database using
the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example creates elastic pools in Azure SQL Database, moves pooled databases, and
changes elastic pool compute sizes.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter
to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively


Run the script

# Scale an elastic pool in Azure SQL Database

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="scale-pool"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
databaseAdditional="msdocs-azuresql-additional-db-$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
pool="msdocs-azuresql-pool-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $server in $location..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user
$login --admin-password $password

echo "Creating $pool..."


az sql elastic-pool create --resource-group $resourceGroup --server $server --name $pool --edition
GeneralPurpose --family Gen5 --capacity 2 --db-max-capacity 1 --db-min-capacity 1 --max-size 512GB

echo "Creating $database and $databaseAdditional on $server in $pool..."


az sql db create --resource-group $resourceGroup --server $server --name $database --elastic-pool $pool
az sql db create --resource-group $resourceGroup --server $server --name $databaseAdditional --elastic-pool
$pool

echo "Scaling $pool..."


az sql elastic-pool update --resource-group $resourceGroup --server $server --name $pool --capacity 10 --
max-size 1536GB

Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION

az sql server | Server commands.
az sql db | Database commands.
az sql elastic-pool | Elastic pool commands.

Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Configure active geo-replication for a single
database in Azure SQL Database using the Azure
CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example configures active geo-replication for a single database and fails it over to a
secondary replica of the database.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter
to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively


Run the script

# Configure active geo-replication for a single database in Azure SQL Database

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="setup-geodr-and-failover-single-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"

failoverResourceGroup="msdocs-azuresql-failover-rg-$randomIdentifier"
failoverLocation="Central US"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location and $failoverResourceGroup in $failoverLocation..."


az group create --name $resourceGroup --location "$location" --tags $tag
az group create --name $failoverResourceGroup --location "$failoverLocation"

echo "Creating $server in $location and $secondaryServer in $failoverLocation..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user
$login --admin-password $password
az sql server create --name $secondaryServer --resource-group $failoverResourceGroup --location
"$failoverLocation" --admin-user $login --admin-password $password

echo "Creating $database on $server..."


az sql db create --name $database --resource-group $resourceGroup --server $server --service-objective S0

echo "Establishing geo-replication on $database..."


az sql db replica create --name $database --partner-server $secondaryServer --resource-group $resourceGroup
--server $server --partner-resource-group $failoverResourceGroup
az sql db replica list-links --name $database --resource-group $resourceGroup --server $server

echo "Initiating failover..."


az sql db replica set-primary --name $database --resource-group $failoverResourceGroup --server
$secondaryServer

echo "Monitoring health of $database..."


az sql db replica list-links --name $database --resource-group $failoverResourceGroup --server
$secondaryServer

echo "Removing replication link after failover..."


az sql db replica delete-link --resource-group $failoverResourceGroup --server $secondaryServer --name
$database --partner-server $server --yes

Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.

COMMAND | DESCRIPTION

az sql db replica | Database replica commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Configure active geo-replication for a pooled
database in Azure SQL Database using the Azure
CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example configures active geo-replication for a pooled database in Azure SQL Database
and fails it over to the secondary replica of the database.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter
to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively


Run the script

# Configure active geo-replication for a pooled database in Azure SQL Database

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="setup-geodr-and-failover-elastic-pool"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
pool="pool-$randomIdentifier"
failoverLocation="Central US"
failoverResourceGroup="msdocs-azuresql-failover-rg-$randomIdentifier"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"
secondaryPool="msdocs-azuresql-secondary-pool-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location and $failoverResourceGroup in $failoverLocation..."


az group create --name $resourceGroup --location "$location" --tags $tag
az group create --name $failoverResourceGroup --location "$failoverLocation"

echo "Creating $server in $location and $secondaryServer in $failoverLocation..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user
$login --admin-password $password
az sql server create --name $secondaryServer --resource-group $failoverResourceGroup --location
"$failoverLocation" --admin-user $login --admin-password $password

echo "Creating $pool on $server and $secondaryPool on $secondaryServer..."


az sql elastic-pool create --name $pool --resource-group $resourceGroup --server $server --capacity 50 --db-
dtu-max 50 --db-dtu-min 10 --edition "Standard"
az sql elastic-pool create --name $secondaryPool --resource-group $failoverResourceGroup --server
$secondaryServer --capacity 50 --db-dtu-max 50 --db-dtu-min 10 --edition "Standard"

echo "Creating $database in $pool..."


az sql db create --name $database --resource-group $resourceGroup --server $server --elastic-pool $pool

echo "Establishing geo-replication for $database between $server and $secondaryServer..."


az sql db replica create --name $database --partner-server $secondaryServer --resource-group $resourceGroup
--server $server --elastic-pool $secondaryPool --partner-resource-group $failoverResourceGroup

echo "Initiating failover to $secondaryServer..."


az sql db replica set-primary --name $database --resource-group $failoverResourceGroup --server
$secondaryServer

echo "Monitoring health of $database on $secondaryServer..."


az sql db replica list-links --name $database --resource-group $failoverResourceGroup --server
$secondaryServer

Clean up resources
Use the following command to remove the resource group and all resources associated with it using the az
group delete command - unless you have an ongoing need for these resources. Some of these resources may
take a while to create, as well as to delete.
az group delete --name $resourceGroup
az group delete --name $failoverResourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.

COMMAND | DESCRIPTION

az sql elastic-pool | Elastic pool commands.
az sql db replica | Database replication commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Configure a failover group for a group of databases
in Azure SQL Database using the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example configures a failover group for a group of databases in Azure SQL Database and fails them over to the secondary server.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud
Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure
CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter
to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account you signed in with. Use the following script to
sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't
have an Azure subscription, create an Azure free account before you begin.

subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'


For more information, see set active subscription or log in interactively
Run the script

# Configure a failover group for a group of databases in Azure SQL Database

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="setup-geodr-and-failover-database-failover-group"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
failoverGroup="msdocs-azuresql-failover-group-$randomIdentifier"
failoverLocation="Central US"
failoverResourceGroup="msdocs-azuresql-failover-rg-$randomIdentifier"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"

echo "Using resource groups $resourceGroup and $failoverResourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location and $failoverResourceGroup in $failoverLocation..."


az group create --name $resourceGroup --location "$location" --tags $tag
az group create --name $failoverResourceGroup --location "$failoverLocation"

echo "Creating $server in $location and $secondaryServer in $failoverLocation..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password
az sql server create --name $secondaryServer --resource-group $failoverResourceGroup --location "$failoverLocation" --admin-user $login --admin-password $password

echo "Creating $database..."


az sql db create --name $database --resource-group $resourceGroup --server $server --service-objective S0

echo "Creating failover group $failoverGroup..."


az sql failover-group create --name $failoverGroup --partner-server $secondaryServer --resource-group $resourceGroup --server $server --partner-resource-group $failoverResourceGroup

echo "Initiating failover..."


az sql failover-group set-primary --name $failoverGroup --resource-group $failoverResourceGroup --server $secondaryServer

echo "Monitoring failover..."


az sql failover-group show --name $failoverGroup --resource-group $resourceGroup --server $server

echo "Removing replication on $database..."


az sql failover-group delete --name $failoverGroup --resource-group $failoverResourceGroup --server $secondaryServer
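
az sql failover-group show returns the full failover group resource. If you only want to confirm which role the queried server currently holds, you can filter the output with a JMESPath query; the following is a minimal sketch that assumes a replicationRole property in the command output:

# Print only the replication role (Primary or Secondary) of the failover group as seen from $server
az sql failover-group show --name $failoverGroup --resource-group $resourceGroup --server $server --query replicationRole -o tsv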

Clean up resources
Use the following az group delete commands to remove the resource groups and all resources associated with them, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $failoverResourceGroup -y


az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION
az sql failover-group create | Creates a failover group.
az sql failover-group set-primary | Sets the primary of the failover group by failing over all databases from the current primary server.
az sql failover-group show | Gets a failover group.
az sql failover-group delete | Deletes a failover group.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Add a database to a failover group using the Azure
CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example creates a database in Azure SQL Database, creates a failover group, adds the
database to it, and tests failover.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively.


Run the script

# Add an Azure SQL Database to an auto-failover group

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="add-single-db-to-failover-group-az-cli"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
failoverGroup="msdocs-azuresql-failover-group-$randomIdentifier"
failoverLocation="Central US"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $server in $location..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password

echo "Creating $database on $server..."


az sql db create --name $database --resource-group $resourceGroup --server $server --sample-name AdventureWorksLT
echo "Creating $secondaryServer in $failoverLocation..."
az sql server create --name $secondaryServer --resource-group $resourceGroup --location "$failoverLocation" --admin-user $login --admin-password $password
echo "Creating $failoverGroup between $server and $secondaryServer..."
az sql failover-group create --name $failoverGroup --partner-server $secondaryServer --resource-group $resourceGroup --server $server --failover-policy Automatic --grace-period 2 --add-db $database
echo "Confirming the role of each server in the failover group..." # note ReplicationRole property
az sql failover-group show --name $failoverGroup --resource-group $resourceGroup --server $server
echo "Failing over to $secondaryServer..."
az sql failover-group set-primary --name $failoverGroup --resource-group $resourceGroup --server $secondaryServer

echo "Confirming role of $secondaryServer is now primary..." # note ReplicationRole property


az sql failover-group show --name $failoverGroup --resource-group $resourceGroup --server $server
echo "Failing back to $server...."
az sql failover-group set-primary --name $failoverGroup --resource-group $resourceGroup --server $server
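
To double-check that $database is still part of the failover group after failing back, one option is to list the database IDs the group contains. This is a minimal sketch that assumes the databases property returned by az sql failover-group show:

# List the databases that belong to the failover group
az sql failover-group show --name $failoverGroup --resource-group $resourceGroup --server $server --query databases -o tsv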

Clean up resources
Use the following az group delete command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $resourceGroup


Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION
az sql db | Database commands.
az sql failover-group | Failover group commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Add an Azure SQL Database elastic pool to a
failover group using the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example creates a single database, adds it to an elastic pool, creates a failover group, and
tests failover.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively.


Run the script

# Add an Azure SQL Database elastic pool to a failover group


# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="add-elastic-pool-to-failover-group-az-cli"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
pool="msdocs-azuresql-pool-$randomIdentifier"
failoverGroup="msdocs-azuresql-failover-group-$randomIdentifier"
failoverLocation="Central US"
secondaryServer="msdocs-azuresql-secondary-server-$randomIdentifier"
echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $server in $location..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password

echo "Creating $database on $server..."


az sql db create --name $database --resource-group $resourceGroup --server $server --sample-name AdventureWorksLT

echo "Creating $pool on $server..."


az sql elastic-pool create --name $pool --resource-group $resourceGroup --server $server
echo "Adding $database to $pool..."
az sql db update --elastic-pool $pool --name $database --resource-group $resourceGroup --server $server
echo "Creating $secondaryServer in $failoverLocation..."
az sql server create --name $secondaryServer --resource-group $resourceGroup --location "$failoverLocation" --admin-user $login --admin-password $password
echo "Creating $pool on $secondaryServer..."
az sql elastic-pool create --name $pool --resource-group $resourceGroup --server $secondaryServer
echo "Creating $failoverGroup between $server and $secondaryServer..."
az sql failover-group create --name $failoverGroup --partner-server $secondaryServer --resource-group $resourceGroup --server $server --failover-policy Automatic --grace-period 2
databaseId=$(az sql elastic-pool list-dbs --name $pool --resource-group $resourceGroup --server $server --query [0].name -o json | tr -d '"')
echo "Adding $database to $failoverGroup..."
az sql failover-group update --name $failoverGroup --add-db $databaseId --resource-group $resourceGroup --server $server
echo "Confirming the role of each server in the failover group..." # note ReplicationRole property
az sql failover-group show --name $failoverGroup --resource-group $resourceGroup --server $server
echo "Failing over to $secondaryServer..."
az sql failover-group set-primary --name $failoverGroup --resource-group $resourceGroup --server $secondaryServer

echo "Confirming role of $secondaryServer is now primary..." # note ReplicationRole property


az sql failover-group show --name $failoverGroup --resource-group $resourceGroup --server $server
echo "Failing back to $server...."
az sql failover-group set-primary --name $failoverGroup --resource-group $resourceGroup --server $server
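
If you later want to take the pooled database out of the failover group without deleting the group itself, az sql failover-group update also accepts a --remove-db argument; a minimal sketch mirroring the --add-db step above:

# Remove the database from the failover group (the database itself is not deleted)
az sql failover-group update --name $failoverGroup --remove-db $databaseId --resource-group $resourceGroup --server $server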

Clean up resources
Use the following az group delete command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION
az sql elastic-pool | Elastic pool commands.
az sql failover-group | Failover group commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Backup an Azure SQL single database to an Azure
storage container using the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI example backs up a database in SQL Database to an Azure storage container.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively.


Run the script

# Backup an Azure SQL single database to an Azure storage container

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="backup-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
storage="msdocsazuresql$randomIdentifier"
container="msdocs-azuresql-container-$randomIdentifier"
bacpac="backup.bacpac"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $storage..."


az storage account create --name $storage --resource-group $resourceGroup --location "$location" --sku Standard_LRS

echo "Creating $container on $storage..."


key=$(az storage account keys list --account-name $storage --resource-group $resourceGroup -o json --query [0].value | tr -d '"')
az storage container create --name $container --account-key $key --account-name $storage

echo "Creating $server in $location..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password
az sql server firewall-rule create --resource-group $resourceGroup --server $server --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

echo "Creating $database..."


az sql db create --name $database --resource-group $resourceGroup --server $server --edition GeneralPurpose --sample-name AdventureWorksLT

echo "Backing up $database..."


az sql db export --admin-password $password --admin-user $login --storage-key $key --storage-key-type StorageAccessKey --storage-uri "https://$storage.blob.core.windows.net/$container/$bacpac" --name $database --resource-group $resourceGroup --server $server
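
To confirm that the export produced a BACPAC blob, you can list the blobs in the container with the storage key already captured in $key; a minimal sketch:

# List the blobs in the container to verify that the BACPAC file was written
az storage blob list --container-name $container --account-name $storage --account-key $key --query "[].name" -o tsv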

Clean up resources
Use the following az group delete command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $resourceGroup


Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | NOTES
az sql server | Server commands.
az sql db | Database commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Restore a single database in Azure SQL Database to
an earlier point in time using the Azure CLI
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI example restores a single database in Azure SQL Database to a specific point in time.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.

subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively.


Run the script
# Restore a single database in Azure SQL Database to an earlier point in time

# Use Bash rather than Cloud Shell due to its timeout at 20 minutes when no interactive activity
# In Windows, run Bash in a Docker container to sync time zones between Azure and Bash.

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-sql-rg-$randomIdentifier"
tag="restore-database"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
restoreServer="restoreServer-$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $server in $location..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password

echo "Creating $database on $server..."


az sql db create --resource-group $resourceGroup --server $server --name $database --service-objective S0

# Sleeping commands to wait long enough for automatic backup to be created


echo "Sleeping..."
sleep 30m

# Restore a server from backup to a new server


# To specify a specific point-in-time (in UTC) to restore from, use the ISO8601 format:
# restorePoint="2021-07-09T13:10:00Z"
restorePoint=$(date +%s)
restorePoint=$(expr $restorePoint - 60)
restorePoint=$(date -d @$restorePoint +"%Y-%m-%dT%T")
echo $restorePoint

echo "Restoring to $restoreServer"


az sql db restore --dest-name $restoreServer --edition Standard --name $database --resource-group $resourceGroup --server $server --service-objective S0 --time $restorePoint
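
After the restore completes, you can check that the new database is online. Note that, despite the variable name, $restoreServer is used by the script as the name of the restored database on the same server; a minimal sketch:

# Check the status of the restored database
az sql db show --name $restoreServer --resource-group $resourceGroup --server $server --query status -o tsv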

Clean up resources
Use the following az group delete command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION
az sql db restore | Restore database command.


Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Copy a database in Azure SQL Database to a new
server using the Azure CLI
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example creates a copy of an existing database in a new server.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
Launch Azure Cloud Shell
The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common
Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com.
When Cloud Shell opens, verify that Bash is selected for your environment. Subsequent sessions will use Azure CLI in a Bash environment. Select Copy to copy the blocks of code, paste them into Cloud Shell, and press Enter to run them.
Sign in to Azure
Cloud Shell is automatically authenticated under the initial account it was signed in with. Use the following script to sign in using a different subscription, replacing <subscriptionId> with your Azure subscription ID. If you don't have an Azure subscription, create an Azure free account before you begin.
subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively.


Run the script

# Copy a database in Azure SQL Database to a new server

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="copy-database-to-new-server"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
targetResourceGroup="msdocs-azuresql-targetrg-$randomIdentifier"
targetLocation="Central US"
targetServer="msdocs-azuresql-targetServer-$randomIdentifier"
targetDatabase="msdocs-azuresql-targetDatabase-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in location $location and $targetResourceGroup in $targetLocation..."


az group create --name $resourceGroup --location "$location" --tags $tag
az group create --name $targetResourceGroup --location "$targetLocation"

echo "Creating $server in $location and $targetServer in $targetLocation..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password
az sql server create --name $targetServer --resource-group $targetResourceGroup --location "$targetLocation" --admin-user $login --admin-password $password

echo "Creating $database on $server..."


az sql db create --name $database --resource-group $resourceGroup --server $server --service-objective S0

echo "Copying $database on $server to $targetDatabase on $targetServer..."


az sql db copy --dest-name $targetDatabase --dest-resource-group $targetResourceGroup --dest-server $targetServer --name $database --resource-group $resourceGroup --server $server
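
To confirm that the copy exists on the target server, you can list that server's databases; a minimal sketch:

# List databases on the target server; $targetDatabase should appear alongside master
az sql db list --resource-group $targetResourceGroup --server $targetServer --query "[].name" -o tsv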

Clean up resources
Use the following az group delete commands to remove the resource groups and all resources associated with them, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $targetResourceGroup


az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION
az sql db copy | Creates a copy of a database that uses the snapshot at the current time.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Import a BACPAC file into a database in SQL
Database using the Azure CLI
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Database


This Azure CLI script example imports a database from a .bacpac file into a database in SQL Database.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.

subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively.


Run the script
# Import a BACPAC file into a database in SQL Database
# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="import-from-bacpac"
server="msdocs-azuresql-server-$randomIdentifier"
database="msdocsazuresqldb$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"
storage="msdocsazuresql$randomIdentifier"
container="msdocs-azuresql-container-$randomIdentifier"
bacpac="sample.bacpac"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $storage..."


az storage account create --name $storage --resource-group $resourceGroup --location "$location" --sku Standard_LRS

echo "Creating $container on $storage..."


key=$(az storage account keys list --account-name $storage --resource-group $resourceGroup -o json --query [0].value | tr -d '"')

az storage container create --name $container --account-key $key --account-name $storage #--public-access container

echo "Downloading sample database..."


az rest --uri https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Standard.bacpac --output-file $bacpac -m get --skip-authorization-header

echo "Uploading sample database to $container..."


az storage blob upload --container-name $container --file $bacpac --name $bacpac --account-key $key --account-name $storage

echo "Creating $server in $location..."


az sql server create --name $server --resource-group $resourceGroup --location "$location" --admin-user $login --admin-password $password
az sql server firewall-rule create --resource-group $resourceGroup --server $server --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

echo "Creating $database..."


az sql db create --name $database --resource-group $resourceGroup --server $server --edition "GeneralPurpose"

echo "Importing sample database from $container to $database..."


az sql db import --admin-password $password --admin-user $login --storage-key $key --storage-key-type StorageAccessKey --storage-uri https://$storage.blob.core.windows.net/$container/$bacpac --name $database --resource-group $resourceGroup --server $server
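
Once the import finishes, you can check how much space the imported data occupies; a minimal sketch that assumes the usage entries reported by az sql db list-usages:

# Show space usage for the imported database
az sql db list-usages --name $database --resource-group $resourceGroup --server $server -o table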

Clean up resources
Use the following az group delete command to remove the resource group and all resources associated with it, unless you have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND | DESCRIPTION
az sql server | Server commands.
az sql db import | Database import command.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Azure PowerShell samples for Azure SQL Database
and Azure SQL Managed Instance
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and Azure SQL Managed Instance enable you to configure your databases, instances, and
pools using Azure PowerShell.
If you don't have an Azure subscription, create an Azure free account before you begin.

Use Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can
use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell
preinstalled commands to run the code in this article, without having to install anything on your local
environment.
To start Azure Cloud Shell:

Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.
Go to https://shell.azure.com, or select the Launch Cloud Shell button to open Cloud Shell in your browser.
Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:


1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.
If you choose to install and use PowerShell locally, this tutorial requires Az PowerShell 1.4.0 or later. If you need to upgrade, see Install Azure PowerShell module. If you are running PowerShell locally, you also need to run Connect-AzAccount to create a connection with Azure.

Azure SQL Database


Azure SQL Managed Instance

The following table includes links to sample Azure PowerShell scripts for Azure SQL Database.
LINK | DESCRIPTION

Create and configure single databases and elastic pools
Create a single database and configure a server-level firewall rule | This PowerShell script creates a single database and configures a server-level IP firewall rule.
Create elastic pools and move pooled databases | This PowerShell script creates elastic pools, moves pooled databases, and changes compute sizes.

Configure geo-replication and failover
Configure and fail over a single database using active geo-replication | This PowerShell script configures active geo-replication for a single database and fails it over to the secondary replica.
Configure and fail over a pooled database using active geo-replication | This PowerShell script configures active geo-replication for a database in an elastic pool and fails it over to the secondary replica.

Configure a failover group
Configure a failover group for a single database | This PowerShell script creates a database and a failover group, adds the database to the failover group, and tests failover to the secondary server.
Configure a failover group for an elastic pool | This PowerShell script creates a database, adds it to an elastic pool, adds the elastic pool to the failover group, and tests failover to the secondary server.

Scale a single database and an elastic pool
Scale a single database | This PowerShell script monitors the performance metrics of a single database, scales it to a higher compute size, and creates an alert rule on one of the performance metrics.
Scale an elastic pool | This PowerShell script monitors the performance metrics of an elastic pool, scales it to a higher compute size, and creates an alert rule on one of the performance metrics.

Restore, copy, and import a database
Restore a database | This PowerShell script restores a database from a geo-redundant backup and restores a deleted database to the latest backup.
Copy a database to a new server | This PowerShell script creates a copy of an existing database in a new server.
Import a database from a bacpac file | This PowerShell script imports a database into Azure SQL Database from a bacpac file.

Sync data between databases
Sync data between databases | This PowerShell script configures Data Sync to sync between multiple databases in Azure SQL Database.
Sync data between SQL Database and SQL Server on-premises | This PowerShell script configures Data Sync to sync between a database in Azure SQL Database and a SQL Server on-premises database.
Update the SQL Data Sync sync schema | This PowerShell script adds or removes items from the Data Sync sync schema.

Learn more about the Single-database Azure PowerShell API.

Additional resources
The examples listed on this page use the PowerShell cmdlets for creating and managing Azure SQL resources.
Additional cmdlets for running queries and performing many database tasks are located in the sqlserver
module. For more information, see SQL Server PowerShell.
Azure Resource Manager templates for Azure SQL
Database & SQL Managed Instance
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure Resource Manager templates enable you to define your infrastructure as code and deploy your solutions
to the Azure cloud for Azure SQL Database and Azure SQL Managed Instance.

Azure SQL Database


Azure SQL Managed Instance

The following table includes links to Azure Resource Manager templates for Azure SQL Database.

LINK | DESCRIPTION
SQL Database | This Azure Resource Manager template creates a single database in Azure SQL Database and configures server-level IP firewall rules.
Server | This Azure Resource Manager template creates a server for Azure SQL Database.
Elastic pool | This template allows you to deploy an elastic pool and to assign databases to it.
Failover groups | This template creates two servers, a single database, and a failover group in Azure SQL Database.
Threat Detection | This template allows you to deploy a server and a set of databases with Threat Detection enabled, with an email address for alerts for each database. Threat Detection is part of the SQL Advanced Threat Protection (ATP) offering and provides a layer of security that responds to potential threats over servers and databases.
Auditing to Azure Blob storage | This template allows you to deploy a server with auditing enabled to write audit logs to a Blob storage. Auditing for Azure SQL Database tracks database events and writes them to an audit log that can be placed in your Azure storage account, OMS workspace, or Event Hubs.
Auditing to Azure Event Hub | This template allows you to deploy a server with auditing enabled to write audit logs to an existing event hub. In order to send audit events to Event Hubs, set auditing settings with Enabled State, and set IsAzureMonitorTargetEnabled as true. Also, configure Diagnostic Settings with the SQLSecurityAuditEvents log category on the master database (for server-level auditing). Auditing tracks database events and writes them to an audit log that can be placed in your Azure storage account, OMS workspace, or Event Hubs.
Azure Web App with SQL Database | This sample creates a free Azure web app and a database in Azure SQL Database at the "Basic" service level.
Azure Web App and Redis Cache with SQL Database | This template creates a web app, Redis Cache, and database in the same resource group and creates two connection strings in the web app for the database and Redis Cache.
Import data from Blob storage using ADF V2 | This Azure Resource Manager template creates an instance of Azure Data Factory V2 that copies data from Azure Blob storage to SQL Database.
HDInsight cluster with a database | This template allows you to create an HDInsight cluster, a logical SQL server, a database, and two tables. This template is used by the Use Sqoop with Hadoop in HDInsight article.
Azure Logic App that runs a SQL Stored Procedure on a schedule | This template allows you to create a logic app that will run a SQL stored procedure on schedule. Any arguments for the procedure can be put into the body section of the template.
Provision server with Azure AD-only authentication enabled | This template creates a SQL logical server with an Azure AD admin set for the server and Azure AD-only authentication enabled.
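
Any of these templates can also be deployed with the Azure CLI. The following is a minimal sketch; the resource group, template file name, and parameter names are placeholders for whichever template you download:

# Deploy a downloaded quickstart template into an existing resource group
az deployment group create --resource-group <resourceGroup> --template-file azuredeploy.json --parameters administratorLogin=<login> administratorLoginPassword=<password>
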
Azure Resource Graph sample queries for Azure
SQL Database
7/12/2022 • 2 minutes to read

This page is a collection of Azure Resource Graph sample queries for Azure SQL Database. For a complete list of
Azure Resource Graph samples, see Resource Graph samples by Category and Resource Graph samples by
Table.

Sample queries
List SQL Databases and their elastic pools
The following query uses a leftouter join to bring together SQL Database resources and their related elastic pools, if they have any.

Resources
| where type =~ 'microsoft.sql/servers/databases'
| project databaseId = id, databaseName = name, elasticPoolId = tolower(tostring(properties.elasticPoolId))
| join kind=leftouter (
Resources
| where type =~ 'microsoft.sql/servers/elasticpools'
| project elasticPoolId = tolower(id), elasticPoolName = name, elasticPoolState = properties.state)
on elasticPoolId
| project-away elasticPoolId1

Azure CLI
Azure PowerShell
Portal

az graph query -q "Resources | where type =~ 'microsoft.sql/servers/databases' | project databaseId = id, databaseName = name, elasticPoolId = tolower(tostring(properties.elasticPoolId)) | join kind=leftouter ( Resources | where type =~ 'microsoft.sql/servers/elasticpools' | project elasticPoolId = tolower(id), elasticPoolName = name, elasticPoolState = properties.state) on elasticPoolId | project-away elasticPoolId1"

Next steps
Learn more about the query language.
Learn more about how to explore resources.
See samples of Starter language queries.
See samples of Advanced language queries.
What is Azure SQL Managed Instance?
7/12/2022 • 16 minutes to read

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL
Server database engine compatibility with all the benefits of a fully managed and evergreen platform as a
service. SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition)
database engine, providing a native virtual network (VNet) implementation that addresses common security
concerns, and a business model favorable for existing SQL Server customers. SQL Managed Instance allows
existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal
application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities
(automatic patching and version updates, automated backups, high availability) that drastically reduce
management overhead and TCO.
If you're new to Azure SQL Managed Instance, check out the Azure SQL Managed Instance video from our in-depth Azure SQL video series.

IMPORTANT
For a list of regions where SQL Managed Instance is currently available, see Supported regions.

The following diagram outlines key features of SQL Managed Instance:

Azure SQL Managed Instance is designed for customers looking to migrate a large number of apps from an on-
premises or IaaS, self-built, or ISV provided environment to a fully managed PaaS cloud environment, with as
low a migration effort as possible. Using the fully automated Azure Data Migration Service, customers can lift
and shift their existing SQL Server instance to SQL Managed Instance, which offers compatibility with SQL
Server and complete isolation of customer instances with native VNet support. For more information on
migration options and tools, see Migration overview: SQL Server to Azure SQL Managed Instance.
With Software Assurance, you can exchange your existing licenses for discounted rates on SQL Managed
Instance using the Azure Hybrid Benefit for SQL Server. SQL Managed Instance is the best migration destination
in the cloud for SQL Server instances that require high security and a rich programmability surface.
Key features and capabilities
SQL Managed Instance combines the best features that are available both in Azure SQL Database and the SQL
Server database engine.

IMPORTANT
SQL Managed Instance runs with all of the features of the most recent version of SQL Server, including online operations,
automatic plan corrections, and other enterprise performance enhancements. A comparison of the features available is
explained in Feature comparison: Azure SQL Managed Instance versus SQL Server.

PaaS benefits
No hardware purchasing and management
No management overhead for managing underlying infrastructure
Quick provisioning and service scaling
Automated patching and version upgrade
Integration with other PaaS data services

Business continuity
99.99% uptime SLA
Built-in high availability
Data protected with automated backups
Customer configurable backup retention period
User-initiated backups
Point-in-time database restore capability

Security and compliance
Isolated environment (VNet integration, single tenant service, dedicated compute and storage)
Transparent data encryption (TDE)
Azure Active Directory (Azure AD) authentication, single sign-on support
Azure AD server principals (logins)
Windows Authentication for Azure AD principals (Preview)
Adheres to the same compliance standards as Azure SQL Database
SQL auditing
Advanced Threat Protection

Management
Azure Resource Manager API for automating service provisioning and scaling
Azure portal functionality for manual service provisioning and scaling
Data Migration Service

IMPORTANT
Azure SQL Managed Instance has been certified against a number of compliance standards. For more information, see the
Microsoft Azure Compliance Offerings, where you can find the most current list of SQL Managed Instance compliance
certifications, listed under SQL Database.

The key features of SQL Managed Instance are shown in the following table:

FEATURE | DESCRIPTION
SQL Server version/build | SQL Server database engine (latest stable)
Managed automated backups | Yes
Built-in instance and database monitoring and metrics | Yes
Automatic software patching | Yes
The latest database engine features | Yes
Number of data files (ROWS) per database | Multiple
Number of log files (LOG) per database | 1
VNet - Azure Resource Manager deployment | Yes
VNet - Classic deployment model | No
Portal support | Yes
Built-in Integration Service (SSIS) | No - SSIS is a part of the Azure Data Factory PaaS
Built-in Analysis Service (SSAS) | No - SSAS is a separate PaaS
Built-in Reporting Service (SSRS) | No - use Power BI paginated reports instead or host SSRS on an Azure VM. While SQL Managed Instance cannot run SSRS as a service, it can host SSRS catalog databases for a reporting server installed on an Azure virtual machine, using SQL Server authentication.

vCore-based purchasing model


The vCore-based purchasing model for SQL Managed Instance gives you flexibility, control, transparency, and a
straightforward way to translate on-premises workload requirements to the cloud. This model allows you to
change compute, memory, and storage based upon your workload needs. The vCore model is also eligible for
up to 55 percent savings with the Azure Hybrid Benefit for SQL Server.
In the vCore model, you can choose hardware configurations as follows:
Standard Series (Gen5) logical CPUs are based on Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake), and Intel® 8272CL (Cascade Lake) 2.5 GHz processors, with 5.1 GB of RAM per CPU vCore, fast NVMe SSD, hyper-threaded logical cores, and compute sizes between 4 and 80 cores.
Premium Series logical CPUs are based on Intel® 8370C (Ice Lake) 2.8 GHz processors, with 7 GB of RAM per CPU vCore, fast NVMe SSD, hyper-threaded logical cores, and compute sizes between 4 and 80 cores.
Premium Series Memory-Optimized logical CPUs are based on Intel® 8370C (Ice Lake) 2.8 GHz processors, with 13.6 GB of RAM per CPU vCore, fast NVMe SSD, hyper-threaded logical cores, and compute sizes between 4 and 64 cores.
Find more information about the difference between hardware configurations in SQL Managed Instance resource limits.
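
As a rough illustration of how these choices map onto a deployment, the hardware family, vCore count, and storage size are parameters of az sql mi create. The following is a minimal sketch, assuming an existing virtual network subnet; the instance name, resource group, password, and subnet resource ID are placeholders, and deploying the first instance into a subnet involves a long provisioning duration:

# Create a General Purpose managed instance on Standard Series (Gen5) hardware with 8 vCores and 256 GB of storage
az sql mi create --name <managedInstanceName> --resource-group <resourceGroup> --location eastus --admin-user azureuser --admin-password <password> --subnet <subnetResourceId> --edition GeneralPurpose --family Gen5 --capacity 8 --storage 256GB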

Service tiers
SQL Managed Instance is available in two service tiers:
General purpose: Designed for applications with typical performance and I/O latency requirements.
Business Critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.
Both service tiers guarantee 99.99% availability and enable you to independently select storage size and
compute capacity. For more information on the high availability architecture of Azure SQL Managed Instance,
see High availability and Azure SQL Managed Instance.
General Purpose service tier
The following list describes key characteristics of the General Purpose service tier:
Designed for the majority of business applications with typical performance requirements
High-performance Azure Blob storage (16 TB)
Built-in high availability based on reliable Azure Blob storage and Azure Service Fabric
For more information, see Storage layer in the General Purpose tier and Storage performance best practices and
considerations for SQL Managed Instance (General Purpose).
Find more information about the difference between service tiers in SQL Managed Instance resource limits.
Business Critical service tier
The Business Critical service tier is built for applications with high I/O requirements. It offers the highest
resilience to failures using several isolated replicas.
The following list outlines the key characteristics of the Business Critical service tier:
Designed for business applications with highest performance and HA requirements
Comes with super-fast local SSD storage (up to 4 TB on Standard Series (Gen5), up to 5.5 TB on Premium
Series and up to 16 TB on Premium Series Memory-Optimized)
Built-in high availability based on Always On availability groups and Azure Service Fabric
Built-in additional read-only database replica that can be used for reporting and other read-only workloads
In-Memory OLTP that can be used for workload with high-performance requirements
Find more information about the differences between service tiers in SQL Managed Instance resource limits.

Management operations
Azure SQL Managed Instance provides management operations that you can use to automatically deploy new
managed instances, update instance properties, and delete instances when no longer needed. A detailed explanation of management operations can be found on the managed instance management operations overview page.

Advanced security and compliance


SQL Managed Instance comes with advanced security features provided by the Azure platform and the SQL
Server database engine.
Security isolation
SQL Managed Instance provides additional security isolation from other tenants on the Azure platform. Security
isolation includes:
Native virtual network implementation and connectivity to your on-premises environment using Azure
ExpressRoute or VPN Gateway.
In a default deployment, the SQL endpoint is exposed only through a private IP address, allowing safe
connectivity from private Azure or hybrid networks.
Single-tenant with dedicated underlying infrastructure (compute, storage).
The following diagram outlines various connectivity options for your applications:
To learn more details about VNet integration and networking policy enforcement at the subnet level, see VNet
architecture for managed instances and Connect your application to a managed instance.

IMPORTANT
Place multiple managed instances in the same subnet, wherever that is allowed by your security requirements, as that will
bring you additional benefits. Co-locating instances in the same subnet will significantly simplify networking infrastructure
maintenance and reduce instance provisioning time, since a long provisioning duration is associated with the cost of
deploying the first managed instance in a subnet.

Security features
Azure SQL Managed Instance provides a set of advanced security features that can be used to protect your data.
SQL Managed Instance auditing tracks database events and writes them to an audit log file placed in your
Azure storage account. Auditing can help you maintain regulatory compliance, understand database activity,
and gain insight into discrepancies and anomalies that could indicate business concerns or suspected
security violations.
Data encryption in motion - SQL Managed Instance secures your data by providing encryption for data in
motion using Transport Layer Security. In addition to Transport Layer Security, SQL Managed Instance offers
protection of sensitive data in flight, at rest, and during query processing with Always Encrypted. Always
Encrypted offers data security against breaches involving the theft of critical data. For example, with Always
Encrypted, credit card numbers are stored encrypted in the database always, even during query processing,
allowing decryption at the point of use by authorized staff or applications that need to process that data.
Advanced Threat Protection complements auditing by providing an additional layer of security intelligence
built into the service that detects unusual and potentially harmful attempts to access or exploit databases.
You are alerted about suspicious activities, potential vulnerabilities, and SQL injection attacks, as well as
anomalous database access patterns. Advanced Threat Protection alerts can be viewed from Microsoft
Defender for Cloud. They provide details of suspicious activity and recommend action on how to investigate
and mitigate the threat.
Dynamic data masking limits sensitive data exposure by masking it to non-privileged users. Dynamic data
masking helps prevent unauthorized access to sensitive data by enabling you to designate how much of the
sensitive data to reveal with minimal impact on the application layer. It's a policy-based security feature that
hides the sensitive data in the result set of a query over designated database fields, while the data in the
database is not changed.
Row-level security (RLS) enables you to control access to rows in a database table based on the
characteristics of the user executing a query (such as by group membership or execution context). RLS
simplifies the design and coding of security in your application. RLS enables you to implement restrictions on
data row access. For example, ensuring that workers can access only the data rows that are pertinent to their
department, or restricting a data access to only the relevant data.
Transparent data encryption (TDE) encrypts SQL Managed Instance data files, known as encrypting data at
rest. TDE performs real-time I/O encryption and decryption of the data and log files. The encryption uses a
database encryption key (DEK), which is stored in the database boot record for availability during recovery.
You can protect all your databases in a managed instance with transparent data encryption. TDE is proven
encryption-at-rest technology in SQL Server that is required by many compliance standards to protect
against theft of storage media.
Migration of an encrypted database to SQL Managed Instance is supported via Azure Database Migration
Service or native restore. If you plan to migrate an encrypted database using native restore, migration of the
existing TDE certificate from the SQL Server instance to SQL Managed Instance is a required step. For more
information about migration options, see SQL Server to Azure SQL Managed Instance Guide.

Azure Active Directory integration


SQL Managed Instance supports traditional SQL Server database engine logins and logins integrated with Azure
AD. Azure AD server principals (logins) are an Azure cloud version of on-premises database logins that you are
using in your on-premises environment. Azure AD server principals (logins) enable you to specify users and
groups from your Azure AD tenant as true instance-scoped principals, capable of performing any instance-level
operation, including cross-database queries within the same managed instance.
New syntax, FROM EXTERNAL PROVIDER, is introduced to create Azure AD server principals (logins). For
more information on the syntax, see CREATE LOGIN, and review the Provision an Azure Active Directory
administrator for SQL Managed Instance article.
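For example, a minimal sketch of this syntax (the user and group names are placeholders for principals in your own Azure AD tenant):
-- Create an instance-scoped login for an individual Azure AD user.
CREATE LOGIN [nativeuser@aadsqlmi.onmicrosoft.com] FROM EXTERNAL PROVIDER;
-- Create an instance-scoped login for an Azure AD group.
CREATE LOGIN [mygroup] FROM EXTERNAL PROVIDER;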
Azure Active Directory integration and multi-factor authentication
SQL Managed Instance enables you to centrally manage identities of database users and other Microsoft
services with Azure Active Directory integration. This capability simplifies permission management and
enhances security. Azure Active Directory supports multi-factor authentication to increase data and application
security while supporting a single sign-on process.
Authentication
SQL Managed Instance authentication refers to how users prove their identity when connecting to the database.
SQL Managed Instance supports three types of authentication:
SQL Authentication:
This authentication method uses a username and password.
Azure Active Directory Authentication:
This authentication method uses identities managed by Azure Active Directory and is supported for
managed and integrated domains. Use Active Directory authentication (integrated security) whenever
possible.
Windows Authentication for Azure AD Principals (Preview):
Kerberos authentication for Azure AD Principals (Preview) enables Windows Authentication for Azure
SQL Managed Instance. Windows Authentication for managed instances empowers customers to move
existing services to the cloud while maintaining a seamless user experience and provides the basis for
infrastructure modernization.
Authorization
Authorization refers to what a user can do within a database in Azure SQL Managed Instance, and is controlled
by your user account's database role memberships and object-level permissions. SQL Managed Instance has the
same authorization capabilities as SQL Server 2017.
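As a sketch of the standard authorization constructs involved (the user name and role assignment below are illustrative assumptions, not required steps):
-- Create a database user mapped to an existing Azure AD server login and grant read access.
CREATE USER [appreader@contoso.com] FROM LOGIN [appreader@contoso.com];
ALTER ROLE db_datareader ADD MEMBER [appreader@contoso.com];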

Database migration
SQL Managed Instance targets user scenarios with mass database migration from on-premises or IaaS database
implementations. SQL Managed Instance supports several database migration options that are discussed in the
migration guides. See Migration overview: SQL Server to Azure SQL Managed Instance for more information.
Backup and restore
The migration approach leverages SQL backups to Azure Blob storage. Backups stored in an Azure storage blob
can be directly restored into a managed instance using the T-SQL RESTORE command.
For a quickstart showing how to restore the Wide World Importers - Standard database backup file, see
Restore a backup file to a managed instance. In that quickstart, you upload a backup file to Azure Blob storage
and secure it by using a shared access signature (SAS).
For information about restore from URL, see Native RESTORE from URL.
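A minimal sketch of the restore-from-URL pattern; the storage account, container, SAS token, and database name below are placeholders for your own values:
-- Create a credential whose name matches the container URL; the secret is the SAS token
-- (without the leading '?'). All values shown here are placeholders.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'sv=2021-08-06&ss=b&srt=sco&sp=rl&sig=<signature>';
-- Restore the database directly from the backup file in Azure Blob storage.
RESTORE DATABASE WideWorldImporters
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/WideWorldImporters-Standard.bak';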

IMPORTANT
Backups from a managed instance can only be restored to another managed instance. They cannot be restored to a SQL
Server instance or to Azure SQL Database.

Database Migration Service


Azure Database Migration Service is a fully managed service designed to enable seamless migrations from
multiple database sources to Azure data platforms with minimal downtime. This service streamlines the tasks
required to move existing third-party and SQL Server databases to Azure SQL Database, Azure SQL Managed
Instance, and SQL Server on Azure VM. See How to migrate your on-premises database to SQL Managed
Instance using Database Migration Service.

SQL features supported


SQL Managed Instance aims to deliver close to 100% surface area compatibility with the latest SQL Server
version through a staged release plan. For a feature comparison list, see SQL Managed Instance feature
comparison, and for a list of T-SQL differences in SQL Managed Instance versus SQL Server, see SQL Managed
Instance T-SQL differences from SQL Server.
SQL Managed Instance supports backward compatibility to SQL Server 2008 databases. Direct migration from
SQL Server 2005 database servers is supported, and the compatibility level for migrated SQL Server 2005
databases is updated to SQL Server 2008.
[Diagram: surface area compatibility in SQL Managed Instance.]
Key differences between SQL Server on-premises and SQL Managed Instance
SQL Managed Instance benefits from being always-up-to-date in the cloud, which means that some features in
SQL Server may be obsolete, be retired, or have alternatives. There are specific cases when tools need to
recognize that a particular feature works in a slightly different way or that the service is running in an
environment you do not fully control.
Some key differences:
High availability is built in and pre-configured using technology similar to Always On availability groups.
There are only automated backups and point-in-time restore. Customers can initiate copy-only backups that
do not interfere with the automatic backup chain.
Specifying full physical paths is unsupported, so all corresponding scenarios have to be supported
differently: RESTORE DATABASE does not support WITH MOVE, CREATE DATABASE doesn't allow physical paths, and
BULK INSERT works with Azure blobs only (see the sketch after this list).
SQL Managed Instance supports Azure AD authentication and Windows Authentication for Azure Active
Directory principals (Preview).
SQL Managed Instance automatically manages XTP filegroups and files for databases containing In-Memory
OLTP objects.
SQL Managed Instance supports SQL Server Integration Services (SSIS) and can host an SSIS catalog
(SSISDB) that stores SSIS packages, but they are executed on a managed Azure-SSIS Integration Runtime (IR)
in Azure Data Factory. See Create Azure-SSIS IR in Data Factory. To compare the SSIS features, see Compare
SQL Database to SQL Managed Instance.
SQL Managed Instance supports connectivity only through the TCP protocol. It does not support connectivity
through named pipes.
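As a sketch of the blob-only BULK INSERT pattern mentioned above, assuming a database scoped credential already exists; the storage account, container, file, and table names are placeholders:
-- One-time setup: external data source pointing at the blob container.
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://mystorageaccount.blob.core.windows.net/data',
      CREDENTIAL = MyAzureBlobStorageCredential);
-- Bulk load a CSV file from the container instead of a local file path.
BULK INSERT dbo.StagingOrders
FROM 'orders.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);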
Administration features
SQL Managed Instance enables system administrators to spend less time on administrative tasks because the
service either performs them for you or greatly simplifies those tasks. For example, OS/RDBMS installation and
patching, dynamic instance resizing and configuration, backups, database replication (including system
databases), high availability configuration, and configuration of health and performance monitoring data
streams.
For more information, see a list of supported and unsupported SQL Managed Instance features, and T-SQL
differences between SQL Managed Instance and SQL Server.
Programmatically identify a managed instance
The following table shows several properties, accessible through Transact-SQL, that you can use to detect that
your application is working with SQL Managed Instance and retrieve important properties.

PROPERTY | VALUE | COMMENT
@@VERSION | Microsoft SQL Azure (RTM) - 12.0.2000.8 2018-03-07 Copyright (C) 2018 Microsoft Corporation. | This value is the same as in SQL Database. It does not indicate SQL engine version 12 (SQL Server 2014). SQL Managed Instance always runs the latest stable SQL engine version, which is equal to or higher than the latest available RTM version of SQL Server.
SERVERPROPERTY('Edition') | SQL Azure | This value is the same as in SQL Database.
SERVERPROPERTY('EngineEdition') | 8 | This value uniquely identifies a managed instance.
@@SERVERNAME, SERVERPROPERTY('ServerName') | Full instance DNS name in the following format: <instanceName>.<dnsPrefix>.database.windows.net, where <instanceName> is the name provided by the customer and <dnsPrefix> is an autogenerated part of the name guaranteeing global DNS name uniqueness ("wcus17662feb9ce98", for example). | Example: my-managed-instance.wcus17662feb9ce98.database.windows.net

Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a feature comparison list, see SQL common features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance using Azure SQL Analytics.
For pricing information, see SQL Database pricing.
What's new in Azure SQL Managed Instance?
7/12/2022

APPLIES TO: Azure SQL Managed Instance


This article summarizes the documentation changes associated with new features and improvements in the
recent releases of Azure SQL Managed Instance. To learn more about Azure SQL Managed Instance, see the
overview.

Preview
The following table lists the features of Azure SQL Managed Instance that are currently in preview:

FEATURE | DETAILS
16 TB support in Business Critical | Support for allocation up to 16 TB of space on SQL Managed Instance in the Business Critical service tier using the new memory optimized premium-series hardware.
Data virtualization | Join locally stored relational data with data queried from external data sources, such as Azure Data Lake Storage Gen2 or Azure Blob Storage.
Endpoint policies | Configure which Azure Storage accounts can be accessed from a SQL Managed Instance subnet. Grants an extra layer of protection against inadvertent or malicious data exfiltration.
Instance pools | A convenient and cost-efficient way to migrate smaller SQL Server instances to the cloud.
Managed Instance link | Online replication of SQL Server databases hosted anywhere to Azure SQL Managed Instance.
Maintenance window advance notifications | Advance notifications (preview) for databases configured to use a non-default maintenance window. Advance notifications are in preview for Azure SQL Managed Instance.
Memory optimized premium-series hardware | Deploy your SQL Managed Instance to the new memory optimized premium-series hardware to take advantage of the latest Intel Ice Lake CPUs. Memory optimized hardware offers a higher memory to vCore ratio.
Migrate with Log Replay Service | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service.
Premium-series hardware | Deploy your SQL Managed Instance to the new premium-series hardware to take advantage of the latest Intel Ice Lake CPUs.
Query Store hints | Use query hints to optimize your query execution via the OPTION clause.
SDK-style SQL project | Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL Database Projects extension in Azure Data Studio or VS Code. SDK-style SQL projects are especially advantageous for applications shipped through pipelines or built in cross-platform environments.
Service Broker cross-instance message exchange | Support for cross-instance message exchange using Service Broker on Azure SQL Managed Instance.
SQL Database Projects extension | An extension to develop databases for Azure SQL Database with Azure Data Studio and VS Code. A SQL project is a local representation of SQL objects that comprise the schema for a single database, such as tables, stored procedures, or functions.
SQL Insights | SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance.
Transactional Replication | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see Configure replication in Azure SQL Managed Instance.
Threat detection | Threat detection notifies you of security threats detected to your database.
Windows Auth for Azure Active Directory principals | Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance.

General availability (GA)


The following table lists the features of Azure SQL Managed Instance that have transitioned from preview to
general availability (GA) within the last 12 months:

FEATURE | GA MONTH | DETAILS
Maintenance window | March 2022 | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance. Maintenance window advance notifications, however, are in preview for Azure SQL Managed Instance.
16 TB support in General Purpose | November 2021 | Support for allocation up to 16 TB of space on SQL Managed Instance in the General Purpose service tier.
Azure Active Directory-only authentication | November 2021 | It's now possible to restrict authentication to your Azure SQL Managed Instance only to Azure Active Directory users.
Distributed transactions | November 2021 | Distributed database transactions for Azure SQL Managed Instance allow you to run distributed transactions that span several databases across instances.
Linked server - managed identity Azure AD authentication | November 2021 | Create a linked server with managed identity authentication for your Azure SQL Managed Instance.
Linked server - pass-through Azure AD authentication | November 2021 | Create a linked server with pass-through Azure AD authentication for your Azure SQL Managed Instance.
Long-term backup retention | November 2021 | Store full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage, restoring the database as a new database.
Move instance to different subnet | November 2021 | Move SQL Managed Instance to a different subnet using the Azure portal, Azure PowerShell, or the Azure CLI.

Documentation changes
Learn about significant changes to the Azure SQL Managed Instance documentation.
May 2022
CHANGES | DETAILS
SDK-style SQL projects | Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL Database Projects extension in Azure Data Studio or VS Code. This feature is currently in preview. To learn more, see SDK-style SQL projects.
JavaScript & Python bindings | Support for JavaScript and Python SQL bindings for Azure Functions is currently in preview. See Azure SQL bindings for Azure Functions to learn more.

March 2022
CHANGES | DETAILS
Data virtualization preview | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see Data virtualization.
Managed Instance link guidance | We've published a number of guides for using the Managed Instance link feature, including how to prepare your environment, configure replication by using SSMS, configure replication via scripts, fail over your database by using SSMS, fail over your database via scripts, and some best practices when using the link feature (currently in preview).
Maintenance window GA, advance notifications preview | The maintenance window feature is now generally available, allowing you to configure a maintenance schedule for your Azure SQL Managed Instance. It's also possible to receive advance notifications for planned maintenance events, which is currently in preview. Review Maintenance window advance notifications (preview) to learn more.
Windows Auth for Azure Active Directory principals preview | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance.

2021
CHANGES | DETAILS
16 TB support for Business Critical preview | The Business Critical service tier of SQL Managed Instance now provides increased maximum instance storage capacity of up to 16 TB with the new premium-series and memory optimized premium-series hardware, which are currently in preview. See resource limits to learn more.
16 TB support for General Purpose GA | Deploying a 16 TB instance to the General Purpose service tier is now generally available. See resource limits to learn more.
Azure AD-only authentication GA | Restricting authentication to your Azure SQL Managed Instance only to Azure Active Directory users is now generally available. To learn more, see Azure AD-only authentication.
Distributed transactions GA | The ability to execute distributed transactions across managed instances is now generally available. See Distributed transactions to learn more.
Endpoint policies preview | It's now possible to configure an endpoint policy to restrict access from a SQL Managed Instance subnet to an Azure Storage account. This grants an extra layer of protection against inadvertent or malicious data exfiltration. See Endpoint policies to learn more.
Link feature preview | Use the link feature for SQL Managed Instance to replicate data from your SQL Server hosted anywhere to Azure SQL Managed Instance, leveraging the benefits of Azure without moving your data to Azure, to offload your workloads, for disaster recovery, or to migrate to the cloud. See the Link feature for SQL Managed Instance to learn more. The link feature is currently in limited public preview.
Long-term backup retention GA | Storing full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage is now generally available. To learn more, see Long-term backup retention.
Move instance to different subnet GA | It's now possible to move your SQL Managed Instance to a different subnet. See Move instance to different subnet to learn more.
New hardware preview | There are now two new hardware configurations for SQL Managed Instance: premium-series, and a memory optimized premium-series. Both offerings take advantage of new hardware powered by the latest Intel Ice Lake CPUs, and offer a higher memory to vCore ratio to support your most resource demanding database applications. As part of this announcement, the Gen5 hardware has been renamed to standard-series. The two new premium hardware offerings are currently in preview. See resource limits to learn more.
Split what's new | The previously combined What's new article has been split by product - What's new in SQL Database and What's new in SQL Managed Instance, making it easier to identify what features are currently in preview, generally available, and significant documentation changes. Additionally, the Known Issues in SQL Managed Instance content has moved to its own page.
16 TB support for General Purpose preview | Support has been added for allocation of up to 16 TB of space for SQL Managed Instance in the General Purpose service tier. See resource limits to learn more. This instance offer is currently in preview.
Parallel backup | It's now possible to take backups in parallel for SQL Managed Instance in the General Purpose tier, enabling faster backups. See the Parallel backup for better performance blog entry to learn more.
Azure AD-only authentication preview | It's now possible to restrict authentication to your Azure SQL Managed Instance only to Azure Active Directory users. This feature is currently in preview. To learn more, see Azure AD-only authentication.
Resource Health monitor | Use Resource Health to monitor the health status of your Azure SQL Managed Instance. See Resource health to learn more.
Granular permissions for data masking GA | Granular permissions for dynamic data masking for Azure SQL Managed Instance are now generally available (GA). To learn more, see Dynamic data masking.
User-defined routes (UDR) tables | Service-aided subnet configuration for Azure SQL Managed Instance now makes use of service tags for user-defined routes (UDR) tables. See the connectivity architecture to learn more.
Audit management operations | The ability to audit SQL Managed Instance operations is now generally available (GA).
Log Replay Service | It's now possible to migrate databases from SQL Server to Azure SQL Managed Instance using the Log Replay Service. To learn more, see Migrate with Log Replay Service. This feature is currently in preview.
Long-term backup retention | Support for long-term backup retention up to 10 years on Azure SQL Managed Instance. To learn more, see Long-term backup retention.
Machine Learning Services GA | The Machine Learning Services for Azure SQL Managed Instance are now generally available (GA). To learn more, see Machine Learning Services for SQL Managed Instance.
Maintenance window | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance. To learn more, see maintenance window.
Service Broker message exchange | The Service Broker component of Azure SQL Managed Instance allows you to compose your applications from independent, self-contained services, by providing native support for reliable and secure message exchange between the databases attached to the service. Currently in preview. To learn more, see Service Broker.
SQL Insights (preview) | SQL Insights (preview) is a comprehensive solution for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see Azure Monitor SQL Insights (preview).

2020
The following changes were added to SQL Managed Instance and the documentation in 2020:

CHANGES | DETAILS
Audit support operations | The auditing of Microsoft support operations capability enables you to audit Microsoft support operations when you need to access your servers and/or databases during a support request to your audit logs destination (Preview). To learn more, see Audit support operations.
Elastic transactions | Elastic transactions allow for distributed database transactions spanning multiple databases across Azure SQL Database and Azure SQL Managed Instance. Elastic transactions have been added to enable frictionless migration of existing applications, as well as development of modern multi-tenant applications relying on vertically or horizontally partitioned database architecture (Preview). To learn more, see Distributed transactions.
Configurable backup storage redundancy | It's now possible to configure locally redundant storage (LRS) and zone-redundant storage (ZRS) options for backup storage redundancy, providing more flexibility and choice. To learn more, see Configure backup storage redundancy.
TDE-encrypted backup performance improvements | It's now possible to set the point-in-time restore (PITR) backup retention period, and automated compression of backups encrypted with transparent data encryption (TDE) is now 30 percent more efficient in consuming backup storage space, saving costs for the end user. See Change PITR to learn more.
Azure AD authentication improvements | Automate user creation using Azure AD applications and create individual Azure AD guest users (preview). To learn more, see Directory readers in Azure AD.
Global VNet peering support | Global virtual network peering support has been added to SQL Managed Instance, improving the geo-replication experience. See geo-replication between managed instances.
Hosting SSRS catalog databases | SQL Managed Instance can now host catalog databases of SQL Server Reporting Services (SSRS) for versions 2017 and newer.
Major performance improvements | Introducing improvements to SQL Managed Instance performance, including improved transaction log write throughput, improved data and log IOPS for Business Critical instances, and improved TempDB performance. See the improved performance tech community blog to learn more.
Enhanced management experience | Using the new OPERATIONS API, it's now possible to check the progress of long-running instance operations. To learn more, see Management operations.
Machine learning support | Machine Learning Services with support for R and Python languages now include preview support on Azure SQL Managed Instance (Preview). To learn more, see Machine learning with SQL Managed Instance.
User-initiated failover | User-initiated failover is now generally available, providing you with the capability to manually initiate an automatic failover using PowerShell, CLI commands, and API calls, improving application resiliency. To learn more, see testing resiliency.

Known issues
The known issues content has moved to a dedicated known issues in SQL Managed Instance article.

Contribute to content
To contribute to the Azure SQL documentation, see the Docs contributor guide.
Overview of Azure SQL Managed Instance resource
limits
7/12/2022

APPLIES TO: Azure SQL Managed Instance


This article provides an overview of the technical characteristics and resource limits for Azure SQL Managed
Instance, and provides information about how to request an increase to these limits.

NOTE
For differences in supported features and T-SQL statements see Feature differences and T-SQL statement support. For
general differences between service tiers for Azure SQL Database and SQL Managed Instance review General Purpose and
Business Critical service tiers.

Hardware configuration characteristics


SQL Managed Instance has characteristics and resource limits that depend on the underlying infrastructure and
architecture. SQL Managed Instance can be deployed on multiple hardware configurations.

NOTE
The Gen5 hardware has been renamed to the standard-series (Gen5). We are introducing two new hardware
configurations in limited preview: premium-series and memory optimized premium-series.

For information on previously available hardware, see Previously available hardware later in this article.
Hardware configurations have different characteristics, as described in the following table:

 | STANDARD-SERIES (GEN5) | PREMIUM-SERIES (PREVIEW) | MEMORY OPTIMIZED PREMIUM-SERIES (PREVIEW)
CPU | Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake), and Intel® 8272CL (Cascade Lake) 2.5 GHz processors | Intel® 8370C (Ice Lake) 2.8 GHz processors | Intel® 8370C (Ice Lake) 2.8 GHz processors
Number of vCores (vCore = 1 LP, hyper-thread) | 4-80 vCores | 4-80 vCores | 4-64 vCores
Max memory (memory/vCore ratio) | 5.1 GB per vCore. Add more vCores to get more memory. | 7 GB per vCore | 13.6 GB per vCore
Max In-Memory OLTP memory | Instance limit: 0.8 - 1.65 GB per vCore | Instance limit: 1.1 - 2.3 GB per vCore | Instance limit: 2.2 - 4.5 GB per vCore
Max instance reserved storage* | General Purpose: up to 16 TB; Business Critical: up to 4 TB | General Purpose: up to 16 TB; Business Critical: up to 5.5 TB | General Purpose: up to 16 TB; Business Critical: up to 16 TB

* Dependent on the number of vCores.

NOTE
If your workload requires storage sizes greater than the available resource limits for Azure SQL Managed Instance,
consider the Azure SQL Database Hyperscale service tier.

Regional support for premium-series hardware (preview)


Support for the premium-series hardware (public preview) is currently available only in these specific regions:

REGION | PREMIUM-SERIES | MEMORY OPTIMIZED PREMIUM-SERIES

Australia Central Yes

Australia East Yes

Canada Central Yes

Canada East Yes

Central US Yes Yes

East US Yes Yes

Germany West Central Yes Yes

Japan East Yes

Japan West Yes

Korea Central Yes

North Central US Yes Yes

North Europe Yes Yes

Norway East Yes

South Africa West Yes

South Central US Yes Yes

Southeast Asia Yes



Sweden Central Yes

Switzerland North Yes

Switzerland West Yes

UAE North Yes

UK South Yes Yes

UK West Yes

West Central US Yes

West Europe Yes Yes

West US Yes

West US 2 Yes Yes

West US 3 Yes Yes

In-memory OLTP available space


The amount of In-memory OLTP space in Business Critical service tier depends on the number of vCores and
hardware configuration. The following table lists the limits of memory that can be used for In-memory OLTP
objects.

VCORES | STANDARD-SERIES (GEN5) | PREMIUM-SERIES | MEMORY OPTIMIZED PREMIUM-SERIES
4 vCores | 3.14 GB | 4.39 GB | 8.79 GB
8 vCores | 6.28 GB | 8.79 GB | 22.06 GB
16 vCores | 15.77 GB | 22.06 GB | 57.58 GB
24 vCores | 25.25 GB | 35.34 GB | 93.09 GB
32 vCores | 37.94 GB | 53.09 GB | 128.61 GB
40 vCores | 52.23 GB | 73.09 GB | 164.13 GB
64 vCores | 99.9 GB | 139.82 GB | 288.61 GB
80 vCores | 131.68 GB | 184.30 GB | N/A

Service tier characteristics


SQL Managed Instance has two service tiers: General Purpose and Business Critical.
IMPORTANT
The Business Critical service tier provides an additional built-in copy of the SQL Managed Instance (secondary replica) that
can be used for read-only workload. If you can separate read-write queries and read-only/analytic/reporting queries, you
are getting twice the vCores and memory for the same price. The secondary replica might lag a few seconds behind the
primary instance, so it is designed to offload reporting/analytic workloads that don't need exact current state of data. In
the table below, read-only queries are the queries that are executed on secondary replica.

FEATURE | GENERAL PURPOSE | BUSINESS CRITICAL
Number of vCores* | 4, 8, 16, 24, 32, 40, 64, 80 | Standard-series (Gen5): 4, 8, 16, 24, 32, 40, 64, 80; Premium-series: 4, 8, 16, 24, 32, 40, 64, 80; Memory optimized premium-series: 4, 8, 16, 24, 32, 40, 64. *The same number of vCores is dedicated for read-only queries.
Max memory | Standard-series (Gen5): 20.4 GB - 408 GB (5.1 GB/vCore); Premium-series: 28 GB - 560 GB (7 GB/vCore); Memory optimized premium-series: 54.4 GB - 870.4 GB (13.6 GB/vCore) | Standard-series (Gen5): 20.4 GB - 408 GB (5.1 GB/vCore) on each replica; Premium-series: 28 GB - 560 GB (7 GB/vCore) on each replica; Memory optimized premium-series: 54.4 GB - 870.4 GB (13.6 GB/vCore) on each replica
Max instance storage size (reserved) | 2 TB for 4 vCores; 8 TB for 8 vCores; 16 TB for other sizes | Standard-series (Gen5): 1 TB for 4, 8, 16 vCores; 2 TB for 24 vCores; 4 TB for 32, 40, 64, 80 vCores. Premium-series: 1 TB for 4, 8 vCores; 2 TB for 16, 24 vCores; 4 TB for 32 vCores; 5.5 TB for 40, 64, 80 vCores. Memory optimized premium-series: 1 TB for 4, 8 vCores; 2 TB for 16, 24 vCores; 4 TB for 32 vCores; 5.5 TB for 40 vCores; 16 TB for 64 vCores
Max database size | Up to currently available instance size (depending on the number of vCores). | Up to currently available instance size (depending on the number of vCores).
Max tempDB size | Limited to 24 GB/vCore (96 - 1,920 GB) and currently available instance storage size. Add more vCores to get more TempDB space. Log file size is limited to 120 GB. | Up to currently available instance storage size.
Max number of databases per instance | 100 user databases, unless the instance storage size limit has been reached. | 100 user databases, unless the instance storage size limit has been reached.
Max number of database files per instance | Up to 280, unless the instance storage size or Azure Premium Disk storage allocation space limit has been reached. | 32,767 files per database, unless the instance storage size limit has been reached.
Max data file size | Maximum size of each data file is 8 TB. Use at least two data files for databases larger than 8 TB. | Up to currently available instance size (depending on the number of vCores).
Max log file size | Limited to 2 TB and currently available instance storage size. | Limited to 2 TB and currently available instance storage size.
Data/Log IOPS (approximate) | 500 - 7500 per file. *Increase file size to get more IOPS. | 16 K - 320 K (4000 IOPS/vCore). Add more vCores to get better IO performance.
Log write throughput limit (per instance) | 3 MB/s per vCore; max 120 MB/s per instance; 22 - 65 MB/s per DB (depending on log file size). *Increase the file size to get better IO performance. | 4 MB/s per vCore; max 96 MB/s.
Data throughput (approximate) | 100 - 250 MB/s per file. *Increase the file size to get better IO performance. | Not limited.
Storage IO latency (approximate) | 5-10 ms | 1-2 ms
In-memory OLTP | Not supported | Available, size depends on number of vCores
Max sessions | 30000 | 30000
Max concurrent workers | 105 * number of vCores + 800 | 105 * number of vCores + 800
Read-only replicas | 0 | 1 (included in price)
Compute isolation | Not supported, as General Purpose instances may share physical hardware with other instances | Standard-series (Gen5): supported for 40, 64, 80 vCores. Premium-series: supported for 64, 80 vCores. Memory optimized premium-series: supported for 64 vCores.

A few additional considerations:


Currently available instance storage size is the difference between reserved instance size and the used
storage space.
Both data and log file size in the user and system databases are included in the instance storage size that is
compared with the max storage size limit. Use the sys.master_files system view to determine the total used
space by databases. Error logs are not persisted and not included in the size. Backups are not included in
storage size.
Throughput and IOPS in the General Purpose tier also depend on the file size, which is not explicitly limited by
SQL Managed Instance.
You can create another readable replica in a different Azure region using auto-failover groups.
Max instance IOPS depend on the file layout and distribution of workload. As an example, if you create 7 x 1
TB files with max 5 K IOPS each and seven small files (smaller than 128 GB) with 500 IOPS each, you can get
38500 IOPS per instance (7x5000+7x500) if your workload can use all files. Note that some IOPS are also
used for auto-backups.
Find more information about the resource limits in SQL Managed Instance pools in this article.
Data and log storage
The following factors affect the amount of storage used for data and log files, and apply to General Purpose and
Business Critical tiers.
In the General Purpose service tier, tempdb uses local SSD storage, and this storage cost is included in the
vCore price.
In the Business Critical service tier, tempdb shares local SSD storage with data and log files, and tempdb
storage cost is included in the vCore price.
The maximum storage size for a SQL Managed Instance must be specified in multiples of 32 GB.

IMPORTANT
In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured for a
managed instance.

To monitor total consumed instance storage size for SQL Managed Instance, use the storage_space_used_mb
metric. To monitor the current allocated and used storage size of individual data and log files in a database using
T-SQL, use the sys.database_files view and the FILEPROPERTY(... , 'SpaceUsed') function.
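For example, the following query reports allocated and used space per file in the current database; sizes are stored as 8-KB pages, so dividing by 128 converts them to MB:
-- Current allocated size and used space (in MB) for each file in the current database.
SELECT file_id,
       type_desc,
       name,
       size / 128.0                            AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sys.database_files;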

TIP
Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see
Manage file space in Azure SQL Database.

Backups and storage


Storage for database backups is allocated to support the point-in-time restore (PITR) and long-term retention
(LTR) capabilities of SQL Managed Instance. This storage is separate from data and log file storage, and is billed
separately.
PITR : In General Purpose and Business Critical tiers, individual database backups are copied to read-access
geo-redundant (RA-GRS) storage automatically. The storage size increases dynamically as new backups are
created. The storage is used by full, differential, and transaction log backups. The storage consumption
depends on the rate of change of the database and the retention period configured for backups. You can
configure a separate retention period of 0 to 35 days for each database in SQL Managed Instance. A
backup storage amount equal to the configured maximum data size is provided at no extra charge.
LTR : You also have the option to configure long-term retention of full backups for up to 10 years. If you set
up an LTR policy, these backups are stored in RA-GRS storage automatically, but you can control how often
the backups are copied. To meet different compliance requirements, you can select different retention periods
for weekly, monthly, and/or yearly backups. The configuration you choose determines how much storage will
be used for LTR backups. For more information, see Long-term backup retention.
File IO characteristics in General Purpose tier
In the General Purpose service tier, every database file gets dedicated IOPS and throughput that depend on the
file size. Larger files get more IOPS and throughput. IO characteristics of database files are shown in the
following table:
FILE SIZE | >=0 AND <=128 GiB | >128 AND <=512 GiB | >0.5 AND <=1 TiB | >1 AND <=2 TiB | >2 AND <=4 TiB | >4 AND <=8 TiB
IOPS per file | 500 | 2,300 | 5,000 | 7,500 | 7,500 | 12,500
Throughput per file | 100 MiB/s | 150 MiB/s | 200 MiB/s | 250 MiB/s | 250 MiB/s | 250 MiB/s

If you notice high IO latency on some database file or you see that IOPS/throughput is reaching the limit, you
might improve performance by increasing the file size.
There is also an instance-level limit on the max log write throughput (see above for values, e.g., 22 MB/s), so you
may not be able to reach the max file throughput on the log file because you are hitting the instance throughput
limit.

Supported regions
SQL Managed Instance can be created only in supported regions. To create a SQL Managed Instance in a region
that is currently not supported, you can send a support request via the Azure portal.

Supported subscription types


SQL Managed Instance currently supports deployment only on the following types of subscriptions:
Enterprise Agreement (EA)
Pay-as-you-go
Cloud Service Provider (CSP)
Enterprise Dev/Test
Pay-As-You-Go Dev/Test
Subscriptions with monthly Azure credit for Visual Studio subscribers

Regional resource limitations


NOTE
For the latest information on region availability for subscriptions, first check select a region.

Supported subscription types can contain a limited number of resources per region. SQL Managed Instance has
two default limits per Azure region (these can be increased on demand by creating a special support request in
the Azure portal), depending on the subscription type:
Subnet limit : The maximum number of subnets where instances of SQL Managed Instance are deployed in
a single region.
vCore unit limit : The maximum number of vCore units that can be deployed across all instances in a single
region. One GP vCore uses one vCore unit and one BC vCore takes four vCore units. The total number of
instances is not limited as long as it is within the vCore unit limit.

NOTE
These limits are default settings and not technical limitations. The limits can be increased on-demand by creating a special
support request in the Azure portal if you need more instances in the current region. As an alternative, you can create
new instances of SQL Managed Instance in another Azure region without sending support requests.
The following table shows the default regional limits for supported subscription types (default limits can be
extended by using a support request, as described below):

SUBSCRIPTION TYPE | MAX NUMBER OF SQL MANAGED INSTANCE SUBNETS | MAX NUMBER OF VCORE UNITS*
CSP | 16 (30 in some regions**) | 960 (1440 in some regions**)
EA | 16 (30 in some regions**) | 960 (1440 in some regions**)
Enterprise Dev/Test | 6 | 320
Pay-as-you-go | 6 | 320
Pay-as-you-go Dev/Test | 6 | 320
Azure Pass | 3 | 64
BizSpark | 3 | 64
BizSpark Plus | 3 | 64
Microsoft Azure Sponsorship | 3 | 64
Microsoft Partner Network | 3 | 64
Visual Studio Enterprise (MPN) | 3 | 64
Visual Studio Enterprise | 3 | 32
Visual Studio Enterprise (BizSpark) | 3 | 32
Visual Studio Professional | 3 | 32
MSDN Platforms | 3 | 32

* In planning deployments, take into consideration that the Business Critical (BC) service tier requires four (4)
times more vCore capacity than the General Purpose (GP) service tier. For example: 1 GP vCore = 1 vCore unit and
1 BC vCore = 4 vCore units. To simplify your consumption analysis against the default limits, summarize the vCore
units across all subnets in the region where SQL Managed Instance is deployed and compare the results with the
instance unit limits for your subscription type. The max number of vCore units limit applies to each subscription
in a region. There is no limit per individual subnet, except that the sum of all vCores deployed across multiple
subnets must be lower than or equal to the max number of vCore units.
** Larger subnet and vCore limits are available in the following regions: Australia East, East US, East US 2, North
Europe, South Central US, Southeast Asia, UK South, West Europe, West US 2.
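For example, a subscription that deploys one 8-vCore General Purpose instance (8 x 1 = 8 vCore units) and one 16-vCore Business Critical instance (16 x 4 = 64 vCore units) in the same region consumes 72 vCore units, which fits within a 320-unit limit but exceeds a 64-unit limit.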

IMPORTANT
If your vCore and subnet limit is 0, it means that the default regional limit for your subscription type is not set. You can
also use a quota increase request to get subscription access in a specific region by following the same procedure,
providing the required vCore and subnet values.
Request a quota increase
If you need more instances in your current regions, send a support request to extend the quota using the Azure
portal. For more information, see Request quota increases for Azure SQL Database.

Previously available hardware


This section includes details on previously available hardware. Consider moving your instance of SQL Managed
Instance to the standard-series (Gen5) hardware to experience a wider range of vCore and storage scalability,
accelerated networking, best IO performance, and minimal latency.

IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.

Hardware characteristics
 | GEN4
Hardware | Intel® E5-2673 v3 (Haswell) 2.4 GHz processors, attached SSD, vCore = 1 PP (physical core)
Number of vCores | 8, 16, 24 vCores
Max memory (memory/core ratio) | 7 GB per vCore. Add more vCores to get more memory.
Max In-Memory OLTP memory | Instance limit: 1-1.5 GB per vCore
Max instance reserved storage | General Purpose: 8 TB; Business Critical: 1 TB

In-memory OLTP available space


IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.

The amount of In-memory OLTP space in Business Critical service tier depends on the number of vCores and
hardware configuration. The following table lists limits of memory that can be used for In-memory OLTP
objects.

IN-MEMORY OLTP SPACE | GEN4
8 vCores | 8 GB
16 vCores | 20 GB
24 vCores | 36 GB

Service tier characteristics

IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.

FEATURE | GENERAL PURPOSE | BUSINESS CRITICAL
Number of vCores* | 8, 16, 24 | 8, 16, 24. *The same number of vCores is dedicated for read-only queries.
Max memory | 56 GB - 168 GB (7 GB/vCore). Add more vCores to get more memory. | 56 GB - 168 GB (7 GB/vCore), plus an additional 20.4 GB - 408 GB (5.1 GB/vCore) for read-only queries. Add more vCores to get more memory.
Max instance storage size (reserved) | 8 TB | 1 TB
Max database size | Up to currently available instance size (max 2 TB - 8 TB depending on the number of vCores). | Up to currently available instance size (max 1 TB - 4 TB depending on the number of vCores).
Max tempDB size | Limited to 24 GB/vCore (96 - 1,920 GB) and currently available instance storage size. Add more vCores to get more TempDB space. Log file size is limited to 120 GB. | Up to currently available instance storage size.
Max number of databases per instance | 100 user databases, unless the instance storage size limit has been reached. | 100 user databases, unless the instance storage size limit has been reached.
Max number of database files per instance | Up to 280, unless the instance storage size or Azure Premium Disk storage allocation space limit has been reached. | 32,767 files per database, unless the instance storage size limit has been reached.
Max data file size | Limited to currently available instance storage size (max 2 TB - 8 TB) and Azure Premium Disk storage allocation space. Use at least two data files for databases larger than 8 TB. | Limited to currently available instance storage size (up to 1 TB - 4 TB).
Max log file size | Limited to 2 TB and currently available instance storage size. | Limited to 2 TB and currently available instance storage size.
Data/Log IOPS (approximate) | Up to 30-40 K IOPS per instance*, 500 - 7500 per file. *Increase file size to get more IOPS. | 16 K - 320 K (4000 IOPS/vCore). Add more vCores to get better IO performance.
Log write throughput limit (per instance) | 3 MB/s per vCore; max 120 MB/s per instance; 22 - 65 MB/s per DB. *Increase the file size to get better IO performance. | 4 MB/s per vCore; max 96 MB/s.
Data throughput (approximate) | 100 - 250 MB/s per file. *Increase the file size to get better IO performance. | Not limited.
Storage IO latency (approximate) | 5-10 ms | 1-2 ms
In-memory OLTP | Not supported | Available, size depends on number of vCores
Max sessions | 30000 | 30000
Max concurrent workers | 210 * number of vCores + 800 | 210 * number of vCores + 800
Read-only replicas | 0 | 1 (included in price)
Compute isolation | Not supported | Not supported


Next steps
For more information about SQL Managed Instance, see What is a SQL Managed Instance?.
For pricing information, see SQL Managed Instance pricing.
To learn how to create your first SQL Managed Instance, see the quickstart guide.
vCore purchasing model - Azure SQL Managed
Instance
7/12/2022

APPLIES TO: Azure SQL Managed Instance


This article reviews the vCore purchasing model for Azure SQL Managed Instance.

Overview
A virtual core (vCore) represents a logical CPU and offers you the option to choose the physical characteristics
of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based
purchasing model gives you flexibility, control, transparency of individual resource consumption, and a
straightforward way to translate on-premises workload requirements to the cloud. This model optimizes price,
and allows you to choose compute, memory, and storage resources based on your workload needs.
In the vCore-based purchasing model, your costs depend on the choice and usage of:
Service tier
Hardware configuration
Compute resources (the number of vCores and the amount of memory)
Reserved database storage
Actual backup storage
The virtual core (vCore) purchasing model used by Azure SQL Managed Instance provides the following
benefits:
Control over hardware configuration to better match the compute and memory requirements of the
workload.
Pricing discounts for Azure Hybrid Benefit (AHB) and Reserved Instance (RI).
Greater transparency in the hardware details that power compute, helping facilitate planning for migrations
from on-premises deployments.
Higher scaling granularity with multiple compute sizes available.

Service tiers
Service tier options in the vCore purchasing model include General Purpose and Business Critical. The service
tier generally defines the storage architecture, space and I/O limits, and business continuity options related to
availability and disaster recovery.
For more details, review resource limits.

CATEGORY | GENERAL PURPOSE | BUSINESS CRITICAL
Best for | Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. | Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance.
Availability | 1 replica, no read-scale replicas | 4 replicas total, 1 read-scale replica, 2 high availability (HA) replicas
Read-only replicas | 0 built-in; 0 - 4 using geo-replication | 1 built-in, included in price; 0 - 4 using geo-replication
Pricing/billing | vCore, reserved storage, and backup storage are charged. IOPS is not charged. | vCore, reserved storage, and backup storage are charged. IOPS is not charged.
Discount models | Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions | Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions

NOTE
For more information on the Service Level Agreement (SLA), see SLA for Azure SQL Managed Instance.

Choosing a service tier


For information on selecting a service tier for your particular workload, see the following articles:
When to choose the General Purpose service tier
When to choose the Business Critical service tier

Compute
SQL Managed Instance compute provides a specific amount of compute resources that are continuously
provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price
per hour.

Hardware configurations
Hardware configuration options in the vCore model include standard-series (Gen5), premium-series, and
memory optimized premium-series. Hardware configuration generally defines the compute and memory limits
and other characteristics that impact workload performance.
For more information on the hardware configuration specifics and limitations, see Hardware configuration
characteristics.
In the sys.dm_user_db_resource_governance dynamic management view, hardware generation for instances
using Intel® SP-8160 (Skylake) processors appears as Gen6, while hardware generation for instances using
Intel® 8272CL (Cascade Lake) appears as Gen7. The Intel® 8370C (Ice Lake) CPUs used by premium-series
and memory optimized premium-series hardware generations appear as Gen8. Resource limits for all standard-
series (Gen5) instances are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
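For instance, you could inspect what this DMV reports for the current instance with a query along these lines; the exact column set can vary, so review the full output rather than relying on a specific column name:
-- Review resource governance details, including the reported hardware generation.
SELECT *
FROM sys.dm_user_db_resource_governance;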
Selecting a hardware configuration
You can select hardware configuration at the time of instance creation, or you can change hardware of an
existing instance.
To select hardware configuration when creating a SQL Managed Instance
For detailed information, see Create a SQL Managed Instance.
On the Basics tab, select the Configure database link in the Compute + storage section, and then select
desired hardware:

To change hardware of an existing SQL Managed Instance


The Azure portal
PowerShell
The Azure CLI

From the SQL Managed Instance page, select the Pricing tier link under the Settings section.
On the Pricing tier page, you can change hardware as described in the previous steps.
When specifying the hardware parameter in templates or scripts, hardware is provided by using its name. The
following table applies:

HARDWARE | NAME
Standard-series (Gen5) | Gen5
Premium-series | G8IM
Memory optimized premium-series | G8IH

Hardware availability
Gen4

IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.

Gen5 hardware is available in all public regions worldwide.


Standard-series (Gen5) and premium-series
Standard-series (Gen5) hardware is available in all public regions worldwide.
Premium-series and memory optimized premium-series hardware is in preview, and has limited regional
availability. For more details, see Azure SQL Managed Instance resource limits.

Next steps
To get started, see Creating a SQL Managed Instance using the Azure portal
For pricing details, see
Azure SQL Managed Instance single instance pricing page
Azure SQL Managed Instance pools pricing page
For details about the specific compute and storage sizes available in the General Purpose and Business
Critical service tiers, see vCore-based resource limits for Azure SQL Managed Instance.
Getting started with Azure SQL Managed Instance
7/12/2022

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance creates a database with near 100% compatibility with the latest SQL Server
(Enterprise Edition) database engine, providing a native virtual network (VNet) implementation that addresses
common security concerns, and a business model favorable for existing SQL Server customers.
In this article, you will find references to content that teach you how to quickly configure and create a SQL
Managed Instance and migrate your databases.

Quickstart overview
The following quickstarts enable you to quickly create a SQL Managed Instance, configure a virtual machine or
point to site VPN connection for client application, and restore a database to your new SQL Managed Instance
using a .bak file.
Configure environment
As a first step, create your first SQL Managed Instance together with the network environment where it will be
placed, and enable connections to SQL Managed Instance from the computer or virtual machine where you run
queries. You can use the following guides:
Create a SQL Managed Instance using the Azure portal. In the Azure portal, you configure the necessary
parameters (username/password, number of cores, and max storage amount), and automatically create
the Azure network environment without the need to know about networking details and infrastructure
requirements. You just make sure that you have a subscription type that is currently allowed to create a
SQL Managed Instance. If you have your own network that you want to use or you want to customize the
network, see configure an existing virtual network for Azure SQL Managed Instance or create a virtual
network for Azure SQL Managed Instance.
A SQL Managed Instance is created in its own VNet with no public endpoint. For client application access,
you can either create a VM in the same VNet (different subnet) or create a point-to-site VPN
connection to the VNet from your client computer using one of these quickstarts:
Enable public endpoint on your SQL Managed Instance in order to access your data directly from your
environment.
Create Azure Virtual Machine in the SQL Managed Instance VNet for client application connectivity,
including SQL Server Management Studio.
Set up a point-to-site VPN connection to your SQL Managed Instance from your client computer on
which you have SQL Server Management Studio and other client connectivity applications. This is
the other of the two options for connectivity to your SQL Managed Instance and to its VNet.

NOTE
You can also use express route or site-to-site connection from your local network, but these approaches are
out of the scope of these quickstarts.
If you change the retention period from 0 (unlimited retention) to any other value, retention will
only apply to logs written after the retention value was changed (logs written during the period when retention
was set to unlimited are preserved, even after retention is enabled).
As an alternative to manual creation of SQL Managed Instance, you can use PowerShell, PowerShell with
Resource Manager template, or Azure CLI to script and automate this process.
Migrate your databases
After you create a SQL Managed Instance and configure access, you can start migrating your SQL Server
databases. Migration can fail if the source database uses features that are unsupported on the target. To avoid
failures and check compatibility, you can use Data Migration Assistant (DMA) to analyze your
databases on SQL Server and find any issues that could block migration to a SQL Managed Instance, such as the
existence of FileStream or multiple log files. If you resolve these issues, your databases are ready to migrate to
SQL Managed Instance. Database Experimentation Assistant is another useful tool that can record your
workload on SQL Server and replay it on a SQL Managed Instance, so you can determine whether there will be
any performance issues if you migrate to a SQL Managed Instance.
Once you are sure that you can migrate your database to a SQL Managed Instance, you can use the native SQL
Server restore capabilities to restore a database into a SQL Managed Instance from a .bak file. You can use this
method to migrate databases from SQL Server database engine installed on-premises or Azure Virtual
Machines. For a quickstart, see Restore from backup to a SQL Managed Instance. In this quickstart, you restore
from a .bak file stored in Azure Blob storage using the RESTORE Transact-SQL command.
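
As an illustrative sketch only (not part of the original quickstart; all names and URLs are placeholders), the restore step boils down to running a RESTORE ... FROM URL statement against the managed instance, which you could issue from PowerShell like this:

# Minimal sketch: run the RESTORE statement against the managed instance from PowerShell.
# Assumes a SAS-based credential for the storage container has already been created on the instance,
# as described in the restore quickstart, and that the SqlServer PowerShell module is installed.
$restoreQuery = @"
RESTORE DATABASE [WideWorldImporters]
FROM URL = 'https://<storage-account>.blob.core.windows.net/<container>/WideWorldImporters-Standard.bak';
"@

Invoke-Sqlcmd -ServerInstance "<mi_name>.<dns_zone>.database.windows.net" `
    -Database "master" `
    -Username "<admin-login>" -Password "<admin-password>" `
    -Query $restoreQuery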

TIP
To use the BACKUP Transact-SQL command to create a backup of your database in Azure Blob storage, see SQL Server
backup to URL.

These quickstarts enable you to quickly create, configure, and restore database backup to a SQL Managed
Instance. In some scenarios, you would need to customize or automate deployment of SQL Managed Instance
and the required networking environment. These scenarios will be described below.

Customize network environment


Although the VNet/subnet can be configured automatically when the instance is created using the Azure portal,
it can be a good idea to create it before you start creating managed instances, because you can then
configure the parameters of the VNet and subnet. The easiest way to create and configure the network environment
is to use the Azure Resource Manager deployment template that creates and configures your network and the subnet where
the instance will be placed. You just need to press the Azure Resource Manager deploy button and populate the
form with parameters.
As an alternative, you can also use this PowerShell script to automate creation of the network.
If you already have a VNet and subnet where you would like to deploy your SQL Managed Instance, you need to
make sure that your VNet and subnet satisfy the networking requirements. Use this PowerShell script to verify
that your subnet is properly configured. This script validates your network and reports any issues, telling you
what should be changed and then offers to make the necessary changes in your VNet/subnet. Run this script if
you don't want to configure your VNet/subnet manually. You can also run it after any major reconfiguration of
your network infrastructure. If you want to create and configure your own network, read connectivity
architecture and this ultimate guide for creating and configuring a SQL Managed Instance environment.

Migrate to a SQL Managed Instance


The previously mentioned quickstarts enable you to quickly set up a SQL Managed Instance and move your
databases using the native RESTORE capability. This is a good starting point if you want to complete a quick proof
of concept and verify that your solution can work on SQL Managed Instance.
However, in order to migrate your production databases, or even dev/test databases that you want to use for
performance tests, you would need to consider using some additional techniques, such as:
Performance testing - You should measure baseline performance metrics on your source SQL Server
instance and compare them with the performance metrics on the destination SQL Managed Instance where
you have migrated the database. Learn more about the best practices for performance comparison.
Online migration - With the native RESTORE described in this article, you have to wait for the databases to be
restored (and copied to Azure Blob storage if not already stored there). This causes some downtime for your
application, especially for larger databases. To move your production database, use the Azure Database Migration
Service (DMS) to migrate your database with minimal downtime. DMS accomplishes this by
incrementally pushing the changes made in your source database to the SQL Managed Instance database
being restored. This way, you can quickly switch your application from the source to the target database with
minimal downtime.
Learn more about the recommended migration process.

Next steps
Find a high-level list of supported features in SQL Managed Instance here and details and known issues here.
Learn about technical characteristics of SQL Managed Instance.
Find more advanced how-to's in how to use a SQL Managed Instance.
Identify the right Azure SQL Managed Instance SKU for your on-premises database.
Quickstart: Create an Azure SQL Managed Instance
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Managed Instance


This quickstart teaches you to create an Azure SQL Managed Instance in the Azure portal.

IMPORTANT
For limitations, see Supported regions and Supported subscription types.

Create an Azure SQL Managed Instance


To create a SQL Managed Instance, follow these steps:
Sign in to the Azure portal
If you don't have an Azure subscription, create a free account.
1. Sign in to the Azure portal.
2. Select Azure SQL on the left menu of the Azure portal. If Azure SQL is not in the list, select All
services, and then enter Azure SQL in the search box.
3. Select +Add to open the Select SQL deployment option page. You can view additional information
about Azure SQL Managed Instance by selecting Show details on the SQL managed instances tile.
4. Select Create .

5. Use the tabs on the Create Azure SQL Managed Instance provisioning form to add required and
optional information. The following sections describe these tabs.
Basics tab
Fill out the mandatory information required on the Basics tab. This is the minimum set of information required
to provision a SQL Managed Instance.
Use the table below as a reference for the information required on this tab.

SETTING | SUGGESTED VALUE | DESCRIPTION
Subscription | Your subscription. | A subscription that gives you permission to create new resources.
Resource group | A new or existing resource group. | For valid resource group names, see Naming rules and restrictions.
Managed instance name | Any valid name. | For valid names, see Naming rules and restrictions.
Region | The region in which you want to create the managed instance. | For information about regions, see Azure regions.
Managed instance admin login | Any valid username. | For valid names, see Naming rules and restrictions. Don't use "serveradmin" because that's a reserved server-level role.
Password | Any valid password. | The password must be at least 16 characters long and meet the defined complexity requirements.
Select Configure Managed Instance to size compute and storage resources and to review the pricing
tiers. Use the sliders or text boxes to specify the amount of storage and the number of virtual cores.
When you're finished, select Apply to save your selection.

SETTING | SUGGESTED VALUE | DESCRIPTION
Service Tier | Select one of the options. | Based on your scenario, select one of the following options: General Purpose: for most production workloads, and the default option. Business Critical: designed for low-latency workloads with high resiliency to failures and fast failovers. For more information, review service tiers and resource limits.
Hardware Configuration | Select one of the options. | Hardware configuration generally defines the compute and memory limits and other characteristics that impact the performance of the workload. Gen5 is the default.
vCore compute model | Select an option. | vCores represent the exact amount of compute resources that are always provisioned for your workload. Eight vCores is the default.
Storage in GB | Select an option. | Storage size in GB; select based on expected data size. If migrating existing data from on-premises or from various cloud platforms, see Migration overview: SQL Server to SQL Managed Instance.
Azure Hybrid Benefit | Check the option if applicable. | For leveraging an existing license for Azure. For more information, see Azure Hybrid Benefit - Azure SQL Database & SQL Managed Instance.
Backup storage redundancy | Select Geo-redundant backup storage. | Storage redundancy inside Azure for backup storage. Note that this value cannot be changed later. Geo-redundant backup storage is the default and is recommended, though Zone and Local redundancy allow for more cost flexibility and single-region data residency. For more information, see Backup Storage redundancy.

To review your choices before you create a SQL Managed Instance, you can select Review + create . Or,
configure networking options by selecting Next: Networking .
Networking tab
Fill out optional information on the Networking tab. If you omit this information, the portal will apply
default settings.
Use the table below as a reference for the information required on this tab.

SETTING | SUGGESTED VALUE | DESCRIPTION
Virtual network | Select either Create new virtual network or a valid virtual network and subnet. | If a network or subnet is unavailable, it must be modified to satisfy the network requirements before you select it as a target for the new managed instance. For information about the requirements for configuring the network environment for SQL Managed Instance, see Configure a virtual network for SQL Managed Instance.
Connection type | Choose between a proxy and a redirect connection type. | For more information about connection types, see Azure SQL Managed Instance connection type.
Public endpoint | Select Disable. | For a managed instance to be accessible through the public data endpoint, you need to enable this option.
Allow access from (if Public endpoint is enabled) | Select No Access. | The portal experience enables configuring a security group with a public endpoint. Based on your scenario, select one of the following options: Azure services: we recommend this option when you're connecting from Power BI or another multitenant service. Internet: use for test purposes when you want to quickly spin up a managed instance; we don't recommend it for production environments. No access: this option creates a Deny security rule; modify this rule to make a managed instance accessible through a public endpoint. For more information on public endpoint security, see Using Azure SQL Managed Instance securely with a public endpoint.

Select Review + create to review your choices before you create a managed instance. Or, configure
more custom settings by selecting Next: Additional settings .
Additional settings
Fill out optional information on the Additional settings tab. If you omit this information, the portal will
apply default settings.
Use the table below as a reference for the information required on this tab.

SETTING | SUGGESTED VALUE | DESCRIPTION
Collation | Choose the collation that you want to use for your managed instance. If you migrate databases from SQL Server, check the source collation by using SELECT SERVERPROPERTY(N'Collation') and use that value. | For information about collations, see Set or change the server collation.
Time zone | Select the time zone that the managed instance will observe. | For more information, see Time zones.
Use as failover secondary | Select Yes. | Enable this option to use the managed instance as a failover group secondary.
Primary SQL Managed Instance (if Use as failover secondary is set to Yes) | Choose an existing primary managed instance that will be joined in the same DNS zone with the managed instance you're creating. | This step enables post-creation configuration of the failover group. For more information, see Tutorial: Add a managed instance to a failover group.

Select Review + create to review your choices before you create a managed instance. Or, configure
Azure Tags by selecting Next: Tags (recommended).
Tags
Add tags to resources in your Azure Resource Manager template (ARM template). Tags help you logically
organize your resources. The tag values show up in cost reports and allow for other management
activities by tag.
Consider at least tagging your new SQL Managed Instance with the Owner tag to identify who created it,
and the Environment tag to identify whether this system is Production, Development, and so on. For more
information, see Develop your naming and tagging strategy for Azure resources.
Select Review + create to proceed.

Review + create
1. Select the Review + create tab to review your choices before you create a managed instance.

2. Select Create to start provisioning the managed instance.

IMPORTANT
Deploying a managed instance is a long-running operation. Deployment of the first instance in the subnet typically takes
much longer than deploying into a subnet with existing managed instances. For average provisioning times, see Overview
of Azure SQL Managed Instance management operations.
Monitor deployment progress
1. Select the Notifications icon to view the status of the deployment.

2. Select Deployment in progress in the notification to open the SQL Managed Instance window and
further monitor the deployment progress.

TIP
If you closed your web browser or moved away from the deployment progress screen, you can monitor the
provisioning operation via the managed instance's Overview page, or via PowerShell or the Azure CLI. For more
information, see Monitor operations.
You can cancel the provisioning process through Azure portal, or via PowerShell or the Azure CLI or other tooling
using the REST API. See Canceling Azure SQL Managed Instance management operations.
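
If you prefer the PowerShell route mentioned in the tip above, one possible check is to list the instance's management operations with Get-AzSqlInstanceOperation from the Az.Sql module. This is a minimal sketch; the resource group and instance names are placeholders:

# Minimal sketch: list management operations (including an in-progress create) for a managed instance.
# The output includes the state and progress of each operation; names below are placeholders.
Get-AzSqlInstanceOperation -ResourceGroupName "myResourceGroup" -InstanceName "myManagedInstance"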

IMPORTANT
The start of SQL Managed Instance creation could be delayed when there are other impacting operations, such
as long-running restore or scaling operations on other managed instances in the same subnet. To learn more, see
Management operations cross-impact.
In order to get the status of managed instance creation, you need to have read permissions over the
resource group. If you don't have this permission, or if you revoke it while the managed instance is being created, the
SQL Managed Instance may not be visible in the list of resource group deployments.

View resources created


Upon successful deployment of a managed instance, to view resources created:
1. Open the resource group for your managed instance.
View and fine-tune network settings
To optionally fine-tune networking settings, inspect the following:
1. In the list of resources, select the route table to review the user-defined Route table (UDR) object that was
created.
2. In the route table, review the entries to route traffic from and within the SQL Managed Instance virtual
network. If you create or configure your route table manually, create these entries in the SQL Managed
Instance route table.

To change or add routes, open the Routes in the Route table settings.
3. Return to the resource group, and select the network security group (NSG) object that was created.
4. Review the inbound and outbound security rules.
To change or add rules, open the Inbound Security Rules and Outbound security rules in the
Network security group settings.

IMPORTANT
If you have configured a public endpoint for SQL Managed Instance, you need to open ports to allow network
connections to SQL Managed Instance from the public internet. For more information, see Configure a public
endpoint for SQL Managed Instance.

Retrieve connection details to SQL Managed Instance


To connect to SQL Managed Instance, follow these steps to retrieve the host name and fully qualified domain
name (FQDN):
1. Return to the resource group and select the SQL managed instance object that was created.
2. On the Overview tab, locate the Host property. Select the Copy to clipboard button to copy the host name of the
managed instance to your clipboard for use in the next quickstart.

The value copied represents a fully qualified domain name (FQDN) that can be used to connect to SQL
Managed Instance. It is similar to the following address example:
your_host_name.a1b2c3d4e5f6.database.windows.net.
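
As an illustrative sketch (not part of the original quickstart; all values are placeholders), once a client has network connectivity to the instance, for example from a VM in the same VNet, the copied FQDN can be used directly as the server name:

# Minimal sketch: connect with the copied FQDN, assuming the SqlServer module is installed and the
# client has network connectivity to the instance (for example, a VM in the same VNet).
Invoke-Sqlcmd -ServerInstance "your_host_name.a1b2c3d4e5f6.database.windows.net" `
    -Database "master" `
    -Username "<admin-login>" -Password "<admin-password>" `
    -Query "SELECT @@VERSION AS EngineVersion;"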

Next steps
To learn about how to connect to SQL Managed Instance:
For an overview of the connection options for applications, see Connect your applications to SQL Managed
Instance.
For a quickstart that shows how to connect to SQL Managed Instance from an Azure virtual machine, see
Configure an Azure virtual machine connection.
For a quickstart that shows how to connect to SQL Managed Instance from an on-premises client computer
by using a point-to-site connection, see Configure a point-to-site connection.
To restore an existing SQL Server database from on-premises to SQL Managed Instance:
Use the Azure Database Migration Service for migration to restore from a database backup file.
Use the T-SQL RESTORE command to restore from a database backup file.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance by using Azure SQL Analytics.
Quickstart: Create a managed instance using Azure
PowerShell
7/12/2022 • 2 minutes to read

In this quickstart, learn to create an instance of Azure SQL Managed Instance using Azure PowerShell.

Prerequisite
An active Azure subscription. If you don't have one, create a free account.
The latest version of Azure PowerShell.

Set variables
Creating a SQL Managed Instance requires creating several resources within Azure, and as such, the Azure
PowerShell commands rely on variables to simplify the experience. Define the variables, and then execute the
cmdlets in each section within the same PowerShell session.

$NSnetworkModels = "Microsoft.Azure.Commands.Network.Models"
$NScollections = "System.Collections.Generic"
# The SubscriptionId in which to create these objects
$SubscriptionId = ''
# Set the resource group name and location for your managed instance
$resourceGroupName = "myResourceGroup-$(Get-Random)"
$location = "eastus2"
# Set the networking values for your managed instance
$vNetName = "myVnet-$(Get-Random)"
$vNetAddressPrefix = "10.0.0.0/16"
$miSubnetName = "myMISubnet-$(Get-Random)"
$miSubnetAddressPrefix = "10.0.0.0/24"
#Set the managed instance name for the new managed instance
$instanceName = "myMIName-$(Get-Random)"
# Set the admin login and password for your managed instance
$miAdminSqlLogin = "SqlAdmin"
$miAdminSqlPassword = "ChangeYourAdminPassword1"
# Set the managed instance service tier, compute level, and license mode
$edition = "GeneralPurpose"
$vCores = 4
$maxStorage = 128
$computeGeneration = "Gen5"
$license = "LicenseIncluded" # Use "BasePrice" if you have a SQL Server license that can be used for the Azure Hybrid Benefit discount; otherwise use "LicenseIncluded"

Create resource group


First, connect to Azure, set your subscription context, and create your resource group.
To do so, execute this PowerShell script:
## Connect to Azure
Connect-AzAccount

# Set subscription context


Set-AzContext -SubscriptionId $SubscriptionId

# Create a resource group


$resourceGroup = New-AzResourceGroup -Name $resourceGroupName -Location $location -Tag @{Owner="SQLDB-Samples"}

Configure networking
After your resource group is created, configure the networking resources such as the virtual network, subnets,
network security group, and routing table. This example demonstrates the use of the Delegate subnet for
Managed Instance deployment script, which is available on GitHub as delegate-subnet.ps1.
To do so, execute this PowerShell script:

# Configure virtual network, subnets, network security group, and routing table
$virtualNetwork = New-AzVirtualNetwork `
-ResourceGroupName $resourceGroupName `
-Location $location `
-Name $vNetName `
-AddressPrefix $vNetAddressPrefix

Add-AzVirtualNetworkSubnetConfig `
-Name $miSubnetName `
-VirtualNetwork $virtualNetwork `
-AddressPrefix $miSubnetAddressPrefix |
Set-AzVirtualNetwork

$scriptUrlBase = 'https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/manage/azure-sql-db-managed-instance/delegate-subnet'

$parameters = @{
subscriptionId = $SubscriptionId
resourceGroupName = $resourceGroupName
virtualNetworkName = $vNetName
subnetName = $miSubnetName
}

Invoke-Command -ScriptBlock ([Scriptblock]::Create((iwr ($scriptUrlBase+'/delegateSubnet.ps1?t='+ [DateTime]::Now.Ticks)).Content)) -ArgumentList $parameters

$virtualNetwork = Get-AzVirtualNetwork -Name $vNetName -ResourceGroupName $resourceGroupName


$miSubnet = Get-AzVirtualNetworkSubnetConfig -Name $miSubnetName -VirtualNetwork $virtualNetwork
$miSubnetConfigId = $miSubnet.Id

Create managed instance


For added security, create a complex and randomized password for your SQL Managed Instance credential:

# Create credentials
$secpassword = ConvertTo-SecureString $miAdminSqlPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($miAdminSqlLogin, $secpassword)

Then create your SQL Managed Instance:


# Create managed instance
New-AzSqlInstance -Name $instanceName `
    -ResourceGroupName $resourceGroupName -Location $location -SubnetId $miSubnetConfigId `
    -AdministratorCredential $credential `
    -StorageSizeInGB $maxStorage -VCore $vCores -Edition $edition `
    -ComputeGeneration $computeGeneration -LicenseType $license

This operation may take some time to complete. To learn more, see Management operations.
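
Once the command returns, you can, for example, confirm the instance and inspect its properties with the same Get-AzSqlInstance call used elsewhere in this documentation (a small optional check, using the variables defined above):

# Optional check: retrieve the newly created managed instance and inspect its properties.
Get-AzSqlInstance -Name $instanceName -ResourceGroupName $resourceGroupName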

Clean up resources
Keep the resource group and managed instance to go on to the next steps, and learn how to connect to your
SQL Managed Instance using a client virtual machine.
When you're finished using these resources, you can delete the resource group you created, which will also
delete the managed instance and all related resources within it.

# Clean up deployment
Remove-AzResourceGroup -ResourceGroupName $resourceGroupName

Next steps
After your SQL Managed Instance is created, deploy a client VM to connect to your SQL Managed Instance, and
restore a sample database.
Create client VM
Restore database
Quickstart: Create an Azure SQL Managed Instance
using Bicep
7/12/2022 • 3 minutes to read

This quickstart focuses on the process of deploying a Bicep file to create an Azure SQL Managed Instance and
vNet. Azure SQL Managed Instance is an intelligent, fully managed, scalable cloud database, with almost 100%
feature parity with the SQL Server database engine.
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides
concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for
your infrastructure-as-code solutions in Azure.

Prerequisites
If you don't have an Azure subscription, create a free account.

Review the Bicep file


The Bicep file used in this quickstart is from Azure Quickstart Templates.

@description('Enter managed instance name.')


param managedInstanceName string

@description('Enter user name.')


param administratorLogin string

@description('Enter password.')
@secure()
param administratorLoginPassword string

@description('Enter location. If you leave this field blank resource group location would be used.')
param location string = resourceGroup().location

@description('Enter virtual network name. If you leave this field blank name will be created by the
template.')
param virtualNetworkName string = 'SQLMI-VNET'

@description('Enter virtual network address prefix.')


param addressPrefix string = '10.0.0.0/16'

@description('Enter subnet name.')


param subnetName string = 'ManagedInstance'

@description('Enter subnet address prefix.')


param subnetPrefix string = '10.0.0.0/24'

@description('Enter sku name.')


@allowed([
'GP_Gen5'
'BC_Gen5'
])
param skuName string = 'GP_Gen5'

@description('Enter number of vCores.')


@allowed([
8
16
24
32
40
64
80
])
param vCores int = 16

@description('Enter storage size.')


@minValue(32)
@maxValue(8192)
param storageSizeInGB int = 256

@description('Enter license type.')


@allowed([
'BasePrice'
'LicenseIncluded'
])
param licenseType string = 'LicenseIncluded'

var networkSecurityGroupName = 'SQLMI-${managedInstanceName}-NSG'


var routeTableName = 'SQLMI-${managedInstanceName}-Route-Table'

resource networkSecurityGroup 'Microsoft.Network/networkSecurityGroups@2021-08-01' = {


name: networkSecurityGroupName
location: location
properties: {
securityRules: [
{
name: 'allow_tds_inbound'
properties: {
description: 'Allow access to data'
protocol: 'Tcp'
sourcePortRange: '*'
destinationPortRange: '1433'
sourceAddressPrefix: 'VirtualNetwork'
destinationAddressPrefix: '*'
access: 'Allow'
priority: 1000
direction: 'Inbound'
}
}
{
name: 'allow_redirect_inbound'
properties: {
description: 'Allow inbound redirect traffic to Managed Instance inside the virtual network'
protocol: 'Tcp'
sourcePortRange: '*'
destinationPortRange: '11000-11999'
sourceAddressPrefix: 'VirtualNetwork'
destinationAddressPrefix: '*'
access: 'Allow'
priority: 1100
direction: 'Inbound'
}
}
{
name: 'deny_all_inbound'
properties: {
description: 'Deny all other inbound traffic'
protocol: '*'
sourcePortRange: '*'
destinationPortRange: '*'
sourceAddressPrefix: '*'
destinationAddressPrefix: '*'
access: 'Deny'
priority: 4096
direction: 'Inbound'
}
}
}
{
name: 'deny_all_outbound'
properties: {
description: 'Deny all other outbound traffic'
protocol: '*'
sourcePortRange: '*'
destinationPortRange: '*'
sourceAddressPrefix: '*'
destinationAddressPrefix: '*'
access: 'Deny'
priority: 4096
direction: 'Outbound'
}
}
]
}
}

resource routeTable 'Microsoft.Network/routeTables@2021-08-01' = {


name: routeTableName
location: location
properties: {
disableBgpRoutePropagation: false
}
}

resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-08-01' = {


name: virtualNetworkName
location: location
properties: {
addressSpace: {
addressPrefixes: [
addressPrefix
]
}
subnets: [
{
name: subnetName
properties: {
addressPrefix: subnetPrefix
routeTable: {
id: routeTable.id
}
networkSecurityGroup: {
id: networkSecurityGroup.id
}
delegations: [
{
name: 'managedInstanceDelegation'
properties: {
serviceName: 'Microsoft.Sql/managedInstances'
}
}
]
}
}
]
}
}

resource managedInstance 'Microsoft.Sql/managedInstances@2021-11-01-preview' = {


name: managedInstanceName
location: location
sku: {
name: skuName
}
identity: {
type: 'SystemAssigned'
}
  dependsOn: [
    virtualNetwork
  ]
  properties: {
    administratorLogin: administratorLogin
    administratorLoginPassword: administratorLoginPassword
    subnetId: resourceId('Microsoft.Network/virtualNetworks/subnets', virtualNetworkName, subnetName)
    storageSizeInGB: storageSizeInGB
    vCores: vCores
    licenseType: licenseType
  }
}

These resources are defined in the Bicep file:


Microsoft.Network/networkSecurityGroups
Microsoft.Network/routeTables
Microsoft.Network/virtualNetworks
Microsoft.Sql/managedinstances

Deploy the Bicep file


1. Save the Bicep file as main.bicep to your local computer.
2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.

CLI
PowerShell

az group create --name exampleRG --location eastus


az deployment group create --resource-group exampleRG --template-file main.bicep --parameters managedInstanceName=<instance-name> administratorLogin=<admin-login>

NOTE
Replace <instance-name> with the name of the managed instance. Replace <admin-login> with the administrator
username. You'll be prompted to enter administratorLoginPassword .
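
The PowerShell tab is not reproduced above. As a hedged sketch only, an equivalent deployment with the same placeholder values might look like the following, assuming an Az PowerShell version recent enough to accept a .bicep file for -TemplateFile (you'll likewise be prompted for administratorLoginPassword):

# Minimal sketch: deploy main.bicep with Azure PowerShell; placeholder values match the CLI example above.
New-AzResourceGroup -Name "exampleRG" -Location "eastus"

New-AzResourceGroupDeployment -ResourceGroupName "exampleRG" -TemplateFile "./main.bicep" `
    -managedInstanceName "<instance-name>" -administratorLogin "<admin-login>"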

When the deployment finishes, you should see a message indicating the deployment succeeded.

Review deployed resources


Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.

CLI
PowerShell

az resource list --resource-group exampleRG

Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and
its resources.
CLI
PowerShell

az group delete --name exampleRG

Next steps
Configure an Azure VM to connect to Azure SQL Managed Instance
Quickstart: Create an Azure SQL Managed Instance
using an ARM template
7/12/2022 • 4 minutes to read

This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to
create an Azure SQL Managed Instance and vNet. Azure SQL Managed Instance is an intelligent, fully managed,
scalable cloud database, with almost 100% feature parity with the SQL Server database engine.
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment
without writing the sequence of programming commands to create the deployment.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to
Azure button. The template will open in the Azure portal.

Prerequisites
If you don't have an Azure subscription, create a free account.

Review the template


The template used in this quickstart is from Azure Quickstart Templates.

{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.6.1.6515",
"templateHash": "13317687096436273875"
}
},
"parameters": {
"managedInstanceName": {
"type": "string",
"metadata": {
"description": "Enter managed instance name."
}
},
"administratorLogin": {
"type": "string",
"metadata": {
"description": "Enter user name."
}
},
"administratorLoginPassword": {
"type": "secureString",
"metadata": {
"description": "Enter password."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Enter location. If you leave this field blank resource group location would be
used."
}
},
"virtualNetworkName": {
"type": "string",
"defaultValue": "SQLMI-VNET",
"metadata": {
"description": "Enter virtual network name. If you leave this field blank name will be created by
the template."
}
},
"addressPrefix": {
"type": "string",
"defaultValue": "10.0.0.0/16",
"metadata": {
"description": "Enter virtual network address prefix."
}
},
"subnetName": {
"type": "string",
"defaultValue": "ManagedInstance",
"metadata": {
"description": "Enter subnet name."
}
},
"subnetPrefix": {
"type": "string",
"defaultValue": "10.0.0.0/24",
"metadata": {
"description": "Enter subnet address prefix."
}
},
"skuName": {
"type": "string",
"defaultValue": "GP_Gen5",
"allowedValues": [
"GP_Gen5",
"BC_Gen5"
],
"metadata": {
"description": "Enter sku name."
}
},
"vCores": {
"type": "int",
"defaultValue": 16,
"allowedValues": [
8,
16,
24,
32,
40,
64,
80
],
"metadata": {
"description": "Enter number of vCores."
}
},
"storageSizeInGB": {
"type": "int",
"defaultValue": 256,
"maxValue": 8192,
"minValue": 32,
"metadata": {
"description": "Enter storage size."
}
},
"licenseType": {
"type": "string",
"defaultValue": "LicenseIncluded",
"allowedValues": [
"BasePrice",
"LicenseIncluded"
],
"metadata": {
"description": "Enter license type."
}
}
},
"variables": {
"networkSecurityGroupName": "[format('SQLMI-{0}-NSG', parameters('managedInstanceName'))]",
"routeTableName": "[format('SQLMI-{0}-Route-Table', parameters('managedInstanceName'))]"
},
"resources": [
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2021-08-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": [
{
"name": "allow_tds_inbound",
"properties": {
"description": "Allow access to data",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "1433",
"sourceAddressPrefix": "VirtualNetwork",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
},
{
"name": "allow_redirect_inbound",
"properties": {
"description": "Allow inbound redirect traffic to Managed Instance inside the virtual
network",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "11000-11999",
"sourceAddressPrefix": "VirtualNetwork",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1100,
"direction": "Inbound"
}
},
{
"name": "deny_all_inbound",
"properties": {
"description": "Deny all other inbound traffic",
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Deny",
"priority": 4096,
"direction": "Inbound"
}
},
{
"name": "deny_all_outbound",
"properties": {
"description": "Deny all other outbound traffic",
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Deny",
"priority": 4096,
"direction": "Outbound"
}
}
]
}
},
{
"type": "Microsoft.Network/routeTables",
"apiVersion": "2021-08-01",
"name": "[variables('routeTableName')]",
"location": "[parameters('location')]",
"properties": {
"disableBgpRoutePropagation": false
}
},
{
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2021-08-01",
"name": "[parameters('virtualNetworkName')]",
"location": "[parameters('location')]",
"properties": {
"addressSpace": {
"addressPrefixes": [
"[parameters('addressPrefix')]"
]
},
"subnets": [
{
"name": "[parameters('subnetName')]",
"properties": {
"addressPrefix": "[parameters('subnetPrefix')]",
"routeTable": {
"id": "[resourceId('Microsoft.Network/routeTables', variables('routeTableName'))]"
},
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]"
},
"delegations": [
{
"name": "managedInstanceDelegation",
"properties": {
"serviceName": "Microsoft.Sql/managedInstances"
}
}
]
}
}
]
},
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]",
"[resourceId('Microsoft.Network/routeTables', variables('routeTableName'))]"
]
},
{
"type": "Microsoft.Sql/managedInstances",
"apiVersion": "2021-11-01-preview",
"apiVersion": "2021-11-01-preview",
"name": "[parameters('managedInstanceName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('skuName')]"
},
"identity": {
"type": "SystemAssigned"
},
"properties": {
"administratorLogin": "[parameters('administratorLogin')]",
"administratorLoginPassword": "[parameters('administratorLoginPassword')]",
"subnetId": "[resourceId('Microsoft.Network/virtualNetworks/subnets',
parameters('virtualNetworkName'), parameters('subnetName'))]",
"storageSizeInGB": "[parameters('storageSizeInGB')]",
"vCores": "[parameters('vCores')]",
"licenseType": "[parameters('licenseType')]"
},
"dependsOn": [
"[resourceId('Microsoft.Network/virtualNetworks', parameters('virtualNetworkName'))]"
]
}
]
}

These resources are defined in the template:


Microsoft.Network/networkSecurityGroups
Microsoft.Network/routeTables
Microsoft.Network/virtualNetworks
Microsoft.Sql/managedinstances
More template samples can be found in Azure Quickstart Templates.

Deploy the template


Select Try it from the following PowerShell code block to open Azure Cloud Shell.

IMPORTANT
Deploying a managed instance is a long-running operation. Deployment of the first instance in the subnet typically takes
much longer than deploying into a subnet with existing managed instances. For average provisioning times, see SQL
Managed Instance management operations.

PowerShell
Azure CLI

$projectName = Read-Host -Prompt "Enter a project name that is used for generating resource names"
$location = Read-Host -Prompt "Enter the location (i.e. centralus)"
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-
templates/master/quickstarts/microsoft.sql/sqlmi-new-vnet/azuredeploy.json"

$resourceGroupName = "${projectName}rg"

New-AzResourceGroup -Name $resourceGroupName -Location $location


New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri

Read-Host -Prompt "Press [ENTER] to continue ..."


Review deployed resources
Visit the Azure portal and verify the managed instance is in your selected resource group. Because creating a
managed instance can take some time, you might need to check the Deployments link on your resource
group's Overview page.
For a quickstart that shows how to connect to SQL Managed Instance from an Azure virtual machine, see
Configure an Azure virtual machine connection.
For a quickstart that shows how to connect to SQL Managed Instance from an on-premises client computer
by using a point-to-site connection, see Configure a point-to-site connection.

Clean up resources
Keep the managed instance if you want to go to the Next steps, but delete the managed instance and related
resources after completing any additional tutorials. After deleting a managed instance, see Delete a subnet after
deleting a managed instance.
To delete the resource group:

PowerShell
Azure CLI

$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"


Remove-AzResourceGroup -Name $resourceGroupName

Next steps
Configure an Azure VM to connect to Azure SQL Managed Instance
Deploy Azure SQL Managed Instance to an
instance pool
7/12/2022 • 8 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article provides details on how to create an instance pool and deploy Azure SQL Managed Instance to it.

Instance pool operations


The following table shows the available operations related to instance pools and their availability in the Azure
portal, PowerShell, and Azure CLI.

COMMAND | AZURE PORTAL | POWERSHELL | AZURE CLI
Create an instance pool | No | Yes | Yes
Update an instance pool (limited number of properties) | No | Yes | Yes
Check an instance pool usage and properties | No | Yes | Yes
Delete an instance pool | No | Yes | Yes
Create a managed instance inside an instance pool | No | Yes | No
Update resource usage for a managed instance | Yes | Yes | No
Check usage and properties for a managed instance | Yes | Yes | No
Delete a managed instance from the pool | Yes | Yes | No
Create a database in an instance within the pool | Yes | Yes | No
Delete a database from SQL Managed Instance | Yes | Yes | No

PowerShell
Azure CLI

To use PowerShell, install the latest version of PowerShell Core, and follow instructions to Install the Azure
PowerShell module.
Available PowerShell commands:
CMDLET | DESCRIPTION
New-AzSqlInstancePool | Creates a SQL Managed Instance pool.
Get-AzSqlInstancePool | Returns information about an instance pool.
Set-AzSqlInstancePool | Sets properties for an instance pool in SQL Managed Instance.
Remove-AzSqlInstancePool | Removes an instance pool in SQL Managed Instance.
Get-AzSqlInstancePoolUsage | Returns information about SQL Managed Instance pool usage.

For operations related to instances both inside pools and single instances, use the standard managed instance
commands, but the instance pool name property must be populated when using these commands for an
instance in a pool.

Deployment process
To deploy a managed instance into an instance pool, you must first deploy the instance pool, which is a one-time
long-running operation where the duration is the same as deploying a single instance created in an empty
subnet. After that, you can deploy a managed instance into the pool, which is a relatively fast operation that
typically takes up to five minutes. The instance pool parameter must be explicitly specified as part of this
operation.
In public preview, both actions are only supported using PowerShell and Azure Resource Manager templates.
The Azure portal experience is not currently available.
After a managed instance is deployed to a pool, you can use the Azure portal to change its properties on the
pricing tier page.

Create a virtual network with a subnet


To place multiple instance pools inside the same virtual network, see the following articles:
Determine VNet subnet size for Azure SQL Managed Instance.
Create new virtual network and subnet using the Azure portal template or follow the instructions for
preparing an existing virtual network.

Create an instance pool


After completing the previous steps, you are ready to create an instance pool.
The following restrictions apply to instance pools:
Only General Purpose and Gen5 are available in public preview.
The pool name can contain only lowercase letters, numbers and hyphens, and can't start with a hyphen.
If you want to use Azure Hybrid Benefit, it is applied at the instance pool level. You can set the license type
during pool creation or update it anytime after creation.

IMPORTANT
Deploying an instance pool is a long-running operation that takes approximately 4.5 hours.
PowerShell
Azure CLI

To get network parameters:

$virtualNetwork = Get-AzVirtualNetwork -Name "miPoolVirtualNetwork" -ResourceGroupName "myResourceGroup"


$subnet = Get-AzVirtualNetworkSubnetConfig -Name "miPoolSubnet" -VirtualNetwork $virtualNetwork

To create an instance pool:

$instancePool = New-AzSqlInstancePool `
-ResourceGroupName "myResourceGroup" `
-Name "mi-pool-name" `
-SubnetId $subnet.Id `
-LicenseType "LicenseIncluded" `
-VCore 8 `
-Edition "GeneralPurpose" `
-ComputeGeneration "Gen5" `
-Location "westeurope"

IMPORTANT
Because deploying an instance pool is a long-running operation, you need to wait until it completes before running any of
the following steps in this article.

Create a managed instance


After the successful deployment of the instance pool, it's time to create a managed instance inside it.
To create a managed instance, execute the following command:

$instanceOne = $instancePool | New-AzSqlInstance -Name "mi-one-name" -VCore 2 -StorageSizeInGB 256

Deploying an instance inside a pool takes a couple of minutes. After the first instance has been created,
additional instances can be created:

$instanceTwo = $instancePool | New-AzSqlInstance -Name "mi-two-name" -VCore 4 -StorageSizeInGB 512

Create a database
To create and manage databases in a managed instance that's inside a pool, use the single instance commands.
To create a database inside a managed instance:

$poolinstancedb = New-AzSqlInstanceDatabase -Name "mipooldb1" -InstanceName "poolmi-001" -ResourceGroupName "myResourceGroup"

Get pool usage


To get a list of instances inside a pool:
$instancePool | Get-AzSqlInstance

To get pool resource usage:

$instancePool | Get-AzSqlInstancePoolUsage

To get detailed usage overview of the pool and instances inside it:

$instancePool | Get-AzSqlInstancePoolUsage -ExpandChildren

To list the databases in an instance:

$databases = Get-AzSqlInstanceDatabase -InstanceName "pool-mi-001" -ResourceGroupName "resource-group-name"

NOTE
To check the limits on the number of databases per instance pool and per managed instance deployed inside the pool, see the
Instance pool resource limits section.

Scale
After populating a managed instance with databases, you may hit instance limits regarding storage or
performance. In that case, if pool usage has not been exceeded, you can scale your instance. Scaling a managed
instance inside a pool is an operation that takes a couple of minutes. The prerequisite for scaling is available
vCores and storage on the instance pool level.
To update the number of vCores and storage size:

$instanceOne | Set-AzSqlInstance -VCore 8 -StorageSizeInGB 512 -InstancePoolName "mi-pool-name"

To update storage size only:

$instance | Set-AzSqlInstance -StorageSizeInGB 1024 -InstancePoolName "mi-pool-name"

Connect
To connect to a managed instance in a pool, the following two steps are required:
1. Enable the public endpoint for the instance.
2. Add an inbound rule to the network security group (NSG).
After both steps are complete, you can connect to the instance by using a public endpoint address, port, and
credentials provided during instance creation.
Enable the public endpoint
Enabling the public endpoint for an instance can be done through the Azure portal or by using the following
PowerShell command:
$instanceOne | Set-AzSqlInstance -InstancePoolName "pool-mi-001" -PublicDataEndpointEnabled $true

This parameter can be set during instance creation as well.


Add an inbound rule to the network security group
This step can be done through the Azure portal or using PowerShell commands, and can be done anytime after
the subnet is prepared for the managed instance.
For details, see Allow public endpoint traffic on the network security group.

Move an existing single instance to a pool


Moving instances in and out of a pool is one of the public preview limitations. A workaround relies on point-in-
time restore of databases from an instance outside a pool to an instance that's already in a pool.
Both instances must be in the same subscription and region. Cross-region and cross-subscription restore is not
currently supported.
This process does have a period of downtime.
To move existing databases:
1. Pause workloads on the managed instance you are migrating from.
2. Generate scripts to create system databases and execute them on the instance that's inside the instance
pool.
3. Do a point-in-time restore of each database from the single instance to the instance in the pool.

$resourceGroupName = "my resource group name"


$managedInstanceName = "my managed instance name"
$databaseName = "my source database name"
$pointInTime = "2019-08-21T08:51:39.3882806Z"
$targetDatabase = "name of the new database that will be created"
$targetResourceGroupName = "resource group of instance pool"
$targetInstanceName = "pool instance name"

Restore-AzSqlInstanceDatabase -FromPointInTimeBackup `
-ResourceGroupName $resourceGroupName `
-InstanceName $managedInstanceName `
-Name $databaseName `
-PointInTime $pointInTime `
-TargetInstanceDatabaseName $targetDatabase `
-TargetResourceGroupName $targetResourceGroupName `
-TargetInstanceName $targetInstanceName

4. Point your application to the new instance and resume its workloads.
If there are multiple databases, repeat the process for each database.

Next steps
For a features and comparison list, see SQL common features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance using Azure SQL Analytics.
For pricing information, see SQL Managed Instance pricing.
Create an Azure SQL Managed Instance with a
user-assigned managed identity
7/12/2022 • 10 minutes to read

APPLIES TO: Azure SQL Managed Instance

NOTE
If you are looking for a guide on Azure SQL Database, see Create an Azure SQL logical server using a user-assigned
managed identity

This how-to guide outlines the steps to create an Azure SQL Managed Instance with a user-assigned managed
identity. For more information on the benefits of using a user-assigned managed identity for the server identity
in Azure SQL Database, see User-assigned managed identity in Azure AD for Azure SQL.

Prerequisites
To provision a Managed Instance with a user-assigned managed identity, the SQL Managed Instance
Contributor role (or a role with greater permissions), along with an Azure RBAC role containing the following
action is required:
Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action - For example, the Managed
Identity Operator has this action.
Create a user-assigned managed identity and assign it the necessary permission to be a server or managed
instance identity. For more information, see Manage user-assigned managed identities and user-assigned
managed identity permissions for Azure SQL.
Az.Sql module 3.4 or higher is required when using PowerShell for user-assigned managed identities.
The Azure CLI 2.26.0 or higher is required to use the Azure CLI with user-assigned managed identities.
For a list of limitations and known issues with using user-assigned managed identity, see User-assigned
managed identity in Azure AD for Azure SQL

Portal
The Azure CLI
PowerShell
REST API
ARM Template

1. Browse to the Select SQL deployment option page in the Azure portal.
2. If you aren't already signed in to Azure portal, sign in when prompted.
3. Under SQL managed instances , leave Resource type set to Single instance , and select Create .
4. Fill out the mandatory information required on the Basics tab for Project details and Managed
Instance details . This is a minimum set of information required to provision a SQL Managed Instance.
For more information on the configuration options, see Quickstart: Create an Azure SQL Managed
Instance.
5. Under Authentication , select a preferred authentication model. If you're looking to only configure Azure
AD-only authentication, see our guide here.
6. Next, go through the Networking tab configuration, or leave the default settings.
7. On the Security tab, under Identity , select Configure Identities .

8. On the Identity blade, under User assigned managed identity , select Add . Select the desired
Subscription and then under User assigned managed identities select the desired user assigned
managed identity from the selected subscription. Then select the Select button.
9. Under Primary identity, select the same user-assigned managed identity selected in the previous step.
NOTE
If the system-assigned managed identity is the primary identity, the Primary identity field must be empty.

10. Select Apply


11. You can leave the rest of the settings default. For more information on other tabs and settings, follow the
guide in the article Quickstart: Create an Azure SQL Managed Instance.
12. Once you are done with configuring your settings, select Review + create to proceed. Select Create to
start provisioning the managed instance.
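
The PowerShell tab from the list above isn't reproduced in this extract. As a hedged sketch only, using the user-assigned identity parameters that Az.Sql 3.4+ is expected to expose on New-AzSqlInstance (the parameter names -IdentityType, -UserAssignedIdentityId, and -PrimaryUserAssignedIdentityId, as well as every resource name and ID below, are assumptions/placeholders, not confirmed by this article):

# Hedged sketch only: parameter names are assumed from Az.Sql 3.4+; all IDs and names are placeholders.
$umiResourceId = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<umi-name>"

New-AzSqlInstance -Name "myManagedInstance" -ResourceGroupName "myResourceGroup" `
    -Location "eastus2" -SubnetId "<subnet-resource-id>" `
    -AdministratorCredential (Get-Credential) `
    -Edition "GeneralPurpose" -ComputeGeneration "Gen5" -VCore 8 -StorageSizeInGB 256 `
    -IdentityType "UserAssigned" `
    -UserAssignedIdentityId $umiResourceId `
    -PrimaryUserAssignedIdentityId $umiResourceId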

See also
User-assigned managed identity in Azure AD for Azure SQL
Create an Azure SQL logical server using a user-assigned managed identity
Enabling service-aided subnet configuration for
Azure SQL Managed Instance
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Managed Instance


Service-aided subnet configuration provides automated network configuration management for subnets
hosting managed instances. With service-aided subnet configuration, the user stays in full control of access to data
(TDS traffic flows), while the managed instance takes responsibility for ensuring an uninterrupted flow of management
traffic in order to fulfill the SLA.
Automatically configured network security groups and route table rules are visible to the customer and annotated
with the prefix Microsoft.Sql-managedInstances_UseOnly_.
Service-aided configuration is enabled automatically once you turn on subnet delegation for the
Microsoft.Sql/managedInstances resource provider.

IMPORTANT
Once subnet delegation is turned on, you cannot turn it off until the very last virtual cluster is removed from the
subnet. For more details on virtual cluster lifetime, see the following article.

NOTE
As service-aided subnet configuration is an essential feature for maintaining the SLA, starting May 1st, 2020, it won't be possible
to deploy managed instances in subnets that are not delegated to the managed instance resource provider. On July 1st, 2020,
all subnets containing managed instances will be automatically delegated to the managed instance resource provider.

Enabling subnet-delegation for new deployments


To deploy a managed instance into an empty subnet, you need to delegate the subnet to the Microsoft.Sql/managedInstances
resource provider as described in the following article. Note that the referenced article uses the
Microsoft.DBforPostgreSQL/serversv2 resource provider as an example. You'll need to use the
Microsoft.Sql/managedInstances resource provider instead.

Enabling subnet-delegation for existing deployments


In order to enable subnet delegation for your existing managed instance deployment, you need to find the
virtual network subnet where the instance is placed.
To do this, you can check Virtual network/subnet on the Overview portal blade of your managed instance.
As an alternative, you could run the following PowerShell commands to find it. Replace subscription-id
with your subscription ID. Also replace rg-name with the resource group for your managed instance, and
replace mi-name with the name of your managed instance.
Install-Module -Name Az

Import-Module Az.Accounts
Import-Module Az.Sql

Connect-AzAccount

# Use your subscription ID in place of subscription-id below

Select-AzSubscription -SubscriptionId {subscription-id}

# Replace rg-name with the resource group for your managed instance, and replace mi-name with the name of
your managed instance

$mi = Get-AzSqlInstance -ResourceGroupName {rg-name} -Name {mi-name}

$mi.SubnetId

Once you find the managed instance subnet, you need to delegate it to the Microsoft.Sql/managedInstances resource
provider as described in the following article. Note that the referenced article uses the
Microsoft.DBforPostgreSQL/serversv2 resource provider as an example. You'll need to use the
Microsoft.Sql/managedInstances resource provider instead.
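
As a hedged sketch of that delegation step with PowerShell (the virtual network, subnet, and resource group names below are placeholders; only the Microsoft.Sql/managedInstances service name is fixed):

# Minimal sketch: delegate an existing subnet to the Microsoft.Sql/managedInstances resource provider.
$vnet = Get-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myResourceGroup"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "myMISubnet" -VirtualNetwork $vnet

# Add the delegation to the subnet configuration and apply it to the virtual network.
$subnet = Add-AzDelegation -Name "miDelegation" -ServiceName "Microsoft.Sql/managedInstances" -Subnet $subnet
Set-AzVirtualNetwork -VirtualNetwork $vnet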

IMPORTANT
Enabling service-aided configuration doesn't cause failover or interruption in connectivity for managed instances that are
already in the subnet.
Configure public endpoint in Azure SQL Managed
Instance
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Managed Instance


Public endpoint for a managed instance enables data access to your managed instance from outside the virtual
network. You are able to access your managed instance from multi-tenant Azure services like Power BI, Azure
App Service, or an on-premises network. By using the public endpoint on a managed instance, you do not need
to use a VPN, which can help avoid VPN throughput issues.
In this article, you'll learn how to:
Enable public endpoint for your managed instance in the Azure portal
Enable public endpoint for your managed instance using PowerShell
Configure your managed instance network security group to allow traffic to the managed instance public
endpoint
Obtain the managed instance public endpoint connection string

Permissions
Due to the sensitivity of data that is in a managed instance, the configuration to enable managed instance public
endpoint requires a two-step process. This security measure adheres to separation of duties (SoD):
Enabling the public endpoint on a managed instance needs to be done by the managed instance admin. The
managed instance admin can be found on the Overview page of your managed instance resource.
Allowing traffic through a network security group needs to be done by a network admin. For more
information, see network security group permissions.

Enabling public endpoint for a managed instance in the Azure portal


1. Launch the Azure portal at https://portal.azure.com/.
2. Open the resource group with the managed instance, and select the SQL managed instance that you want to configure the public endpoint on.
3. Under Security settings, select the Virtual network tab.
4. On the Virtual network configuration page, select Enable and then the Save icon to update the configuration.
Enabling public endpoint for a managed instance using PowerShell
Enable public endpoint
Run the following PowerShell commands. Replace subscription-id with your subscription ID. Also replace rg-
name with the resource group for your managed instance, and replace mi-name with the name of your
managed instance.

Install-Module -Name Az

Import-Module Az.Accounts
Import-Module Az.Sql

Connect-AzAccount

# Use your subscription ID in place of subscription-id below

Select-AzSubscription -SubscriptionId {subscription-id}

# Replace rg-name with the resource group for your managed instance, and replace mi-name with the name of
# your managed instance

$mi = Get-AzSqlInstance -ResourceGroupName {rg-name} -Name {mi-name}

$mi = $mi | Set-AzSqlInstance -PublicDataEndpointEnabled $true -force

Disable public endpoint


To disable the public endpoint using PowerShell, execute the following command (and don't forget to close the
NSG inbound rule for port 3342 if you have it configured):

$mi | Set-AzSqlInstance -PublicDataEndpointEnabled $false -force

Allow public endpoint traffic on the network security group


1. If you have the configuration page of the managed instance still open, navigate to the Overview tab.
Otherwise, go back to your SQL managed instance resource. Select the Virtual network/subnet link,
which will take you to the Virtual network configuration page.

2. Select the Subnets tab on the left configuration pane of your virtual network, and make note of the
SECURITY GROUP for your managed instance.

3. Go back to your resource group that contains your managed instance. You should see the Network
security group name noted above. Select the name to go into the network security group configuration
page.
4. Select the Inbound security rules tab, and Add a rule that has higher priority than the
deny_all_inbound rule with the following settings:

Setting | Suggested value | Description
Source | Any IP address or Service tag | For Azure services like Power BI, select the Azure Cloud Service Tag. For your computer or Azure virtual machine, use its NAT IP address.
Source port ranges | * | Leave this as * (any); source ports are usually dynamically allocated and, as such, unpredictable.
Destination | Any | Leave the destination as Any to allow traffic into the managed instance subnet.
Destination port ranges | 3342 | Scope the destination port to 3342, which is the managed instance public TDS endpoint.
Protocol | TCP | SQL Managed Instance uses the TCP protocol for TDS.
Action | Allow | Allow inbound traffic to the managed instance through the public endpoint.
Priority | 1300 | Make sure this rule has higher priority than the deny_all_inbound rule.

NOTE
Port 3342 is used for public endpoint connections to managed instance, and cannot be changed at this point.
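
If you prefer to script this rule instead of using the portal, the following PowerShell sketch adds an equivalent inbound rule. The network security group and resource group names in braces are placeholders; replace them with the values you noted in the steps above.

# Load the network security group attached to the managed instance subnet (placeholder names)
$nsg = Get-AzNetworkSecurityGroup -Name {nsg-name} -ResourceGroupName {rg-name}

# Add an inbound rule that lets the Azure Cloud service tag reach the public TDS endpoint on port 3342
$nsg | Add-AzNetworkSecurityRuleConfig `
    -Name "allow_public_endpoint_inbound" `
    -Priority 1300 `
    -Direction Inbound `
    -Access Allow `
    -Protocol Tcp `
    -SourceAddressPrefix AzureCloud `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "*" `
    -DestinationPortRange 3342 | Set-AzNetworkSecurityGroup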

Obtaining the managed instance public endpoint connection string


1. Navigate to the managed instance configuration page that has been enabled for public endpoint. Select
the Connection strings tab under the Settings configuration.
2. Note that the public endpoint host name comes in the format
<mi_name>.public.<dns_zone>.database.windows.net and that the port used for the connection is 3342. Here's an example
of a server value of the connection string denoting the public endpoint port that can be used in SQL
Server Management Studio or Azure Data Studio connections:
<mi_name>.public.<dns_zone>.database.windows.net,3342
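
For example, assuming a hypothetical instance name, DNS zone, and admin login (all placeholders), you could verify connectivity through the public endpoint with sqlcmd from a client that the NSG rule allows:

# Placeholder server name and credentials; port 3342 selects the public endpoint
sqlcmd -S <mi_name>.public.<dns_zone>.database.windows.net,3342 -U <admin_login> -P <password> -d master -Q "SELECT @@VERSION;"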

Next steps
Learn about using Azure SQL Managed Instance securely with public endpoint.
Configure minimal TLS version in Azure SQL
Managed Instance
7/12/2022 • 2 minutes to read

The Minimal Transport Layer Security (TLS) Version setting allows customers to control the version of TLS used
by their Azure SQL Managed Instance.
At present, TLS 1.0, 1.1, and 1.2 are supported. Setting a Minimal TLS Version ensures that subsequent, newer TLS
versions are supported. For example, choosing a TLS version greater than 1.1 means only connections with
TLS 1.1 and 1.2 are accepted, and TLS 1.0 is rejected. After testing to confirm your applications support it, we
recommend setting the Minimal TLS Version to 1.2, since it includes fixes for vulnerabilities found in previous
versions and is the highest version of TLS supported in Azure SQL Managed Instance.
For customers with applications that rely on older versions of TLS, we recommend setting the Minimal TLS
Version per the requirements of your applications. For customers that rely on applications to connect using an
unencrypted connection, we recommend not setting any Minimal TLS Version.
For more information, see TLS considerations for SQL Database connectivity.
After setting the Minimal TLS Version, login attempts from clients that are using a TLS version lower than the
Minimal TLS Version of the server will fail with the following error:

Error 47072
Login failed with invalid TLS version

Set minimal TLS version via PowerShell


NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRm modules are substantially identical. The following script requires the Azure PowerShell module.

The following PowerShell script shows how to Get and Set the Minimal TLS Version property at the
instance level:

#Get the Minimal TLS Version property


(Get-AzSqlInstance -Name sql-instance-name -ResourceGroupName resource-group).MinimalTlsVersion

# Update Minimal TLS Version Property


Set-AzSqlInstance -Name sql-instance-name -ResourceGroupName resource-group -MinimalTlsVersion "1.2"
Set Minimal TLS Version via Azure CLI
IMPORTANT
All scripts in this section require the Azure CLI.

Azure CLI in a bash shell


The following CLI script shows how to change the Minimal TLS Version setting in a bash shell:

# Get current setting for Minimal TLS Version


az sql mi show -n sql-instance-name -g resource-group --query "minimalTlsVersion"

# Update setting for Minimal TLS Version


az sql mi update -n sql-instance-name -g resource-group --set minimalTlsVersion="1.2"
Quickstart: Configure an Azure VM to connect to
Azure SQL Managed Instance
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Managed Instance


This quickstart shows you how to configure an Azure virtual machine to connect to Azure SQL Managed
Instance using SQL Server Management Studio (SSMS).
For a quickstart showing how to connect from an on-premises client computer using a point-to-site connection
instead, see Configure a point-to-site connection.

Prerequisites
This quickstart uses the resources created in Create a managed instance as its starting point.

Sign in to the Azure portal


Sign in to the Azure portal.

Create a new subnet in the VNet


The following steps create a new subnet in the SQL Managed Instance VNet so an Azure virtual machine can
connect to the managed instance. The SQL Managed Instance subnet is dedicated to managed instances. You
can't create any other resources, like Azure virtual machines, in that subnet.
1. Open the resource group for the managed instance that you created in the Create a managed instance
quickstart. Select the virtual network for your managed instance.

2. Select Subnets and then select + Subnet to create a new subnet.


3. Fill out the form using the information in this table:

Setting | Suggested value | Description
Name | Any valid name | For valid names, see Naming rules and restrictions.
Address range (CIDR block) | A valid range | The default value is good for this quickstart.
Network security group | None | The default value is good for this quickstart.
Route table | None | The default value is good for this quickstart.
Service endpoints | 0 selected | The default value is good for this quickstart.
Subnet delegation | None | The default value is good for this quickstart.
4. Select OK to create this additional subnet in the SQL Managed Instance VNet.

Create a VM in the new subnet


The following steps show you how to create a virtual machine in the new subnet to connect to SQL Managed
Instance.

Prepare the Azure virtual machine


Since SQL Managed Instance is placed in your private virtual network, you need to create an Azure VM with an
installed SQL client tool, like SQL Server Management Studio or Azure Data Studio. This tool lets you connect to
SQL Managed Instance and execute queries. This quickstart uses SQL Server Management Studio.
The easiest way to create a client virtual machine with all necessary tools is to use the Azure Resource Manager
templates.
1. Make sure that you're signed in to the Azure portal in another browser tab. Then, select the following
button to create a client virtual machine and install SQL Server Management Studio:

2. Fill out the form using the information in the following table:
Setting | Suggested value | Description
Subscription | A valid subscription | Must be a subscription in which you have permission to create new resources.
Resource Group | The resource group that you specified in the Create SQL Managed Instance quickstart | This resource group must be the one in which the VNet exists.
Location | The location for the resource group | This value is populated based on the resource group selected.
Virtual machine name | Any valid name | For valid names, see Naming rules and restrictions.
Admin Username | Any valid username | For valid names, see Naming rules and restrictions. Don't use "serveradmin" because that is a reserved server-level role. You use this username any time you connect to the VM.
Password | Any valid password | The password must be at least 12 characters long and meet the defined complexity requirements. You use this password any time you connect to the VM.
Virtual Machine Size | Any valid size | The default in this template of Standard_B2s is sufficient for this quickstart.
Location | [resourceGroup().location] | Don't change this value.
Virtual Network Name | The virtual network in which you created the managed instance |
Subnet name | The name of the subnet that you created in the previous procedure | Don't choose the subnet in which you created the managed instance.
artifacts Location | [deployment().properties.templateLink.uri] | Don't change this value.
artifacts Location Sas token | Leave blank | Don't change this value.
If you used the suggested VNet name and the default subnet when creating your SQL Managed Instance, you
don't need to change the last two parameters. Otherwise, change these values to the values that
you entered when you set up the network environment.
3. Select the I agree to the terms and conditions stated above checkbox.
4. Select Purchase to deploy the Azure VM in your network.
5. Select the Notifications icon to view the status of deployment.
IMPORTANT
Wait approximately 15 minutes after the virtual machine is created, to give the post-creation scripts time to
install SQL Server Management Studio, before you continue.

Connect to the virtual machine


The following steps show you how to connect to your newly created virtual machine using a Remote Desktop
connection.
1. After deployment completes, go to the virtual machine resource.

2. Select Connect .
A Remote Desktop Protocol file (.rdp file) form appears with the public IP address and port number for
the virtual machine.
3. Select Download RDP File .

NOTE
You can also use SSH to connect to your VM.

4. Close the Connect to virtual machine form.


5. To connect to your VM, open the downloaded RDP file.
6. When prompted, select Connect . On a Mac, you need an RDP client such as this Remote Desktop Client
from the Mac App Store.
7. Enter the username and password you specified when creating the virtual machine, and then choose OK .
8. You might receive a certificate warning during the sign-in process. Choose Yes or Continue to proceed
with the connection.
You're connected to your virtual machine in the Server Manager dashboard.

Connect to SQL Managed Instance


1. In the virtual machine, open SQL Server Management Studio.
It takes a few moments to open, as it needs to complete its configuration since this is the first time SSMS
has been started.
2. In the Connect to Server dialog box, enter the fully qualified host name for your managed instance in
the Server name box. Select SQL Server Authentication, provide your username and password, and
then select Connect.
After you connect, you can view your system and user databases in the Databases node, and various objects in
the Security, Server Objects, Replication, Management, SQL Server Agent, and XEvent Profiler nodes.
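
As an illustration (the instance name, DNS zone, and credentials below are placeholders), the same connection can also be tested from the VM's command line with sqlcmd, which uses the default port 1433 over the private endpoint:

# Placeholder server name and credentials; run from the client VM inside the VNet
sqlcmd -S <mi_name>.<dns_zone>.database.windows.net -U <admin_login> -P <password> -d master -Q "SELECT name FROM sys.databases;"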

Next steps
For a quickstart showing how to connect from an on-premises client computer using a point-to-site
connection, see Configure a point-to-site connection.
For an overview of the connection options for applications, see Connect your applications to SQL Managed
Instance.
To restore an existing SQL Server database from on-premises to a managed instance, you can use Azure
Database Migration Service for migration or the T-SQL RESTORE command to restore from a database
backup file.
Quickstart: Configure a point-to-site connection to
Azure SQL Managed Instance from on-premises
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Managed Instance


This quickstart demonstrates how to connect to Azure SQL Managed Instance using SQL Server Management
Studio (SSMS) from an on-premises client computer over a point-to-site connection. For information about
point-to-site connections, see About Point-to-Site VPN.

Prerequisites
This quickstart:
Uses the resources created in Create a managed instance as its starting point.
Requires PowerShell 5.1 and Azure PowerShell 1.4.0 or later on your on-premises client computer. If
necessary, see the instructions for installing the Azure PowerShell module.
Requires the newest version of SQL Server Management Studio on your on-premises client computer.

Attach a VPN gateway to a virtual network


1. Open PowerShell on your on-premises client computer.
2. Copy this PowerShell script. This script attaches a VPN gateway to the SQL Managed Instance virtual
network that you created in the Create a managed instance quickstart. This script uses the Azure
PowerShell Az Module and does the following for either Windows or Linux-based hosts:
Creates and installs certificates on a client machine
Calculates the future VPN gateway subnet IP range
Creates the gateway subnet
Deploys the Azure Resource Manager template that attaches the VPN gateway to the VPN subnet

$scriptUrlBase = 'https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/manage/azure-sql-db-managed-instance/attach-vpn-gateway'

$parameters = @{
subscriptionId = '<subscriptionId>'
resourceGroupName = '<resourceGroupName>'
virtualNetworkName = '<virtualNetworkName>'
certificateNamePrefix = '<certificateNamePrefix>'
}

Invoke-Command -ScriptBlock ([Scriptblock]::Create((iwr ($scriptUrlBase+'/attachVPNGateway.ps1?t='+ [DateTime]::Now.Ticks)).Content)) -ArgumentList $parameters, $scriptUrlBase

3. Paste the script in your PowerShell window and provide the required parameters. The values for
<subscriptionId>, <resourceGroupName>, and <virtualNetworkName> should match the ones that you used for
the Create a managed instance quickstart. The value for <certificateNamePrefix> can be a string of your
choice.
4. Execute the PowerShell script.

IMPORTANT
Do not continue until the PowerShell script completes.

Create a VPN connection


1. Sign in to the Azure portal.
2. Open the resource group in which you created the virtual network gateway, and then open the virtual
network gateway resource.
3. Select Point-to-site configuration and then select Download VPN client .

4. On your on-premises client computer, extract the files from the zip file and then open the folder with the
extracted files.
5. Open the WindowsAmd64 folder and open the VpnClientSetupAmd64.exe file.
6. If you receive a Windows protected your PC message, click More info and then click Run anyway .
7. In the User Account Control dialog box, click Yes to continue.
8. In the dialog box referencing your virtual network, select Yes to install the VPN client for your virtual
network.

Connect to the VPN connection


1. Go to VPN in Network & Internet on your on-premises client computer and select your SQL Managed
Instance virtual network to establish a connection to this VNet. In the following image, the VNet is named
MyNewVNet .
2. Select Connect .
3. In the dialog box, select Connect .

4. When you're prompted that Connection Manager needs elevated privileges to update your route table,
choose Continue .
5. Select Yes in the User Account Control dialog box to continue.
You've established a VPN connection to your SQL Managed Instance VNet.

Connect with SSMS


1. On the on-premises client computer, open SQL Server Management Studio.
2. In the Connect to Server dialog box, enter the fully qualified host name for your managed instance in
the Server name box.
3. Select SQL Server Authentication, provide your username and password, and then select Connect.
After you connect, you can view your system and user databases in the Databases node. You can also view
various objects in the Security, Server Objects, Replication, Management, SQL Server Agent, and XEvent Profiler
nodes.

Next steps
For a quickstart showing how to connect from an Azure virtual machine, see Configure a point-to-site
connection.
For an overview of the connection options for applications, see Connect your applications to SQL Managed
Instance.
To restore an existing SQL Server database from on-premises to a managed instance, you can use Azure
Database Migration Service for migration or the T-SQL RESTORE command to restore from a database
backup file.
Manage Azure SQL Managed Instance long-term
backup retention
7/12/2022 • 10 minutes to read

APPLIES TO: Azure SQL Managed Instance


In Azure SQL Managed Instance, you can configure a long-term backup retention policy (LTR). This allows you to
automatically retain database backups in separate Azure Blob storage containers for up to 10 years. You can
then recover a database using these backups with the Azure portal and PowerShell.
The following sections show you how to use the Azure portal, PowerShell, and Azure CLI to configure the long-
term backup retention, view backups in Azure SQL storage, and restore from a backup in Azure SQL storage.

Prerequisites
Portal
Azure CLI
PowerShell

An active Azure subscription.

Create long-term retention policies


You can configure SQL Managed Instance to retain automated backups for a period longer than the retention
period for your service tier.
Portal
Azure CLI
PowerShell

1. In the Azure portal, select your managed instance and then click Backups . On the Retention policies
tab, select the database(s) on which you want to set or modify long-term backup retention policies.
Changes will not apply to any databases left unselected.

2. In the Configure policies pane, specify your desired retention period for weekly, monthly, or yearly
backups. Choose a retention period of '0' to indicate that no long-term backup retention should be set.
3. When complete, click Apply .

IMPORTANT
When you enable a long-term backup retention policy, it may take up to 7 days for the first backup to become visible and
available to restore. For details of the LTR backup cadence, see long-term backup retention.
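
The Azure CLI and PowerShell steps aren't reproduced here, but as a minimal PowerShell sketch (the resource group, instance, database names, and retention values are placeholders), a policy can be set and then read back with the Az.Sql cmdlets:

# Set weekly, monthly, and yearly long-term retention for a database (placeholder names and values)
Set-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -ResourceGroupName {rg-name} `
    -InstanceName {mi-name} -DatabaseName {db-name} `
    -WeeklyRetention P12W -MonthlyRetention P12M -YearlyRetention P5Y -WeekOfYear 16

# Read the policy back to confirm it was applied
Get-AzSqlInstanceDatabaseBackupLongTermRetentionPolicy -ResourceGroupName {rg-name} `
    -InstanceName {mi-name} -DatabaseName {db-name}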

View backups and restore from a backup


Portal
Azure CLI
PowerShell

View the backups that are retained for a specific database with an LTR policy, and restore from those backups.
1. In the Azure portal, select your managed instance and then click Backups . On the Available backups
tab, select the database for which you want to see available backups. Click Manage .
2. In the Manage backups pane, review the available backups.

3. Select the backup from which you want to restore, click Restore , then on the restore page specify the
new database name. The backup and source will be pre-populated on this page.
4. Click Review + Create to review your Restore details. Then click Create to restore your database from
the chosen backup.
5. On the toolbar, click the notification icon to view the status of the restore job.

6. When the restore job is completed, open the Managed Instance Overview page to view the newly
restored database.
NOTE
From here, you can connect to the restored database using SQL Server Management Studio to perform needed tasks,
such as to extract a bit of data from the restored database to copy into the existing database or to delete the existing
database and rename the restored database to the existing database name.
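
To script the same view-and-restore flow, a hedged PowerShell sketch (region, instance, database, and resource group names are placeholders) that lists the available LTR backups and restores the most recent one could look like this:

# List long-term retention backups for a database (placeholder names)
$ltrBackups = Get-AzSqlInstanceDatabaseLongTermRetentionBackup -Location {azure-region} `
    -InstanceName {mi-name} -DatabaseName {db-name}

# Pick the newest backup and restore it to a new database on the same instance
$latest = $ltrBackups | Sort-Object BackupTime -Descending | Select-Object -First 1
Restore-AzSqlInstanceDatabase -FromLongTermRetentionBackup -ResourceId $latest.ResourceId `
    -TargetInstanceDatabaseName {new-db-name} -TargetInstanceName {mi-name} `
    -TargetResourceGroupName {rg-name}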

Next steps
To learn about service-generated automatic backups, see automatic backups
To learn about long-term backup retention, see long-term backup retention
Quickstart: Restore a database to Azure SQL
Managed Instance with SSMS
7/12/2022 • 5 minutes to read

APPLIES TO: Azure SQL Managed Instance


In this quickstart, you'll use SQL Server Management Studio (SSMS) to restore a database from Azure Blob
Storage to Azure SQL Managed Instance.
The quickstart restores the Wide World Importers database from a backup file. You'll see two ways to restore the
database in SSMS:
A restore wizard
T-SQL statements

NOTE
For more information on migration using Azure Database Migration Service, see Tutorial: Migrate SQL Server to an
Azure Managed Instance using Database Migration Service.
For more information on various migration methods, see SQL Server to Azure SQL Managed Instance Guide.

Prerequisites
This quickstart:
Uses resources from the Create a managed instance quickstart.
Requires the latest version of SSMS installed.
Requires SSMS to connect to SQL Managed Instance. See these quickstarts on how to connect:
Enable a public endpoint on SQL Managed Instance. This approach is recommended for this
quickstart.
Connect to SQL Managed Instance from an Azure VM.
Configure a point-to-site connection to SQL Managed Instance from on-premises.

NOTE
For more information on backing up and restoring a SQL Server database by using Blob Storage and a shared access
signature key, see SQL Server Backup to URL.

Use the restore wizard to restore from a backup file


In SSMS, take the steps in the following sections to restore the Wide World Importers database to SQL Managed
Instance by using the restore wizard. The database backup file is stored in a pre-configured Blob Storage
account.
Open the restore wizard
1. Open SSMS and connect to your managed instance.
2. In Object Explorer , right-click the Databases folder of your managed instance, and then select Restore
Database to open the restore wizard.

Select the backup source


1. In the restore wizard, select the ellipsis (...) to select the source of the backup set to restore.

2. In Select backup devices , select Add . In Backup media type , URL is the only option that's available
because it's the only source type that's supported. Select OK .
3. In Select a Backup File Location , choose from one of three options to provide information about the
location of your backup files:
Select a pre-registered storage container from the Azure storage container list.
Enter a new storage container and a shared access signature. A new SQL credential will be registered
for you.
Select Add to browse more storage containers from your Azure subscription.

If you select Add , proceed to the next section, Browse Azure subscription storage containers. If you use a
different method to provide the location of the backup files, skip to Restore the database.
Browse Azure subscription storage containers
1. In Connect to a Microsoft Subscription , select Sign in to sign in to your Azure subscription.
2. Sign in to your Microsoft Account to initiate the session in Azure.

3. Select the subscription of the storage account that contains the backup files.
4. Select the storage account that contains the backup files.

5. Select the blob container that contains the backup files.


6. Enter the expiration date of the shared access policy and select Create Credential . A shared access
signature with the correct permissions is created. Select OK .

Restore the database


Now that you've selected a storage container, you should see the Locate Backup File in Microsoft Azure
dialog.
1. In the left pane, expand the folder structure to show the folder that contains the backup files. In the right
pane, select all the backup files that are related to the backup set that you're restoring, and then select OK .
SSMS validates the backup set. This process takes at most a few seconds. The duration depends on the
size of the backup set.
2. If the backup is validated, you need to specify a name for the database that's being restored. By default,
under Destination , the Database box contains the name of the backup set database. To change the
name, enter a new name for Database . Select OK .

The restore process starts. The duration depends on the size of the backup set.
3. When the restore process finishes, a dialog shows that it was successful. Select OK .

4. In Object Explorer , check the restored database.


Use T-SQL to restore from a backup file
As an alternative to the restore wizard, you can use T-SQL statements to restore a database. In SSMS, follow
these steps to restore the Wide World Importers database to SQL Managed Instance by using T-SQL. The
database backup file is stored in a pre-configured Blob Storage account.
1. Open SSMS and connect to your managed instance.
2. In Object Explorer, right-click your managed instance and select New Query to open a new query window.
3. Run the following T-SQL statement, which uses a pre-configured storage account and a shared access
signature key to create a credential in your managed instance.

IMPORTANT
CREDENTIAL must match the container path, begin with https, and can't contain a trailing forward slash.
IDENTITY must be SHARED ACCESS SIGNATURE .
SECRET must be the shared access signature token and can't contain a leading ? .

CREATE CREDENTIAL [https://mitutorials.blob.core.windows.net/databases]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE'
, SECRET = 'sv=2017-11-09&ss=bfqt&srt=sco&sp=rwdlacup&se=2028-09-06T02:52:55Z&st=2018-09-04T18:52:55Z&spr=https&sig=WOTiM%2FS4GVF%2FEEs9DGQR9Im0W%2BwndxW2CQ7%2B5fHd7Is%3D'
4. To check your credential, run the following statement, which uses a container URL to get a backup file list.

RESTORE FILELISTONLY FROM URL =


'https://mitutorials.blob.core.windows.net/databases/WideWorldImporters-Standard.bak'

5. Run the following statement to restore the Wide World Importers database.

RESTORE DATABASE [Wide World Importers] FROM URL =


'https://mitutorials.blob.core.windows.net/databases/WideWorldImporters-Standard.bak'

If the restore process is terminated with the message ID 22003, create a new backup file that contains
backup checksums, and start the restore process again. See Enable or disable backup checksums during
backup or restore.
6. Run the following statement to track the status of your restore process.

SELECT session_id as SPID, command, a.text AS Query, start_time, percent_complete


, dateadd(second,estimated_completion_time/1000, getdate()) as estimated_completion_time
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) a
WHERE r.command in ('BACKUP DATABASE','RESTORE DATABASE')

7. When the restore process finishes, view the database in Object Explorer . You can verify that the
database is restored by using the sys.dm_operation_status view.
NOTE
A database restore operation is asynchronous and retryable. You might get an error in SSMS if the connection fails or a
time-out expires. SQL Managed Instance keeps trying to restore the database in the background, and you can track the
progress of the restore process by using the sys.dm_exec_requests and sys.dm_operation_status views.
In some phases of the restore process, you see a unique identifier instead of the actual database name in the system
views. To learn about RESTORE statement behavior differences, see T-SQL differences between SQL Server & Azure SQL
Managed Instance.

Next steps
For information about troubleshooting a backup to a URL, see SQL Server Backup to URL best practices and
troubleshooting.
For an overview of app connection options, see Connect your applications to SQL Managed Instance.
To query by using your favorite tools or languages, see Quickstarts: Azure SQL Database connect and query.
Tutorial: Security in Azure SQL Managed Instance
using Azure AD server principals (logins)
7/12/2022 • 11 minutes to read

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance provides nearly all security features that the latest SQL Server (Enterprise Edition)
database engine has:
Limit access in an isolated environment
Use authentication mechanisms that require identity: Azure Active Directory (Azure AD) and SQL
Authentication
Use authorization with role-based memberships and permissions
Enable security features
In this tutorial, you learn how to:
Create an Azure AD server principal (login) for a managed instance
Grant permissions to Azure AD server principals (logins) in a managed instance
Create Azure AD users from Azure AD server principals (logins)
Assign permissions to Azure AD users and manage database security
Use impersonation with Azure AD users
Use cross-database queries with Azure AD users
Learn about security features, such as threat protection, auditing, data masking, and encryption
To learn more, see the Azure SQL Managed Instance overview.

Prerequisites
To complete the tutorial, make sure you have the following prerequisites:
SQL Server Management Studio (SSMS)
A managed instance
Follow this article: Quickstart: Create a managed instance
Access to your managed instance, and a provisioned Azure AD administrator for the managed instance.
To learn more, see:
Connect your application to a managed instance
SQL Managed Instance connectivity architecture
Configure and manage Azure Active Directory authentication with SQL

Limit access
Managed instances can be accessed through a private IP address. Much like an isolated SQL Server
environment, applications or users need access to the SQL Managed Instance network (VNet) before a
connection can be established. For more information, see Connect your application to SQL Managed Instance.
It is also possible to configure a service endpoint on a managed instance, which allows for public connections in
the same fashion as for Azure SQL Database. For more information, see Configure public endpoint in Azure SQL
Managed Instance.
NOTE
Even with service endpoints enabled, Azure SQL Database firewall rules do not apply. Azure SQL Managed Instance has its
own built-in firewall to manage connectivity.

Create an Azure AD server principal (login) using SSMS


The first Azure AD server principal (login) can be created by the standard SQL admin account (non-Azure AD)
that is a sysadmin , or the Azure AD admin for the managed instance created during the provisioning process.
For more information, see Provision an Azure Active Directory administrator for SQL Managed Instance.
See the following articles for examples of connecting to SQL Managed Instance:
Quickstart: Configure Azure VM to connect to SQL Managed Instance
Quickstart: Configure a point-to-site connection to SQL Managed Instance from on-premises
1. Log into your managed instance using a standard SQL login account (non-Azure AD) that is a sysadmin
or an Azure AD admin for SQL Managed Instance, using SQL Server Management Studio.
2. In Object Explorer, right-click the server and choose New Query.
3. In the query window, use the following syntax to create a login for a local Azure AD account:

USE master
GO
CREATE LOGIN login_name FROM EXTERNAL PROVIDER
GO

This example creates a login for the account nativeuser@aadsqlmi.onmicrosoft.com.

USE master
GO
CREATE LOGIN [nativeuser@aadsqlmi.onmicrosoft.com] FROM EXTERNAL PROVIDER
GO

4. On the toolbar, select Execute to create the login.


5. Check the newly added login, by executing the following T-SQL command:

SELECT *
FROM sys.server_principals;
GO

For more information, see CREATE LOGIN.

Grant permissions to create logins


To create other Azure AD server principals (logins), SQL Server roles or permissions must be granted to the
principal (SQL or Azure AD).
SQL authentication
If the login is a SQL principal, only logins that are part of the sysadmin role can use the create command to
create logins for an Azure AD account.
Azure AD authentication
To allow the newly created Azure AD server principal (login) the ability to create other logins for other Azure
AD users, groups, or applications, grant the login sysadmin or securityadmin server role.
At a minimum, ALTER ANY LOGIN permission must be granted to the Azure AD server principal (login) to
create other Azure AD server principals (logins).
By default, the standard permission granted to newly created Azure AD server principals (logins) in master is:
CONNECT SQL and VIEW ANY DATABASE .
The sysadmin server role can be granted to many Azure AD server principals (logins) within a managed
instance.
To add the login to the sysadmin server role:
1. Log into the managed instance again, or use the existing connection with the Azure AD admin or SQL
principal that is a sysadmin .
2. In Object Explorer, right-click the server and choose New Query.
3. Grant the Azure AD server principal (login) the sysadmin server role by using the following T-SQL syntax:

ALTER SERVER ROLE sysadmin ADD MEMBER login_name


GO

The following example grants the sysadmin server role to the login
nativeuser@aadsqlmi.onmicrosoft.com

ALTER SERVER ROLE sysadmin ADD MEMBER [nativeuser@aadsqlmi.onmicrosoft.com]


GO

Create additional Azure AD server principals (logins) using SSMS


Once the Azure AD server principal (login) has been created, and provided with sysadmin privileges, that login
can create additional logins using the FROM EXTERNAL PROVIDER clause with CREATE LOGIN .
1. Connect to the managed instance with the Azure AD server principal (login), using SQL Server
Management Studio. Enter your SQL Managed Instance host name. For Authentication in SSMS, there are
three options to choose from when logging in with an Azure AD account:
Active Directory - Universal with MFA support
Active Directory - Password
Active Directory - Integrated
For more information, see Universal Authentication (SSMS support for Multi-Factor
Authentication).
2. Select Active Directory - Universal with MFA support. This brings up a Multi-Factor Authentication
login window. Sign in with your Azure AD password.

3. In SSMS Object Explorer, right-click the server and choose New Query.
4. In the query window, use the following syntax to create a login for another Azure AD account:
USE master
GO
CREATE LOGIN login_name FROM EXTERNAL PROVIDER
GO

This example creates a login for the Azure AD user bob@aadsqlmi.net, whose domain aadsqlmi.net is
federated with the Azure AD aadsqlmi.onmicrosoft.com domain.
Execute the following T-SQL command. Federated Azure AD accounts are the SQL Managed Instance
replacements for on-premises Windows logins and users.

USE master
GO
CREATE LOGIN [bob@aadsqlmi.net] FROM EXTERNAL PROVIDER
GO

5. Create a database in the managed instance using the CREATE DATABASE syntax. This database will be
used to test user logins in the next section.
a. In Object Explorer, right-click the server and choose New Query.
b. In the query window, use the following syntax to create a database named MyMITestDB .

CREATE DATABASE MyMITestDB;


GO

6. Create a SQL Managed Instance login for a group in Azure AD. The group will need to exist in Azure AD
before you can add the login to SQL Managed Instance. See Create a basic group and add members
using Azure Active Directory. Create a group mygroup and add members to this group.
7. Open a new query window in SQL Server Management Studio.
This example assumes there exists a group called mygroup in Azure AD. Execute the following command:

USE master
GO
CREATE LOGIN [mygroup] FROM EXTERNAL PROVIDER
GO

8. As a test, log into the managed instance with the newly created login or group. Open a new connection to
the managed instance, and use the new login when authenticating.
9. In Object Explorer, right-click the server and choose New Query for the new connection.
10. Check server permissions for the newly created Azure AD server principal (login) by executing the
following command:

SELECT * FROM sys.fn_my_permissions (NULL, 'DATABASE')


GO

Guest users are supported as individual users (without being part of an Azure AD group, although they can be) and
their logins can be created in master directly (for example, joe@contoso.com) using the current login syntax.

Create an Azure AD user from the Azure AD server principal (login)


Authorization to individual databases works much in the same way in SQL Managed Instance as it does with
databases in SQL Server. A user can be created from an existing login in a database, and be provided with
permissions on that database, or added to a database role.
Now that we've created a database called MyMITestDB , and a login that only has default permissions, the next
step is to create a user from that login. At the moment, the login can connect to the managed instance, and see
all the databases, but can't interact with the databases. If you sign in with the Azure AD account that has the
default permissions, and try to expand the newly created database, you'll see the following error:

For more information on granting database permissions, see Getting Started with Database Engine Permissions.
Create an Azure AD user and create a sample table
1. Log into your managed instance using a sysadmin account using SQL Server Management Studio.
2. In Object Explorer, right-click the server and choose New Query.
3. In the query window, use the following syntax to create an Azure AD user from an Azure AD server
principal (login):

USE <Database Name> -- provide your database name


GO
CREATE USER user_name FROM LOGIN login_name
GO

The following example creates a user bob@aadsqlmi.net from the login bob@aadsqlmi.net:

USE MyMITestDB
GO
CREATE USER [bob@aadsqlmi.net] FROM LOGIN [bob@aadsqlmi.net]
GO

4. It's also supported to create an Azure AD user from an Azure AD server principal (login) that is a group.
The following example creates a user from the login for the Azure AD group mygroup, which exists in your
Azure AD instance.

USE MyMITestDB
GO
CREATE USER [mygroup] FROM LOGIN [mygroup]
GO

All users that belong to mygroup can access the MyMITestDB database.

IMPORTANT
When creating a USER from an Azure AD server principal (login), specify the user_name as the same login_name
from LOGIN.

For more information, see CREATE USER.


5. In a new query window, create a test table using the following T-SQL command:

USE MyMITestDB
GO
CREATE TABLE TestTable
(
AccountNum varchar(10),
City varchar(255),
Name varchar(255),
State varchar(2)
);

6. Create a connection in SSMS with the user that was created. You'll notice that you cannot see the table
TestTable that was created by the sysadmin earlier. We need to provide the user with permissions to
read data from the database.
7. You can check the current permission the user has by executing the following command:

SELECT * FROM sys.fn_my_permissions('MyMITestDB','DATABASE')


GO

Add users to database-level roles


For the user to see data in the database, we can provide database-level roles to the user.
1. Log into your managed instance using a sysadmin account using SQL Server Management Studio.
2. In Object Explorer, right-click the server and choose New Query.
3. Grant the Azure AD user the db_datareader database role by using the following T-SQL syntax:

Use <Database Name> -- provide your database name


ALTER ROLE db_datareader ADD MEMBER user_name
GO

The following example provides the user bob@aadsqlmi.net and the group mygroup with db_datareader
permissions on the MyMITestDB database:

USE MyMITestDB
GO
ALTER ROLE db_datareader ADD MEMBER [bob@aadsqlmi.net]
GO
ALTER ROLE db_datareader ADD MEMBER [mygroup]
GO

4. Check that the Azure AD user that was created exists in the database by executing the following command:

SELECT * FROM sys.database_principals


GO

5. Create a new connection to the managed instance with the user that has been added to the
db_datareader role.

6. Expand the database in Object Explorer to see the table.


7. Open a new query window and execute the following SELECT statement:

SELECT *
FROM TestTable

Are you able to see data from the table? You should see the columns being returned.

Impersonate Azure AD server-level principals (logins)


SQL Managed Instance supports the impersonation of Azure AD server-level principals (logins).
Test impersonation
1. Log into your managed instance using a sysadmin account using SQL Server Management Studio.
2. In Object Explorer, right-click the server and choose New Query.
3. In the query window, use the following command to create a new stored procedure:

USE MyMITestDB
GO
CREATE PROCEDURE dbo.usp_Demo
WITH EXECUTE AS 'bob@aadsqlmi.net'
AS
SELECT user_name();
GO

4. Use the following command to see that the user you're impersonating when executing the stored
procedure is bob@aadsqlmi.net .

Exec dbo.usp_Demo

5. Test impersonation by using the EXECUTE AS LOGIN statement:

EXECUTE AS LOGIN = 'bob@aadsqlmi.net'


GO
SELECT SUSER_SNAME()
REVERT
GO
NOTE
Only the SQL server-level principals (logins) that are part of the sysadmin role can execute the following operations
targeting Azure AD principals:
EXECUTE AS USER
EXECUTE AS LOGIN

Use cross-database queries


Cross-database queries are supported for Azure AD accounts with Azure AD server principals (logins). To test a
cross-database query with an Azure AD group, we need to create another database and table. You can skip
creating another database and table if one already exists.
1. Log into your managed instance using a sysadmin account using SQL Server Management Studio.
2. In Object Explorer, right-click the server and choose New Query.
3. In the query window, use the following command to create a database named MyMITestDB2 and table
named TestTable2 :

CREATE DATABASE MyMITestDB2;


GO
USE MyMITestDB2
GO
CREATE TABLE TestTable2
(
EmpId varchar(10),
FirstName varchar(255),
LastName varchar(255),
Status varchar(10)
);

4. In a new query window, execute the following command to create the user mygroup in the new database
MyMITestDB2 , and grant SELECT permissions on that database to mygroup:

USE MyMITestDB2
GO
CREATE USER [mygroup] FROM LOGIN [mygroup]
GO
GRANT SELECT TO [mygroup]
GO

5. Sign into the managed instance using SQL Server Management Studio as a member of the Azure AD
group mygroup. Open a new query window and execute the cross-database SELECT statement:

USE MyMITestDB
SELECT * FROM MyMITestDB2..TestTable2
GO

You should see the table results from TestTable2 .

Additional supported scenarios


SQL Agent management and job executions are supported for Azure AD server principals (logins).
Database backup and restore operations can be executed by Azure AD server principals (logins).
Auditing of all statements related to Azure AD server principals (logins) and authentication events.
Dedicated administrator connection for Azure AD server principals (logins) that are members of the
sysadmin server-role.
Azure AD server principals (logins) are supported with using the sqlcmd utility and SQL Server Management
Studio tool.
Logon triggers are supported for logon events coming from Azure AD server principals (logins).
Service Broker and Database Mail can be set up using Azure AD server principals (logins).

Next steps
Enable security features
See the SQL Managed Instance security features article for a comprehensive list of ways to secure your
database. The following security features are discussed:
SQL Managed Instance auditing
Always Encrypted
Threat detection
Dynamic data masking
Row-level security
Transparent data encryption (TDE)
SQL Managed Instance capabilities
For a complete overview of SQL Managed Instance capabilities, see:
SQL Managed Instance capabilities
Tutorial: Add SQL Managed Instance to a failover
group
7/12/2022 • 34 minutes to read

APPLIES TO: Azure SQL Managed Instance


Add managed instances of Azure SQL Managed Instance to an auto-failover group.
In this tutorial, you will learn how to:
Create a primary managed instance.
Create a secondary managed instance as part of a failover group.
Test failover.
There are multiple ways to establish connectivity between managed instances in different Azure regions,
including:
Global virtual network peering - the most performant and recommended way
Azure ExpressRoute
VPN gateways
This tutorial provides steps for global virtual network peering. If you prefer to use ExpressRoute or VPN
gateways, replace the peering steps accordingly, or skip ahead to Step 7 if you already have ExpressRoute or
VPN gateways configured.

IMPORTANT
When going through this tutorial, ensure you are configuring your resources with the prerequisites for setting up
failover groups for SQL Managed Instance.
Creating a managed instance can take a significant amount of time. As a result, this tutorial may take several hours to
complete. For more information on provisioning times, see SQL Managed Instance management operations.

Prerequisites
Portal
PowerShell

To complete this tutorial, make sure you have:


An Azure subscription. Create a free account if you don't already have one.

Create a resource group and primary managed instance


In this step, you will create the resource group and the primary managed instance for your failover group using
the Azure portal or PowerShell.
Deploy both managed instances to paired regions for data replication performance reasons. Managed instances
residing in geo-paired regions have much better data replication performance compared to instances residing in
unpaired regions.
Portal
PowerShell

Create the resource group and your primary managed instance using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, and then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to
favorite it and add it as an item in the left-hand navigation.
2. Select + Add to open the Select SQL deployment option page. You can view additional information
about the different databases by selecting Show details on the Databases tile.
3. Select Create on the SQL Managed Instances tile.

4. On the Create Azure SQL Managed Instance page, on the Basics tab:
a. Under Project Details , select your Subscription from the drop-down and then choose to Create
New resource group. Type in a name for your resource group, such as myResourceGroup .
b. Under SQL Managed Instance Details , provide the name of your managed instance, and the region
where you would like to deploy your managed instance. Leave Compute + storage at default values.
c. Under Administrator Account , provide an admin login, such as azureuser , and a complex admin
password.
5. Leave the rest of the settings at default values, and select Review + create to review your SQL Managed
Instance settings.
6. Select Create to create your primary managed instance.

Create secondary virtual network


If you're using the Azure portal to create your secondary managed instance, you will need to create the virtual
network before creating the instance to make sure that the subnets of the primary and secondary managed
instance do not have overlapping IP address ranges. If you're using PowerShell to configure your managed
instance, skip ahead to step 3.
Portal
PowerShell

To verify the subnet range of your primary virtual network, follow these steps:
1. In the Azure portal, navigate to your resource group and select the virtual network for your primary
instance.
2. Select Subnets under Settings and note the Address range of the subnet created automatically during
creation of your primary instance. The subnet IP address range of the virtual network for the secondary
managed instance must not overlap with the IP address range of the subnet hosting primary instance.
To create a virtual network, follow these steps:
1. In the Azure portal, select Create a resource and search for virtual network.
2. Select the Vir tual Network option and then select Create on the next page.
3. Fill out the required fields to configure the virtual network for your secondary managed instance, and
then select Create .
The following table shows the required fields and corresponding values for the secondary virtual
network:

Field | Value
Name | The name for the virtual network to be used by the secondary managed instance, such as vnet-sql-mi-secondary.
Address space | The address space for your virtual network, such as 10.128.0.0/16.
Subscription | The subscription where your primary managed instance and resource group reside.
Region | The location where you will deploy your secondary managed instance.
Subnet | The name for your subnet. default is offered as a default name.
Address range | The IP address range for your subnet, such as 10.128.0.0/24. This must not overlap with the IP address range used by the virtual network subnet of your primary managed instance.
Create a secondary managed instance
In this step you will create a secondary managed instance, which will also configure the networking between the
two managed instances.
Your second managed instance must be:
Empty, i.e. with no user databases on it.
Hosted in a virtual network subnet that has no IP address range overlap with the virtual network subnet
hosting the primary managed instance.

Portal
PowerShell

1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, and then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to
add it as a favorite item in the left-hand navigation.
2. Select + Add to open the Select SQL deployment option page. You can view additional information
about the different databases by selecting Show details on the Databases tile.
3. Select Create on the SQL managed instances tile.
4. On the Basics tab of the Create Azure SQL Managed Instance page, fill out the required fields to
configure your secondary managed instance.
The following table shows the values necessary for the secondary managed instance:

Field | Value
Subscription | The Azure subscription to create the instance in. When using the Azure portal, it must be the same subscription as for the primary instance.
Resource group | The resource group to create the secondary managed instance in.
SQL Managed Instance name | The name of your new secondary managed instance, such as sql-mi-secondary.
Region | The Azure region for your secondary managed instance.
SQL Managed Instance admin login | The login you want to use for your new secondary managed instance, such as azureuser.
Password | A complex password that will be used by the admin login for the new secondary managed instance.

5. Under the Networking tab, for the Virtual Network, select from the drop-down list the virtual network
you previously created for the secondary managed instance.

6. Under the Additional settings tab, for Geo-Replication , choose Yes to Use as failover secondary.
Select the primary managed instance from the drop-down.
Be sure that the collation and time zone match that of the primary managed instance. The primary
managed instance created in this tutorial used the default of SQL_Latin1_General_CP1_CI_AS collation and
the (UTC) Coordinated Universal Time time zone.
7. Select Review + create to review the settings for your secondary managed instance.
8. Select Create to create your secondary managed instance.

Create a global virtual network peering


NOTE
The steps listed below will create peering links between the virtual networks in both directions.

Portal
PowerShell

1. In the Azure portal, go to the Virtual network resource for your primary managed instance.
2. Select Peerings under Settings and then select + Add.

1. Enter or select values for the following settings:

Settings | Description

This virtual network
Peering link name | The name for the peering must be unique within the virtual network.
Traffic to remote virtual network | Select Allow (default) to enable communication between the two virtual networks through the default VirtualNetwork flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other with the same bandwidth and latency as if they were connected to the same virtual network. All communication between resources in the two virtual networks is over the Azure private network.
Traffic forwarded from remote virtual network | Both the Allowed (default) and Block options will work for this tutorial. For more information, see Create a peering.
Virtual network gateway or Route Server | Select None. For more information about the other options available, see Create a peering.

Remote virtual network
Peering link name | The name of the same peering to be used in the virtual network hosting the secondary instance.
Virtual network deployment model | Select Resource manager.
I know my resource ID | Leave this checkbox unchecked.
Subscription | Select the Azure subscription of the virtual network hosting the secondary instance that you want to peer with.
Virtual network | Select the virtual network hosting the secondary instance that you want to peer with. If the virtual network is listed but grayed out, it may be because the address space for the virtual network overlaps with the address space for this virtual network. If virtual network address spaces overlap, they cannot be peered.
Traffic to remote virtual network | Select Allow (default).
Traffic forwarded from remote virtual network | Both the Allowed (default) and Block options will work for this tutorial. For more information, see Create a peering.
Virtual network gateway or Route Server | Select None. For more information about the other options available, see Create a peering.

2. Click Add to configure the peering with the virtual network you selected. After a few seconds, select the
Refresh button and the peering status will change from Updating to Connected.
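
If you prefer PowerShell over the portal, a minimal sketch of the same two-way peering (the virtual network and resource group names in braces are placeholders) could look like this:

# Load both virtual networks (placeholder names)
$vnetPrimary = Get-AzVirtualNetwork -Name {primary-vnet-name} -ResourceGroupName {primary-rg-name}
$vnetSecondary = Get-AzVirtualNetwork -Name {secondary-vnet-name} -ResourceGroupName {secondary-rg-name}

# Create the peering links in both directions
Add-AzVirtualNetworkPeering -Name "primary-to-secondary" -VirtualNetwork $vnetPrimary -RemoteVirtualNetworkId $vnetSecondary.Id
Add-AzVirtualNetworkPeering -Name "secondary-to-primary" -VirtualNetwork $vnetSecondary -RemoteVirtualNetworkId $vnetPrimary.Id
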
Create a failover group
In this step, you will create the failover group and add both managed instances to it.

Portal
PowerShell

Create the failover group using the Azure portal.


1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, and then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to
add it as a favorite item in the left-hand navigation.
2. Select the primary managed instance you created in the first section, such as sql-mi-primary .
3. Under Data management , navigate to Failover groups and then choose Add group to open the
Instance Failover Group page.
4. On the Instance Failover Group page, type the name of your failover group, such as
failovergrouptutorial . Then choose the secondary managed instance, such as sql-mi-secondary , from
the drop-down. Select Create to create your failover group.

5. Once failover group deployment is complete, you will be taken back to the Failover group page.
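
The PowerShell steps aren't reproduced here, but as a hedged sketch (regions, resource groups, and instance names are placeholders), the same failover group can be created with the Az.Sql module:

# Create an instance failover group between the primary and secondary managed instances (placeholder values)
New-AzSqlDatabaseInstanceFailoverGroup -Name "failovergrouptutorial" `
    -ResourceGroupName {primary-rg-name} -Location {primary-region} `
    -PrimaryManagedInstanceName {sql-mi-primary} `
    -PartnerResourceGroupName {secondary-rg-name} -PartnerRegion {secondary-region} `
    -PartnerManagedInstanceName {sql-mi-secondary}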

Test failover
In this step, you will fail your failover group over to the secondary server, and then fail back using the Azure
portal.

Portal
PowerShell

Test failover using the Azure portal.


1. Navigate to your secondary managed instance within the Azure portal and select Instance Failover
Groups under settings.
2. Note which managed instance is in the primary role and which is in the secondary role.
3. Select Failover and then select Yes on the warning about TDS sessions being disconnected.
4. Review which managed instance is now the primary and which managed instance is the secondary. If failover
succeeded, the two instances should have switched roles.

5. Go to the new secondary managed instance and select Failover once again to fail the primary instance
back to the primary role.
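Failover can also be initiated from PowerShell. The following is a minimal sketch under the same assumed
resource group, regions, and failover group name as above; the switch command is issued against the region
that currently hosts the secondary instance, which becomes the new primary.

# Minimal sketch (assumed names and regions): fail over to the secondary region, then review the group.
Switch-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "myResourceGroup" -Location "westus" -Name "failovergrouptutorial"

# The roles of the two instances should now be switched
Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "myResourceGroup" -Location "westus" -Name "failovergrouptutorial"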

Clean up resources
Clean up resources by first deleting the managed instances, then the virtual cluster, then any remaining
resources, and finally the resource group. The failover group is automatically deleted when you delete either of
the two instances.
Portal
PowerShell

1. Navigate to your resource group in the Azure portal.


2. Select the managed instance(s) and then select Delete . Type yes in the text box to confirm you want to
delete the resource and then select Delete . This process may take some time to complete in the background,
and until it's done, you will not be able to delete the virtual cluster or any other dependent resources.
Monitor the deletion in the Activity tab to confirm your managed instance has been deleted.
3. Once the managed instance is deleted, delete the virtual cluster by selecting it in your resource group, and
then choosing Delete . Type yes in the text box to confirm you want to delete the resource and then select
Delete .
4. Delete any remaining resources. Type yes in the text box to confirm you want to delete the resource and
then select Delete .
5. Delete the resource group by selecting Delete resource group , typing in the name of the resource group,
myResourceGroup , and then selecting Delete .
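If you prefer to script the cleanup, the following is a minimal PowerShell sketch under the same naming
assumptions as above. Deleting the resource group removes any remaining dependent resources it contains.

# Minimal sketch (assumed names): delete both managed instances, then the resource group.
Remove-AzSqlInstance -ResourceGroupName "myResourceGroup" -Name "sql-mi-primary" -Force
Remove-AzSqlInstance -ResourceGroupName "myResourceGroup" -Name "sql-mi-secondary" -Force
Remove-AzResourceGroup -Name "myResourceGroup" -Force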

Full script
PowerShell
Portal

There are no scripts available for the Azure portal.

Next steps
In this tutorial, you configured a failover group between two managed instances. You learned how to:
Create a primary managed instance.
Create a secondary managed instance as part of a failover group.
Test failover.
Advance to the next quickstart on how to connect to SQL Managed Instance, and how to restore a database to
SQL Managed Instance:
Connect to SQL Managed Instance
Restore a database to SQL Managed Instance
Tutorial: Migrate Windows users and groups in a
SQL Server instance to Azure SQL Managed
Instance using T-SQL DDL syntax
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article takes you through the process of migrating your on-premises Windows users and groups in your
SQL Server to Azure SQL Managed Instance using T-SQL syntax.
In this tutorial, you learn how to:
Create logins for SQL Server
Create a test database for migration
Create logins, users, and roles
Back up and restore your database to SQL Managed Instance (MI)
Manually migrate users to MI using ALTER USER syntax
Test authentication with the newly mapped users

Prerequisites
To complete this tutorial, the following prerequisites apply:
The Windows domain is federated with Azure Active Directory (Azure AD).
Access to Active Directory to create users/groups.
An existing SQL Server in your on-premises environment.
An existing SQL Managed Instance. See Quickstart: Create a SQL Managed Instance.
A login with sysadmin privileges in the SQL Managed Instance, which is required to create Azure AD logins.
Create an Azure AD admin for SQL Managed Instance.
You can connect to your SQL Managed Instance within your network. See the following articles for additional
information:
Connect your application to Azure SQL Managed Instance
Quickstart: Configure a point-to-site connection to an Azure SQL Managed Instance from on-premises
Configure public endpoint in Azure SQL Managed Instance

T-SQL DDL syntax


The following T-SQL DDL syntax is used to support the migration of Windows users and groups from a SQL
Server instance to SQL Managed Instance with Azure AD authentication.

-- For individual Windows users with logins


ALTER USER [domainName\userName] WITH LOGIN = [loginName@domainName.com];

--For individual groups with logins


ALTER USER [domainName\groupName] WITH LOGIN = [groupName];

Arguments
domainName
Specifies the domain name of the user.
userName
Specifies the name of the user identified inside the database.
= loginName@domainName.com
Remaps a user to the Azure AD login
groupName
Specifies the name of the group identified inside the database.

Part 1: Create logins in SQL Server for Windows users and groups
IMPORTANT
The following syntax creates a user and a group login in your SQL Server. You'll need to make sure that the user and
group exist inside your Active Directory (AD) before executing the below syntax.

Users: testUser1, testGroupUser


Group: migration - testGroupUser needs to belong to the migration group in AD

The example below creates a login in SQL Server for an account named testUser1 under the domain aadsqlmi.

-- Sign into SQL Server as a sysadmin or a user that can create logins and databases

use master;
go

-- Create Windows login
create login [aadsqlmi\testUser1] from windows;
go

/** Create a Windows group login which contains one user [aadsqlmi\testGroupUser].
testGroupUser will need to be added to the migration group in Active Directory
**/
create login [aadsqlmi\migration] from windows;
go

-- Check logins were created
select * from sys.server_principals;
go

Create a database for this test.

-- Create a database called [migration]


create database migration
go

Part 2: Create Windows users and groups, then add roles and
permissions
Use the following syntax to create the test user.
use migration;
go

-- Create Windows user [aadsqlmi\testUser1] with login


create user [aadsqlmi\testUser1] from login [aadsqlmi\testUser1];
go

Check the user permissions:

-- Check the user in the Metadata


select * from sys.database_principals;
go

-- Display the permissions – should only have CONNECT permissions


select user_name(grantee_principal_id), * from sys.database_permissions;
go

Create a role and assign your test user to this role:

-- Create a role with some permissions and assign the user to the role
create role UserMigrationRole;
go

grant CONNECT, SELECT, VIEW DATABASE STATE, VIEW DEFINITION to UserMigrationRole;
go

alter role UserMigrationRole add member [aadsqlmi\testUser1];
go

Use the following query to display user names assigned to a specific role:

-- Display user name assigned to a specific role


SELECT DP1.name AS DatabaseRoleName,
isnull (DP2.name, 'No members') AS DatabaseUserName
FROM sys.database_role_members AS DRM
RIGHT OUTER JOIN sys.database_principals AS DP1
ON DRM.role_principal_id = DP1.principal_id
LEFT OUTER JOIN sys.database_principals AS DP2
ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.type = 'R'
ORDER BY DP1.name;

Use the following syntax to create a group. Then add the group to the role db_owner .

-- Create Windows group


create user [aadsqlmi\migration] from login [aadsqlmi\migration];
go

-- Add this group to the 'db_owner' role
EXEC sp_addrolemember 'db_owner', 'aadsqlmi\migration';
go

--Check the db_owner role for 'aadsqlmi\migration' group


select is_rolemember('db_owner', 'aadsqlmi\migration')
go
-- Output ( 1 means YES)

Create a test table and add some data using the following syntax:
-- Create a table and add data
create table test ( a int, b int);
go

insert into test values (1,10)


go

-- Check the table values


select * from test;
go

Part 3: Back up and restore the individual user database to SQL Managed Instance
Create a backup of the migration database using the article Copy Databases with Backup and Restore, or use the
following syntax:

use master;
go
backup database migration to disk = 'C:\Migration\migration.bak';
go

Follow our Quickstart: Restore a database to a SQL Managed Instance.

Part 4: Migrate users to SQL Managed Instance


Execute the ALTER USER command to complete the migration process on SQL Managed Instance.
1. Sign into your SQL Managed Instance using the Azure AD admin account for SQL Managed Instance.
Then create your Azure AD login in the SQL Managed Instance using the following syntax. For more
information, see Tutorial: SQL Managed Instance security in Azure SQL Database using Azure AD server
principals (logins).

use master
go

-- Create login for AAD user [testUser1@aadsqlmi.net]


create login [testUser1@aadsqlmi.net] from external provider
go

-- Create login for the Azure AD group [migration]. This group contains one user
-- [testGroupUser@aadsqlmi.net]
create login [migration] from external provider
go

--Check the two new logins


select * from sys.server_principals
go

2. Check your migration for the correct database, table, and principals.
-- Switch to the database migration that is already restored for MI
use migration;
go

-- Check that the restored table test exists and contains a row


select * from test;
go

-- Check that the SQL on-premises Windows user/group exists


select * from sys.database_principals;
go
-- the old user aadsqlmi\testUser1 should be there
-- the old group aadsqlmi\migration should be there

3. Use the ALTER USER syntax to map the on-premises user to the Azure AD login.

/** Execute the ALTER USER command to alter the Windows user [aadsqlmi\testUser1]
to map to the Azure AD user testUser1@aadsqlmi.net
**/
alter user [aadsqlmi\testUser1] with login = [testUser1@aadsqlmi.net];
go

-- Check the principal


select * from sys.database_principals;
go
-- New user testUser1@aadsqlmi.net should be there instead
--Check new user permissions - should only have CONNECT permissions
select user_name(grantee_principal_id), * from sys.database_permissions;
go

-- Check a specific role


-- Display Db user name assigned to a specific role
SELECT DP1.name AS DatabaseRoleName,
isnull (DP2.name, 'No members') AS DatabaseUserName
FROM sys.database_role_members AS DRM
RIGHT OUTER JOIN sys.database_principals AS DP1
ON DRM.role_principal_id = DP1.principal_id
LEFT OUTER JOIN sys.database_principals AS DP2
ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.type = 'R'
ORDER BY DP1.name;

4. Use the ALTER USER syntax to map the on-premises group to the Azure AD login.

/** Execute ALTER USER command to alter the Windows group [aadsqlmi\migration]
to the Azure AD group login [migration]
**/
alter user [aadsqlmi\migration] with login = [migration];
-- old group migration is changed to Azure AD migration group
go

-- Check the principal


select * from sys.database_principals;
go

--Check the group permission - should only have CONNECT permissions


select user_name(grantee_principal_id), * from sys.database_permissions;
go

--Check the db_owner role for 'aadsqlmi\migration' user


select is_rolemember('db_owner', 'migration')
go
-- Output 1 means 'YES'

Part 5: Testing Azure AD user or group authentication
Test authentication to SQL Managed Instance using the user that was previously mapped to the Azure AD login
with the ALTER USER syntax.
1. Log into the federated VM using your Azure SQL Managed Instance subscription as aadsqlmi\testUser1

2. Using SQL Server Management Studio (SSMS), sign into your SQL Managed Instance using Active
Directory Integrated authentication, connecting to the database migration .
a. You can also sign in using the testUser1@aadsqlmi.net credentials with the SSMS option Active
Directory – Universal with MFA support . However, in this case, you can't use the Single Sign On
mechanism and you must type a password. You won't need to use a federated VM to log in to your
SQL Managed Instance.
3. As a member of the role that was granted SELECT, you can select from the test table:

Select * from test -- and see one row (1,10)

Test authenticating to a SQL Managed Instance using a member of a Windows group migration . The user
aadsqlmi\testGroupUser should have been added to the group migration before the migration.

1. Log into the federated VM using your Azure SQL Managed Instance subscription as
aadsqlmi\testGroupUser

2. Using SSMS with Active Directory Integrated authentication, connect to the Azure SQL Managed
Instance server and the database migration
a. You can also sign in using the testGroupUser@aadsqlmi.net credentials with the SSMS option Active
Directory – Universal with MFA support . However, in this case, you can't use the Single Sign On
mechanism and you must type a password. You won't need to use a federated VM to log into your
SQL Managed Instance.
3. As part of the db_owner role, you can create a new table.

-- Create table named 'new' with a default schema


Create table dbo.new ( a int, b int)

NOTE
Due to a known design issue for Azure SQL Database, a CREATE TABLE statement executed as a member of a group will fail
with the following error:

Msg 2760, Level 16, State 1, Line 4 The specified schema name "testGroupUser@aadsqlmi.net" either does
not exist or you do not have permission to use it.

The current workaround is to create the table with an existing schema, as in the case above (dbo.new).

Next steps
Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using DMS
Tutorial: Configure replication between two
managed instances
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Managed Instance


Transactional replication allows you to replicate data from one database to another hosted on either SQL Server
or Azure SQL Managed Instance. SQL Managed Instance can be a publisher, distributor or subscriber in the
replication topology. See transactional replication configurations for available configurations.
Transactional replication is currently in public preview for SQL Managed Instance.
In this tutorial, you learn how to:
Configure a managed instance as a replication publisher and distributor.
Configure a managed instance as a replication subscriber.

This tutorial is intended for an experienced audience and assumes that the user is familiar with deploying and
connecting to both managed instances and SQL Server VMs within Azure.

NOTE
This article describes the use of transactional replication in Azure SQL Managed Instance. It is unrelated to failover
groups, an Azure SQL Managed Instance feature that allows you to create complete readable replicas of individual
instances. There are additional considerations when configuring transactional replication with failover groups.

Requirements
Configuring SQL Managed Instance to function as a publisher and/or a distributor requires:
That the publisher managed instance is on the same virtual network as the distributor and the subscriber, or
VPN gateways have been configured between the virtual networks of all three entities.
Connectivity uses SQL Authentication between replication participants.
An Azure storage account share for the replication working directory.
Port 445 (TCP outbound) is open in the security rules of the NSG for the managed instances to access the Azure
file share. If you encounter the error
failed to connect to azure storage <storage account name> with os error 53 , you will need to add an
outbound rule to the NSG of the appropriate SQL Managed Instance subnet, as sketched below.
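The following is a minimal PowerShell sketch of adding such an outbound rule. The NSG name and the rule
priority are placeholders (assumptions); use the NSG that is attached to your SQL Managed Instance subnet.

# Minimal sketch (placeholder NSG name and priority): allow outbound TCP 445 to Azure Storage.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "SQLMI-Repl" -Name "<mi-subnet-nsg-name>"

Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name "allow_tcp_445_out_to_storage" `
    -Direction Outbound -Access Allow -Protocol Tcp -Priority 200 `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "Storage" -DestinationPortRange 445

Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg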

1 - Create a resource group


Use the Azure portal to create a resource group with the name SQLMI-Repl .

2 - Create managed instances


Use the Azure portal to create two SQL Managed Instances on the same virtual network and subnet. For
example, name the two managed instances:
sql-mi-pub (along with some characters for randomization)
sql-mi-sub (along with some characters for randomization)

You will also need to configure an Azure VM to connect to your managed instances.

3 - Create an Azure storage account


Create an Azure storage account for the working directory, and then create a file share within the storage
account.
Copy the file share path in the format of: \\storage-account-name.file.core.windows.net\file-share-name

Example: \\replstorage.file.core.windows.net\replshare

Copy the storage access key connection string in the format of:


DefaultEndpointsProtocol=https;AccountName=<Storage-Account-
Name>;AccountKey=****;EndpointSuffix=core.windows.net

Example:
DefaultEndpointsProtocol=https;AccountName=replstorage;AccountKey=dYT5hHZVu9aTgIteGfpYE64cfis0mpKTmmc8+EP53GxuRg6TCwe5eTYWrQM4AmQSG5lb3OBskhg==;EndpointSuffix=core.windows.net

For more information, see Manage storage account access keys.
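If you prefer to create the storage account and file share with PowerShell, the following is a minimal sketch.
The account and share names follow the examples above (storage account names must be globally unique), and
the resource group and region are assumptions based on this tutorial; substitute your own values.

# Minimal sketch (assumed resource group and region): create the storage account, file share, and get a key.
$storageAccount = New-AzStorageAccount -ResourceGroupName "SQLMI-Repl" `
    -Name "replstorage" -Location "East US 2" -SkuName Standard_LRS

New-AzStorageShare -Name "replshare" -Context $storageAccount.Context

# Retrieve the account key used to build the storage connection string
(Get-AzStorageAccountKey -ResourceGroupName "SQLMI-Repl" -Name "replstorage")[0].Value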

4 - Create a publisher database


Connect to your sql-mi-pub managed instance using SQL Server Management Studio and run the following
Transact-SQL (T-SQL) code to create your publisher database:

USE [master]
GO

CREATE DATABASE [ReplTran_PUB]


GO

USE [ReplTran_PUB]
GO
CREATE TABLE ReplTest (
ID INT NOT NULL PRIMARY KEY,
c1 VARCHAR(100) NOT NULL,
dt1 DATETIME NOT NULL DEFAULT getdate()
)
GO

USE [ReplTran_PUB]
GO

INSERT INTO ReplTest (ID, c1) VALUES (6, 'pub')


INSERT INTO ReplTest (ID, c1) VALUES (2, 'pub')
INSERT INTO ReplTest (ID, c1) VALUES (3, 'pub')
INSERT INTO ReplTest (ID, c1) VALUES (4, 'pub')
INSERT INTO ReplTest (ID, c1) VALUES (5, 'pub')
GO
SELECT * FROM ReplTest
GO

5 - Create a subscriber database


Connect to your sql-mi-sub managed instance using SQL Server Management Studio and run the following T-
SQL code to create your empty subscriber database:

USE [master]
GO

CREATE DATABASE [ReplTran_SUB]


GO

USE [ReplTran_SUB]
GO
CREATE TABLE ReplTest (
ID INT NOT NULL PRIMARY KEY,
c1 VARCHAR(100) NOT NULL,
dt1 DATETIME NOT NULL DEFAULT getdate()
)
GO

6 - Configure distribution
Connect to your sql-mi-pub managed instance using SQL Server Management Studio and run the following T-
SQL code to configure your distribution database.

USE [master]
GO

EXEC sp_adddistributor @distributor = @@ServerName;


EXEC sp_adddistributiondb @database = N'distribution';
GO

7 - Configure publisher to use distributor


On your publisher SQL Managed Instance sql-mi-pub , change the query execution to SQLCMD mode and run
the following code to register the new distributor with your publisher.
:setvar username loginUsedToAccessSourceManagedInstance
:setvar password passwordUsedToAccessSourceManagedInstance
:setvar file_storage "\\storage-account-name.file.core.windows.net\file-share-name"
-- example: file_storage "\\replstorage.file.core.windows.net\replshare"
:setvar file_storage_key "DefaultEndpointsProtocol=https;AccountName=<Storage-Account-Name>;AccountKey=****;EndpointSuffix=core.windows.net"
-- example: file_storage_key "DefaultEndpointsProtocol=https;AccountName=replstorage;AccountKey=dYT5hHZVu9aTgIteGfpYE64cfis0mpKTmmc8+EP53GxuRg6TCwe5eTYWrQM4AmQSG5lb3OBskhg==;EndpointSuffix=core.windows.net"

USE [master]
EXEC sp_adddistpublisher
@publisher = @@ServerName,
@distribution_db = N'distribution',
@security_mode = 0,
@login = N'$(username)',
@password = N'$(password)',
@working_directory = N'$(file_storage)',
@storage_connection_string = N'$(file_storage_key)'; -- Remove this parameter for on-premises publishers

NOTE
Be sure to use only backslashes ( \ ) for the file_storage parameter. Using a forward slash ( / ) can cause an error when
connecting to the file share.

This script configures a local publisher on the managed instance, adds a linked server, and creates a set of jobs
for the SQL Server agent.

8 - Create publication and subscriber


Using SQLCMD mode, run the following T-SQL script to enable replication for your database, and configure
replication between your publisher, distributor, and subscriber.
-- Set variables
:setvar username sourceLogin
:setvar password sourcePassword
:setvar source_db ReplTran_PUB
:setvar publication_name PublishData
:setvar object ReplTest
:setvar schema dbo
:setvar target_server "sql-mi-sub.wdec33262scj9dr27.database.windows.net"
:setvar target_username targetLogin
:setvar target_password targetPassword
:setvar target_db ReplTran_SUB

-- Enable replication for your source database


USE [$(source_db)]
EXEC sp_replicationdboption
@dbname = N'$(source_db)',
@optname = N'publish',
@value = N'true';

-- Create your publication


EXEC sp_addpublication
@publication = N'$(publication_name)',
@status = N'active';

-- Configure your log reader agent


EXEC sp_changelogreader_agent
@publisher_security_mode = 0,
@publisher_login = N'$(username)',
@publisher_password = N'$(password)',
@job_login = N'$(username)',
@job_password = N'$(password)';

-- Add the publication snapshot


EXEC sp_addpublication_snapshot
@publication = N'$(publication_name)',
@frequency_type = 1,
@publisher_security_mode = 0,
@publisher_login = N'$(username)',
@publisher_password = N'$(password)',
@job_login = N'$(username)',
@job_password = N'$(password)';

-- Add the ReplTest table to the publication


EXEC sp_addarticle
@publication = N'$(publication_name)',
@type = N'logbased',
@article = N'$(object)',
@source_object = N'$(object)',
@source_owner = N'$(schema)';

-- Add the subscriber


EXEC sp_addsubscription
@publication = N'$(publication_name)',
@subscriber = N'$(target_server)',
@destination_db = N'$(target_db)',
@subscription_type = N'Push';

-- Create the push subscription agent


EXEC sp_addpushsubscription_agent
@publication = N'$(publication_name)',
@subscriber = N'$(target_server)',
@subscriber_db = N'$(target_db)',
@subscriber_security_mode = 0,
@subscriber_login = N'$(target_username)',
@subscriber_password = N'$(target_password)',
@job_login = N'$(username)',
@job_password = N'$(password)';

-- Initialize the snapshot


EXEC sp_startpublication_snapshot
@publication = N'$(publication_name)';

9 - Modify agent parameters


Azure SQL Managed Instance is currently experiencing some backend issues with connectivity with the
replication agents. While this issue is being addressed, the workaround is to increase the login timeout value for
the replication agents.
Run the following T-SQL command on the publisher to increase the login timeout:

-- Increase login timeout to 150s


update msdb..sysjobsteps set command = command + N' -LoginTimeout 150'
where subsystem in ('Distribution','LogReader','Snapshot') and command not like '%-LoginTimeout %'

Should you need to do so, run the following T-SQL command to set the login timeout back to the default value:

-- Set the login timeout back to the default of 30s

update msdb..sysjobsteps set command = replace(command, N' -LoginTimeout 150', N' -LoginTimeout 30')
where subsystem in ('Distribution','LogReader','Snapshot') and command like '%-LoginTimeout 150%'
Restart all three agents to apply these changes.

10 - Test replication
Once replication has been configured, you can test it by inserting new items on the publisher and watching the
changes propagate to the subscriber.
Run the following T-SQL snippet to view the rows on the subscriber:

select * from dbo.ReplTest

Run the following T-SQL snippet to insert additional rows on the publisher, and then check the rows again on
the subscriber.

INSERT INTO ReplTest (ID, c1) VALUES (15, 'pub')

Clean up resources
To drop the publication, run the following T-SQL command:

-- Drops the publication


USE [ReplTran_PUB]
EXEC sp_droppublication @publication = N'PublishData'
GO

To remove the replication option from the database, run the following T-SQL command:

-- Disables publishing of the database


USE [ReplTran_PUB]
EXEC sp_removedbreplication
GO

To disable publishing and distribution, run the following T-SQL command:

-- Drops the distributor


USE [master]
EXEC sp_dropdistributor @no_checks = 1
GO

You can clean up your Azure resources by deleting the SQL Managed Instance resources from the resource
group and then deleting the resource group SQLMI-Repl .

Next steps
You can learn more about transactional replication with Azure SQL Managed Instance, or learn to configure
replication between a SQL Managed Instance publisher/distributor and a SQL Server on Azure VM subscriber.
Tutorial: Configure transactional replication between
Azure SQL Managed Instance and SQL Server
7/12/2022 • 12 minutes to read

APPLIES TO: Azure SQL Managed Instance


Transactional replication allows you to replicate data from one database to another hosted on either SQL Server
or Azure SQL Managed Instance. SQL Managed Instance can be a publisher, distributor or subscriber in the
replication topology. See transactional replication configurations for available configurations.
Transactional replication is currently in public preview for SQL Managed Instance.
In this tutorial, you learn how to:
Configure a managed instance as a replication publisher.
Configure a managed instance as a replication distributor.
Configure SQL Server as a subscriber.

This tutorial is intended for an experienced audience and assumes that the user is familiar with deploying and
connecting to both managed instances and SQL Server VMs within Azure.

NOTE
This article describes the use of transactional replication in Azure SQL Managed Instance. It is unrelated to failover groups,
an Azure SQL Managed Instance feature that allows you to create complete readable replicas of individual instances.
There are additional considerations when configuring transactional replication with failover groups.

Prerequisites
To complete the tutorial, make sure you have the following prerequisites:
An Azure subscription.
Experience with deploying two managed instances within the same virtual network.
A SQL Server subscriber, either on-premises or on an Azure VM. This tutorial uses an Azure VM.
SQL Server Management Studio (SSMS) 18.0 or greater.
The latest version of Azure PowerShell.
Ports 445 and 1433 allow SQL traffic on both the Azure firewall and the Windows firewall.

Create the resource group


Use the following PowerShell code snippet to create a new resource group:

# set variables
$ResourceGroupName = "SQLMI-Repl"
$Location = "East US 2"

# Create a new resource group


New-AzResourceGroup -Name $ResourceGroupName -Location $Location

Create two managed instances


Create two managed instances within this new resource group using the Azure portal.
The name of the publisher managed instance should be sql-mi-publisher (along with a few characters
for randomization), and the name of the virtual network should be vnet-sql-mi-publisher .
The name of the distributor managed instance should be sql-mi-distributor (along with a few
characters for randomization), and it should be in the same virtual network as the publisher managed
instance.
For more information about creating a managed instance, see Create a managed instance in the portal.

NOTE
For the sake of simplicity, and because it is the most common configuration, this tutorial suggests placing the distributor
managed instance within the same virtual network as the publisher. However, it's possible to create the distributor in a
separate virtual network. To do so, you will need to configure VNet peering between the virtual networks of the publisher
and distributor, and then configure VNet peering between the virtual networks of the distributor and subscriber.

Create a SQL Server VM


Create a SQL Server virtual machine using the Azure portal. The SQL Server virtual machine should have the
following characteristics:
Name: sql-vm-sub
Image: SQL Server 2016 or greater
Resource group: the same as the managed instance
Virtual network: sql-vm-sub-vnet

For more information about deploying a SQL Server VM to Azure, see Quickstart: Create a SQL Server VM.

Configure VNet peering


Configure VNet peering to enable communication between the virtual network of the two managed instances,
and the virtual network of SQL Server. To do so, use this PowerShell code snippet:

# Set variables
$SubscriptionId = '<SubscriptionID>'
$resourceGroup = 'SQLMI-Repl'
$pubvNet = 'sql-mi-publisher-vnet'
$subvNet = 'sql-vm-sub-vnet'
$pubsubName = 'Pub-to-Sub-Peer'
$subpubName = 'Sub-to-Pub-Peer'

$virtualNetwork1 = Get-AzVirtualNetwork `
-ResourceGroupName $resourceGroup `
-Name $pubvNet

$virtualNetwork2 = Get-AzVirtualNetwork `
-ResourceGroupName $resourceGroup `
-Name $subvNet

# Configure VNet peering from publisher to subscriber


Add-AzVirtualNetworkPeering `
-Name $pubsubName `
-VirtualNetwork $virtualNetwork1 `
-RemoteVirtualNetworkId $virtualNetwork2.Id

# Configure VNet peering from subscriber to publisher


Add-AzVirtualNetworkPeering `
-Name $subpubName `
-VirtualNetwork $virtualNetwork2 `
-RemoteVirtualNetworkId $virtualNetwork1.Id

# Check status of peering on the publisher VNet; should say connected


Get-AzVirtualNetworkPeering `
-ResourceGroupName $resourceGroup `
-VirtualNetworkName $pubvNet `
| Select PeeringState

# Check status of peering on the subscriber VNet; should say connected


Get-AzVirtualNetworkPeering `
-ResourceGroupName $resourceGroup `
-VirtualNetworkName $subvNet `
| Select PeeringState

Once VNet peering is established, test connectivity by launching SQL Server Management Studio (SSMS) on
SQL Server and connecting to both managed instances. For more information on connecting to a managed
instance using SSMS, see Use SSMS to connect to SQL Managed Instance.

Create a private DNS zone


A private DNS zone allows DNS routing between the managed instances and SQL Server.
Create a private DNS zone
1. Sign into the Azure portal.
2. Select Create a resource to create a new Azure resource.
3. Search for private dns zone on Azure Marketplace.
4. Choose the Private DNS zone resource published by Microsoft and then select Create to create the
DNS zone.
5. Choose the subscription and resource group from the drop-down.
6. Provide an arbitrary name for your DNS zone, such as repldns.com .

7. Select Review + create . Review the parameters for your private DNS zone and then select Create to
create your resource.
Create an A record
1. Go to your new Private DNS zone and select Overview .
2. Select + Record set to create a new A record.
3. Provide the name of your SQL Server VM as well as the private internal IP address.

4. Select OK to create the A record.


Link the virtual network
1. Go to your new Private DNS zone and select Virtual network links .
2. Select + Add .
3. Provide a name for the link, such as Pub-link .
4. Select your subscription from the drop-down and then select the virtual network for your publisher
managed instance.
5. Check the box next to Enable auto registration .
6. Select OK to link your virtual network.
7. Repeat these steps to add a link for the subscriber virtual network, with a name such as Sub-link .
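The zone, A record, and virtual network links can also be created with PowerShell. The following is a minimal
sketch using the example names from the steps above; the resource group, the virtual network names (taken
from the earlier peering snippet), and the VM's private IP address are assumptions to replace with your own
values.

# Minimal sketch (assumed names and IP address): private DNS zone, A record, and virtual network links.
New-AzPrivateDnsZone -ResourceGroupName "SQLMI-Repl" -Name "repldns.com"

# A record that points the SQL Server VM name to its private IP address
New-AzPrivateDnsRecordSet -ResourceGroupName "SQLMI-Repl" -ZoneName "repldns.com" `
    -Name "sql-vm-sub" -RecordType A -Ttl 3600 `
    -PrivateDnsRecords (New-AzPrivateDnsRecordConfig -IPv4Address "10.0.0.4")

# Link the publisher and subscriber virtual networks, with auto registration enabled
$pubVnet = Get-AzVirtualNetwork -ResourceGroupName "SQLMI-Repl" -Name "sql-mi-publisher-vnet"
$subVnet = Get-AzVirtualNetwork -ResourceGroupName "SQLMI-Repl" -Name "sql-vm-sub-vnet"

New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "SQLMI-Repl" -ZoneName "repldns.com" `
    -Name "Pub-link" -VirtualNetworkId $pubVnet.Id -EnableRegistration
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "SQLMI-Repl" -ZoneName "repldns.com" `
    -Name "Sub-link" -VirtualNetworkId $subVnet.Id -EnableRegistration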

Create an Azure storage account


Create an Azure storage account for the working directory, and then create a file share within the storage
account.
Copy the file share path in the format of: \\storage-account-name.file.core.windows.net\file-share-name

Example: \\replstorage.file.core.windows.net\replshare

Copy the storage access key connection string in the format of:
DefaultEndpointsProtocol=https;AccountName=<Storage-Account-
Name>;AccountKey=****;EndpointSuffix=core.windows.net

Example:
DefaultEndpointsProtocol=https;AccountName=replstorage;AccountKey=dYT5hHZVu9aTgIteGfpYE64cfis0mpKTmmc8+EP53GxuRg6TCwe5eTYWrQM4AmQSG5lb3OBskhg==;EndpointSuffix=core.windows.net

For more information, see Manage storage account access keys.

Create a database
Create a new database on the publisher managed instance. To do so, follow these steps:
1. Launch SQL Server Management Studio on SQL Server.
2. Connect to the sql-mi-publisher managed instance.
3. Open a New Query window and execute the following T-SQL query to create the database.

-- Create the databases


USE [master]
GO

-- Drop database if it exists


IF EXISTS (SELECT * FROM sys.sysdatabases WHERE name = 'ReplTutorial')
BEGIN
DROP DATABASE ReplTutorial
END
GO

-- Create new database


CREATE DATABASE [ReplTutorial]
GO

-- Create table
USE [ReplTutorial]
GO
CREATE TABLE ReplTest (
ID INT NOT NULL PRIMARY KEY,
c1 VARCHAR(100) NOT NULL,
dt1 DATETIME NOT NULL DEFAULT getdate()
)
GO

-- Populate table with data


USE [ReplTutorial]
GO

INSERT INTO ReplTest (ID, c1) VALUES (6, 'pub')


INSERT INTO ReplTest (ID, c1) VALUES (2, 'pub')
INSERT INTO ReplTest (ID, c1) VALUES (3, 'pub')
INSERT INTO ReplTest (ID, c1) VALUES (4, 'pub')
INSERT INTO ReplTest (ID, c1) VALUES (5, 'pub')
GO
SELECT * FROM ReplTest
GO

Configure distribution
Once connectivity is established and you have a sample database, you can configure distribution on your
sql-mi-distributor managed instance. To do so, follow these steps:
1. Launch SQL Server Management Studio on SQL Server.
2. Connect to the sql-mi-distributor managed instance.
3. Open a New Query window and run the following Transact-SQL code to configure distribution on the
distributor managed instance:

EXEC sp_adddistributor @distributor = 'sql-mi-distributor.b6bf57.database.windows.net', @password = '<distributor_admin_password>'

EXEC sp_adddistributiondb @database = N'distribution'

EXEC sp_adddistpublisher @publisher = 'sql-mi-publisher.b6bf57.database.windows.net', -- primary publisher
@distribution_db = N'distribution',
@security_mode = 0,
@login = N'azureuser',
@password = N'<publisher_password>',
@working_directory = N'\\replstorage.file.core.windows.net\replshare',
@storage_connection_string = N'<storage_connection_string>'
-- example: @storage_connection_string = N'DefaultEndpointsProtocol=https;AccountName=replstorage;AccountKey=dYT5hHZVu9aTgIteGfpYE64cfis0mpKTmmc8+EP53GxuRg6TCwe5eTYWrQM4AmQSG5lb3OBskhg==;EndpointSuffix=core.windows.net'

NOTE
Be sure to use only backslashes ( \ ) for the @working_directory parameter. Using a forward slash ( / ) can cause
an error when connecting to the file share.

4. Connect to the sql-mi-publisher managed instance.


5. Open a New Query window and run the following Transact-SQL code to register the distributor at the
publisher:

Use MASTER
EXEC sys.sp_adddistributor @distributor = 'sql-mi-distributor.b6bf57.database.windows.net', @password
= '<distributor_admin_password>'

Create the publication


Once distribution has been configured, you can now create the publication. To do so, follow these steps:
1. Launch SQL Server Management Studio on SQL Server.
2. Connect to the sql-mi-publisher managed instance.
3. In Object Explorer , expand the Replication node and right-click the Local Publication folder. Select
New Publication....
4. Select Next to move past the welcome page.
5. On the Publication Database page, select the ReplTutorial database you created previously. Select
Next .
6. On the Publication type page, select Transactional publication . Select Next .
7. On the Articles page, check the box next to Tables . Select Next .
8. On the Filter Table Rows page, select Next without adding any filters.
9. On the Snapshot Agent page, check the box next to Create snapshot immediately and keep the
snapshot available to initialize subscriptions . Select Next .
10. On the Agent Security page, select Security Settings.... Provide SQL Server login credentials to use
for the Snapshot Agent, and to connect to the publisher. Select OK to close the Snapshot Agent
Security page. Select Next .
11. On the Wizard Actions page, choose to Create the publication and (optionally) choose to Generate
a script file with steps to create the publication if you want to save this script for later.
12. On the Complete the Wizard page, name your publication ReplTest and select Next to create your
publication.
13. Once your publication has been created, refresh the Replication node in Object Explorer and expand
Local Publications to see your new publication.

Create the subscription


Once the publication has been created, you can create the subscription. To do so, follow these steps:
1. Launch SQL Server Management Studio on SQL Server.
2. Connect to the sql-mi-publisher managed instance.
3. Open a New Query window and run the following Transact-SQL code to add the subscription and
distribution agent. Use the DNS as part of the subscriber name.

use [ReplTutorial]
exec sp_addsubscription
@publication = N'ReplTest',
@subscriber = N'sql-vm-sub.repldns.com', -- include the DNS configured in the private DNS zone
@destination_db = N'ReplSub',
@subscription_type = N'Push',
@sync_type = N'automatic',
@article = N'all',
@update_mode = N'read only',
@subscriber_type = 0

exec sp_addpushsubscription_agent
@publication = N'ReplTest',
@subscriber = N'sql-vm-sub.repldns.com', -- include the DNS configured in the private DNS zone
@subscriber_db = N'ReplSub',
@job_login = N'azureuser',
@job_password = '<Complex Password>',
@subscriber_security_mode = 0,
@subscriber_login = N'azureuser',
@subscriber_password = '<Complex Password>',
@dts_package_location = N'Distributor'
GO

Test replication
Once replication has been configured, you can test it by inserting new items on the publisher and watching the
changes propagate to the subscriber.
Run the following T-SQL snippet to view the rows on the subscriber:

Use ReplSub
select * from dbo.ReplTest

Run the following T-SQL snippet to insert additional rows on the publisher, and then check the rows again on
the subscriber.
Use ReplTutorial
INSERT INTO ReplTest (ID, c1) VALUES (15, 'pub')

Clean up resources
1. Navigate to your resource group in the Azure portal.
2. Select the managed instance(s) and then select Delete . Type yes in the text box to confirm you want to
delete the resource and then select Delete . This process may take some time to complete in the background,
and until it's done, you will not be able to delete the virtual cluster or any other dependent resources.
Monitor the deletion in the Activity tab to confirm your managed instance has been deleted.
3. Once the managed instance is deleted, delete the virtual cluster by selecting it in your resource group, and
then choosing Delete . Type yes in the text box to confirm you want to delete the resource and then select
Delete .
4. Delete any remaining resources. Type yes in the text box to confirm you want to delete the resource and
then select Delete .
5. Delete the resource group by selecting Delete resource group , typing in the name of the resource group,
myResourceGroup , and then selecting Delete .

Known errors
Windows logins are not supported
Exception Message: Windows logins are not supported in this version of SQL Server.

The agent was configured with a Windows login and needs to use a SQL Server login instead. Use the Agent
Security page of the Publication properties to change the login credentials to a SQL Server login.
Failed to connect to Azure Storage
Connecting to Azure Files Storage '\\replstorage.file.core.windows.net\replshare' Failed to connect to Azure
Storage '' with OS error: 53.

2019-11-19 02:21:05.07 Obtained Azure Storage Connection String for replstorage 2019-11-19 02:21:05.07
Connecting to Azure Files Storage '\replstorage.file.core.windows.net\replshare' 2019-11-19 02:21:31.21 Failed
to connect to Azure Storage '' with OS error: 53.
This is likely because port 445 is closed in either the Azure firewall, the Windows firewall, or both.
Connecting to Azure Files Storage '\\replstorage.file.core.windows.net\replshare' Failed to connect to Azure
Storage '' with OS error: 55.

Using a forward slash instead of backslash in the file path for the file share can cause this error.
This is okay: \\replstorage.file.core.windows.net\replshare
This can cause an OS 55 error: '\\replstorage.file.core.windows.net/replshare'
Could not connect to Subscriber
The process could not connect to Subscriber 'SQL-VM-SUB Could not open a connection to SQL Server [53].
A network-related or instance-specific error has occurred while establishing a connection to SQL Server.
Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to
allow remote connections.

Possible solutions:
Ensure port 1433 is open.
Ensure TCP/IP is enabled on the subscriber.
Confirm the DNS name was used when creating the subscriber.
Verify that your virtual networks are correctly linked in the private DNS zone.
Verify your A record is configured correctly.
Verify your VNet peering is configured correctly.
No publications to which you can subscribe
When you're adding a new subscription using the New Subscription wizard, on the Publication page, you
may find that there are no databases and publications listed as available options, and you might see the
following error message:
There are no publications to which you can subscribe, either because this server has no publications or
because you do not have sufficient privileges to access the publications.

While it's possible that this error message is accurate, and there really aren't publications available on the
publisher you connected to, or you're lacking sufficient permissions, this error could also be caused by an older
version of SQL Server Management Studio. Try upgrading to SQL Server Management Studio 18.0 or greater to
rule this out as a root cause.

Next steps
Enable security features
See the What is Azure SQL Managed Instance? article for a comprehensive list of ways to secure your database.
The following security features are discussed:
SQL Managed Instance auditing
Always Encrypted
Threat detection
Dynamic data masking
Row-level security
Transparent data encryption (TDE)
SQL Managed Instance capabilities
For a complete overview of managed instance capabilities, see:
SQL Managed Instance capabilities
Migration guide: IBM Db2 to Azure SQL Managed
Instance
7/12/2022 • 6 minutes to read

APPLIES TO: Azure SQL Managed Instance


This guide teaches you to migrate your IBM Db2 databases to Azure SQL Managed Instance, by using the SQL
Server Migration Assistant for Db2.
For other migration guides, see Azure Database Migration Guides.

Prerequisites
To migrate your Db2 database to SQL Managed Instance, you need:
To verify that your source environment is supported.
To download SQL Server Migration Assistant (SSMA) for Db2.
A target instance of Azure SQL Managed Instance.
Connectivity and sufficient permissions to access both source and target.

Pre-migration
After you have met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your migration.
Assess and convert
Create an assessment by using SQL Server Migration Assistant.
To create an assessment, follow these steps:
1. Open SSMA for Db2.
2. Select File > New Project .
3. Provide a project name and a location to save your project. Then select Azure SQL Managed Instance as
the migration target from the drop-down list, and select OK .
4. On Connect to Db2 , enter values for the Db2 connection details.

5. Right-click the Db2 schema you want to migrate, and then choose Create report . This will generate an
HTML report. Alternatively, you can choose Create report from the navigation bar after selecting the
schema.
6. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Db2 objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example: drive:\<username>\Documents\SSMAProjects\MyDb2Migration\report\report_<date> .

Validate data types


Validate the default data type mappings, and change them based on requirements if necessary. To do so, follow
these steps:
1. Select Tools from the menu.
2. Select Project Settings .
3. Select the Type mappings tab.
4. You can change the type mapping for each table by selecting the table in the Db2 Metadata Explorer .
Convert schema
To convert the schema, follow these steps:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then choose Add
statements .
2. Select Connect to Azure SQL Managed Instance .
a. Enter connection details to connect to Azure SQL Managed Instance.
b. Choose your target database from the drop-down list, or provide a new name, in which case a
database will be created on the target server.
c. Provide authentication details.
d. Select Connect .

3. Right-click the schema, and then choose Convert Schema . Alternatively, you can choose Convert
Schema from the top navigation bar after selecting your schema.
4. After the conversion completes, compare and review the structure of the schema to identify potential
problems. Address the problems based on the recommendations.

5. In the Output pane, select Review results . In the Error list pane, review errors.
6. Save the project locally for an offline schema remediation exercise. From the File menu, select Save
Project . This gives you an opportunity to evaluate the source and target schemas offline, and perform
remediation before you can publish the schema to SQL Managed Instance.

Migrate
After you have completed assessing your databases and addressing any discrepancies, the next step is to
execute the migration process.
To publish your schema and migrate your data, follow these steps:
1. Publish the schema. In Azure SQL Managed Instance Metadata Explorer , from the Databases node,
right-click the database. Then select Synchronize with Database .
2. Migrate the data. Right-click the database or object you want to migrate in Db2 Metadata Explorer , and
choose Migrate data . Alternatively, you can select Migrate Data from the navigation bar. To migrate
data for an entire database, select the check box next to the database name. To migrate data from
individual tables, expand the database, expand Tables , and then select the check box next to the table. To
omit data from individual tables, clear the check box.
3. Provide connection details for both Db2 and SQL Managed Instance.
4. After migration completes, view the Data Migration Report .

5. Connect to your instance of Azure SQL Managed Instance by using SQL Server Management Studio.
Validate the migration by reviewing the data and schema:
Post-migration
After the migration is complete, you need to go through a series of post-migration tasks to ensure that
everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this will in some cases require changes to the applications.
Perform tests
Testing consists of the following activities:
1. Develop validation tests : To test database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you have defined.
2. Set up the test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.

Advanced features
Be sure to take advantage of the advanced cloud-based features offered by Azure SQL Managed Instance, such
as built-in high availability, threat detection, and monitoring and tuning your workload.
Some SQL Server features are only available when the database compatibility level is changed to the latest
compatibility level.

Migration assets
For additional assistance, see the following resources, which were developed in support of a real-world
migration project engagement:
Data workload assessment model and tool: This tool provides suggested "best fit" target platforms, cloud
readiness, and application/database remediation level for a given workload. It offers simple, one-click
calculation and report generation that helps to accelerate large estate assessments by providing an automated
and uniform target platform decision process.

Db2 zOS data assets discovery and assessment package: After running the SQL script on a database, you can
export the results to a file on the file system. Several file formats are supported, including *.csv, so that you
can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily
share results with teams that do not have the workbench installed.

IBM Db2 LUW inventory scripts and artifacts: This asset includes a SQL query that hits IBM Db2 LUW version 11.1
system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each
schema, and the sizing of tables in each schema, with results stored in a CSV format.

IBM Db2 to SQL MI - Database Compare utility: The Database Compare utility is a Windows console application
that you can use to verify that the data is identical both on source and target platforms. You can use the tool
to efficiently compare data down to the row or column level in all or selected tables, rows, and columns.

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
For Microsoft and third-party services and tools to assist you with various database and data migration
scenarios, see Service and tools for data migration.
To learn more about Azure SQL Managed Instance, see:
An overview of SQL Managed Instance
Azure total cost of ownership calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit.
For details on how to perform data access layer A/B testing, see Database Experimentation Assistant.
Migration guide: Oracle to Azure SQL Managed
Instance
7/12/2022 • 9 minutes to read

APPLIES TO: Azure SQL Managed Instance


This guide teaches you to migrate your Oracle schemas to Azure SQL Managed Instance by using SQL Server
Migration Assistant for Oracle.
For other migration guides, see Azure Database Migration Guides.

Prerequisites
Before you begin migrating your Oracle schema to SQL Managed Instance:
Verify your source environment is supported.
Download SSMA for Oracle.
Have a SQL Managed Instance target.
Obtain the necessary permissions for SSMA for Oracle and provider.

Pre-migration
After you've met the prerequisites, you're ready to discover the topology of your environment and assess the
feasibility of your migration. This part of the process involves conducting an inventory of the databases that you
need to migrate, assessing those databases for potential migration issues or blockers, and then resolving any
items you might have uncovered.
Assess
By using SSMA for Oracle, you can review database objects and data, assess databases for migration, migrate
database objects to SQL Managed Instance, and then finally migrate data to the database.
To create an assessment:
1. Open SSMA for Oracle.
2. Select File , and then select New Project .
3. Enter a project name and a location to save your project. Then select Azure SQL Managed Instance as
the migration target from the drop-down list and select OK .
4. Select Connect to Oracle . Enter values for Oracle connection details in the Connect to Oracle dialog
box.

5. Select the Oracle schemas you want to migrate.


6. In Oracle Metadata Explorer , right-click the Oracle schema you want to migrate and then select
Create Report to generate an HTML report. Instead, you can select a database and then select the
Create Report tab.

7. Review the HTML report to understand conversion statistics and any errors or warnings. You can also
open the report in Excel to get an inventory of Oracle objects and the effort required to perform schema
conversions. The default location for the report is in the report folder within SSMAProjects.
For example, see
drive:\<username>\Documents\SSMAProjects\MyOracleMigration\report\report_2020_11_12T02_47_55\ .
Validate the data types
Validate the default data type mappings and change them based on requirements if necessary. To do so, follow
these steps:
1. In SSMA for Oracle, select Tools , and then select Project Settings .
2. Select the Type Mapping tab.

3. You can change the type mapping for each table by selecting the table in Oracle Metadata Explorer .
Convert the schema
To convert the schema:
1. (Optional) Add dynamic or ad-hoc queries to statements. Right-click the node, and then select Add
statements .
2. Select the Connect to Azure SQL Managed Instance tab.
a. Enter connection details to connect to your database in Azure SQL Managed Instance.
b. Select your target database from the drop-down list, or enter a new name, in which case a database
will be created on the target server.
c. Enter authentication details, and select Connect .

3. In Oracle Metadata Explorer , right-click the Oracle schema and then select Convert Schema . Or, you
can select your schema and then select the Convert Schema tab.

4. After the conversion finishes, compare and review the converted objects to the original objects to identify
potential problems and address them based on the recommendations.
5. Compare the converted Transact-SQL text to the original code, and review the recommendations.

6. In the output pane, select Review results and review the errors in the Error List pane.
7. Save the project locally for an offline schema remediation exercise. On the File menu, select Save
Project . This step gives you an opportunity to evaluate the source and target schemas offline and
perform remediation before you publish the schema to SQL Managed Instance.

Migrate
After you've completed assessing your databases and addressing any discrepancies, the next step is to run the
migration process. Migration involves two steps: publishing the schema and migrating the data.
To publish your schema and migrate your data:
1. Publish the schema by right-clicking the database from the Databases node in Azure SQL Managed
Instance Metadata Explorer and selecting Synchronize with Database .
2. Review the mapping between your source project and your target.

3. Migrate the data by right-clicking the schema or object you want to migrate in Oracle Metadata
Explorer and selecting Migrate Data . Or, you can select the Migrate Data tab. To migrate data for an
entire database, select the check box next to the database name. To migrate data from individual tables,
expand the database, expand Tables , and then select the checkboxes next to the tables. To omit data from
individual tables, clear the checkboxes.
4. Enter connection details for both Oracle and SQL Managed Instance.
5. After the migration is completed, view the Data Migration Report .

6. Connect to your instance of SQL Managed Instance by using SQL Server Management Studio, and
validate the migration by reviewing the data and schema.
Or, you can also use SQL Server Integration Services to perform the migration. To learn more, see:
Getting started with SQL Server Integration Services
SQL Server Integration Services for Azure and Hybrid Data Movement

Post-migration
After you've successfully completed the migration stage, you need to complete a series of post-migration tasks
to ensure that everything is functioning as smoothly and efficiently as possible.
Remediate applications
After the data is migrated to the target environment, all the applications that formerly consumed the source
need to start consuming the target. Accomplishing this step will require changes to the applications in some
cases.
The Data Access Migration Toolkit is an extension for Visual Studio Code that allows you to analyze your Java
source code and detect data access API calls and queries. The toolkit provides you with a single-pane view of
what needs to be addressed to support the new database back end. To learn more, see the Migrate our Java
application from Oracle blog post.
Perform tests
The test approach to database migration consists of the following activities:
1. Develop validation tests : To test the database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment : The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests : Run validation tests against the source and the target, and then analyze the results.
4. Run performance tests : Run performance tests against the source and the target, and then analyze and
compare the results.
Validate migrated objects
Microsoft SQL Server Migration Assistant for Oracle Tester (SSMA Tester) allows you to test migrated database
objects. The SSMA Tester is used to verify that converted objects behave in the same way.
Create test case
1. Open SSMA for Oracle, select Tester followed by New Test Case .

2. On the Test Case wizard, provide the following information:


Name: Enter the name to identify the test case.
Creation date: Today's current date, defined automatically.
Last Modified date: Filled in automatically, should not be changed.
Description: Enter any additional information to identify the purpose of the test case.

3. Select the objects that are part of the test case from the Oracle object tree located on the left side.
In this example, the stored procedure ADD_REGION and the table REGION are selected.
To learn more, see Selecting and configuring objects to test.
4. Next, select the tables, foreign keys and other dependent objects from the Oracle object tree in the left
window.

To learn more, see Selecting and configuring affected objects.


5. Review the evaluation sequence of objects. Change the order by clicking the buttons in the grid.
6. Finalize the test case by reviewing the information provided in the previous steps. Configure the test
execution options based on the test scenario.

For more information on test case settings, see Finishing test case preparation.

7. Click Finish to create the test case.
Run test case
When SSMA Tester runs a test case, the test engine executes the objects selected for testing and generates a
verification report.
1. Select the test case from the test repository, and then click Run.

2. Review the test case launch settings, and then click Run.


3. Provide the Oracle source credentials, and then click Connect.

4. Provide the target SQL Managed Instance credentials, and then click Connect.

On success, the test case moves to the initialization stage.
5. A real-time progress bar shows the execution status of the test run.

6. Review the report after the test is completed. The report provides statistics, any errors encountered
during the test run, and a detailed report.
7. Click Details to get more information.
Example of positive data validation.

Example of failed data validation.


Optimize
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.

NOTE
For more information about these issues and the steps to mitigate them, see the Post-migration validation and
optimization guide.

Migration assets
For more assistance with completing this migration scenario, see the following resources. They were developed
in support of a real-world migration project engagement.

Data Workload Assessment Model and Tool: This tool provides suggested "best fit" target platforms, cloud
readiness, and application or database remediation level for a given workload. It offers simple, one-click
calculation and report generation that helps to accelerate large estate assessments by providing an automated
and uniform target platform decision process.

Oracle Inventory Script Artifacts: This asset includes a PL/SQL query that hits Oracle system tables and
provides a count of objects by schema type, object type, and status. It also provides a rough estimate of raw
data in each schema and the sizing of tables in each schema, with results stored in a CSV format.

Automate SSMA Oracle Assessment Collection & Consolidation: This set of resources uses a .csv file as entry
(sources.csv in the project folders) to produce the xml files that are needed to run an SSMA assessment in
console mode. The source.csv is provided by the customer based on an inventory of existing Oracle instances.
The output files are AssessmentReportGeneration_source_1.xml, ServersConnectionFile.xml, and
VariableValueFile.xml.

Oracle to SQL MI - Database Compare utility: SSMA for Oracle Tester is the recommended tool to automatically
validate the database object conversion and data migration, and it's a superset of Database Compare
functionality. If you're looking for an alternative data validation option, you can use the Database Compare
utility to compare data down to the row or column level in all or selected tables, rows, and columns.

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
For a matrix of Microsoft and third-party services and tools that are available to assist you with various
database and data migration scenarios and specialty tasks, see Services and tools for data migration.
To learn more about SQL Managed Instance, see:
An overview of Azure SQL Managed Instance
Azure Total Cost of Ownership (TCO) Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads for migration to Azure
For video content, see:
Overview of the migration journey and the tools and services recommended for performing
assessment and migration
Migration overview: SQL Server to Azure SQL
Managed Instance
7/12/2022 • 17 minutes to read

APPLIES TO: Azure SQL Managed Instance


Learn about the options and considerations for migrating your SQL Server databases to Azure SQL Managed
Instance.
You can migrate SQL Server databases running on-premises or on:
SQL Server on Azure Virtual Machines.
Amazon Web Services (AWS) Elastic Compute Cloud (EC2).
AWS Relational Database Service (RDS).
Compute Engine in Google Cloud Platform (GCP).
Cloud SQL for SQL Server in GCP.
For other migration guides, see Database Migration.

Overview
Azure SQL Managed Instance is a recommended target option for SQL Server workloads that require a fully
managed service without having to manage virtual machines or their operating systems. SQL Managed Instance
enables you to move your on-premises applications to Azure with minimal application or database changes. It
offers complete isolation of your instances with native virtual network support.
Be sure to review the SQL Server database engine features available in Azure SQL Managed Instance to validate
the supportability of your migration target.

Considerations
The key factors to consider when you're evaluating migration options are:
Number of servers and databases
Size of databases
Acceptable business downtime during the migration process
One of the key benefits of migrating your SQL Server databases to SQL Managed Instance is that you can
choose to migrate the entire instance or just a subset of individual databases. Carefully plan to include the
following in your migration process:
All databases that need to be colocated to the same instance
Instance-level objects required for your application, including logins, credentials, SQL Agent jobs and
operators, and server-level triggers

NOTE
Azure SQL Managed Instance guarantees 99.99 percent availability, even in critical scenarios. Overhead caused by some
features in SQL Managed Instance can't be disabled. For more information, see the Key causes of performance
differences between SQL Managed Instance and SQL Server blog entry.
Choose an appropriate target
You can use the Azure SQL migration extension for Azure Data Studio to get a right-sized Azure SQL Managed
Instance recommendation. The extension collects performance data from your source SQL Server instance to
provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost.
To learn more, see Get right-sized Azure recommendation for your on-premises SQL Server database(s).
The following general guidelines can help you choose the right service tier and characteristics of SQL Managed
Instance to help match your performance baseline:
Use the CPU usage baseline to provision a managed instance that matches the number of cores that your
instance of SQL Server uses. It might be necessary to scale resources to match the hardware configuration
characteristics.
Use the memory usage baseline to choose a vCore option that appropriately matches your memory
allocation.
Use the baseline I/O latency of the file subsystem to choose between the General Purpose (latency greater
than 5 ms) and Business Critical (latency less than 3 ms) service tiers. A sample baseline query follows this list.
Use the baseline throughput to preallocate the size of the data and log files to achieve expected I/O
performance.
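As a minimal sketch of how the I/O latency baseline might be captured on the source SQL Server (not part of the official guidance), the following query derives average read and write latency per database file from sys.dm_io_virtual_file_stats:

-- Average I/O latency per database file on the source instance; compare against the
-- General Purpose (> 5 ms) and Business Critical (< 3 ms) guidance above.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.type_desc             AS file_type,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;

Because the DMV accumulates values since the last restart, sample it during a representative workload window.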
You can choose compute and storage resources during deployment and then change them afterward by using
the Azure portal, without incurring downtime for your application.

IMPORTANT
Any discrepancy in the virtual network requirements for managed instances can prevent you from creating new instances
or using existing ones. Learn more about creating new and configuring existing networks.

Another key consideration in the selection of the target service tier in Azure SQL Managed Instance (General
Purpose versus Business Critical) is the availability of certain features, like In-Memory OLTP, that are available
only in the Business Critical tier.
SQL Server VM alternative
Your business might have requirements that make SQL Server on Azure Virtual Machines a more suitable target
than Azure SQL Managed Instance.
If one of the following conditions applies to your business, consider moving to a SQL Server virtual machine
(VM) instead:
You require direct access to the operating system or file system, such as to install third-party or custom
agents on the same virtual machine with SQL Server.
You have strict dependency on features that are still not supported, such as FileStream/FileTable, PolyBase,
and cross-instance transactions.
You need to stay at a specific version of SQL Server (2012, for example).
Your compute requirements are much lower than a managed instance offers (one vCore, for example), and
database consolidation is not an acceptable option.

Migration tools
We recommend the following migration tools:

Azure SQL migration extension for Azure Data Studio: The extension provides both SQL Server assessment and
migration capabilities in Azure Data Studio. It supports migrations in either online mode (for migrations that
require minimal downtime) or offline mode (for migrations where downtime persists through the duration of the
migration).

Azure Migrate: This Azure service helps you discover and assess your SQL data estate at scale on VMware. It
provides Azure SQL deployment recommendations, target sizing, and monthly estimates.

Azure Database Migration Service: This Azure service supports migration in the offline mode for applications
that can afford downtime during the migration process. Unlike the continuous migration in online mode, offline
mode migration runs a one-time restore of a full database backup from the source to the target.

Native backup and restore: SQL Managed Instance supports restore of native SQL Server database backups
(.bak files). It's the easiest migration option for customers who can provide full database backups to Azure
Storage.

Log Replay Service: This cloud service is enabled for SQL Managed Instance based on SQL Server log-shipping
technology. It's a migration option for customers who can provide full, differential, and log database backups
to Azure Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed
Instance.

Managed Instance link: This feature enables online migration to Managed Instance by using Always On
technology. It's a migration option for customers who require the database on Managed Instance to be
accessible in read-only mode while migration is in progress, who need to keep the migration running for
prolonged periods of time (weeks or months at a time), who require true online replication to the Business
Critical service tier, and who require the most performant minimum-downtime migration.

The following table lists alternative migration tools:

Transactional replication: Replicate data from source SQL Server database tables to SQL Managed Instance by
providing a publisher-subscriber type migration option while maintaining transactional consistency.

Bulk copy: The bulk copy program (bcp) tool copies data from an instance of SQL Server into a data file. Use
the tool to export the data from your source and import the data file into the target SQL managed instance.
For high-speed bulk copy operations to move data to Azure SQL Managed Instance, you can use the Smart Bulk
Copy tool to maximize transfer speed by taking advantage of parallel copy tasks.

Import Export Wizard/BACPAC: BACPAC is a Windows file with a .bacpac extension that encapsulates a
database's schema and data. You can use BACPAC to both export data from a SQL Server source and import the
data back into Azure SQL Managed Instance.

Azure Data Factory: The Copy activity in Azure Data Factory migrates data from source SQL Server databases
to SQL Managed Instance by using built-in connectors and an integration runtime. Data Factory supports a wide
range of connectors to move data from SQL Server sources to SQL Managed Instance.

Compare migration options


Compare migration options to choose the path that's appropriate to your business needs.
The following table compares the recommended migration options:

Azure SQL migration extension for Azure Data Studio
When to use:
- Migrate single databases or multiple databases at scale.
- Can run in both online (minimal downtime) and offline (acceptable downtime) modes.
- Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Easy to set up and get started.
- Requires setup of a self-hosted integration runtime to access on-premises SQL Server and backups.
- Includes both assessment and migration capabilities.

Azure Database Migration Service
When to use:
- Migrate single databases or multiple databases at scale.
- Can run in both online (minimal downtime) and offline modes.
- Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Migrations at scale can be automated via PowerShell.
- Time to complete migration depends on database size and is affected by backup and restore time.
- Sufficient downtime might be required.

Native backup and restore
When to use:
- Migrate individual line-of-business application databases.
- Quick and easy migration without a separate migration service or tool.
- Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Database backup uses multiple threads to optimize data transfer to Azure Blob Storage, but network bandwidth and database size can affect transfer rate.
- Downtime should accommodate the time required to perform a full backup and restore (which is a size-of-data operation).

Log Replay Service
When to use:
- Migrate individual line-of-business application databases.
- More control is needed for database migrations.
- Supported sources: SQL Server (2008 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- The migration entails making full database backups on SQL Server and copying backup files to Azure Blob Storage. Log Replay Service is used to restore backup files from Azure Blob Storage to SQL Managed Instance.
- Databases being restored during the migration process will be in a restoring mode and can't be used for read or write workloads until the process is complete.

Link feature for Azure SQL Managed Instance
When to use:
- Migrate individual line-of-business application databases.
- More control is needed for database migrations.
- Minimum downtime migration is needed.
- Supported sources: SQL Server (2016 to 2019) on-premises or Azure VM, AWS EC2, GCP Compute SQL Server VM.
Considerations:
- The migration entails establishing a network connection between SQL Server and SQL Managed Instance, and opening communication ports.
- Uses Always On availability group technology to replicate the database in near real time, making an exact replica of the SQL Server database on SQL Managed Instance.
- The database can be used for read-only access on SQL Managed Instance while migration is in progress.
- Provides the best performance during migration with minimum downtime.

The following table compares the alternative migration options:

Transactional replication
When to use:
- Migrate by continuously publishing changes from source database tables to target SQL Managed Instance database tables.
- Do full or partial database migrations of selected tables (subset of a database).
- Supported sources: SQL Server (2012 to 2019) with some limitations, AWS EC2, GCP Compute SQL Server VM.
Considerations:
- Setup is relatively complex compared to other migration options.
- Provides a continuous replication option to migrate data (without taking the databases offline).
- Transactional replication has limitations to consider when you're setting up the publisher on the source SQL Server instance. See Limitations on publishing objects to learn more.
- Capability to monitor replication activity is available.

Bulk copy
When to use:
- Do full or partial data migrations.
- Can accommodate downtime.
- Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Requires downtime for exporting data from the source and importing into the target.
- The file formats and data types used in the export or import need to be consistent with table schemas.

Import Export Wizard/BACPAC
When to use:
- Migrate individual line-of-business application databases.
- Suited for smaller databases; doesn't require a separate migration service or tool.
- Supported sources: SQL Server (2005 to 2019) on-premises or Azure VM, AWS EC2, AWS RDS, GCP Compute SQL Server VM.
Considerations:
- Requires downtime because data needs to be exported at the source and imported at the destination.
- The file formats and data types used in the export or import need to be consistent with table schemas to avoid truncation or data-type mismatch errors.
- Time taken to export a database with a large number of objects can be significantly higher.

Azure Data Factory
When to use:
- Migrate and/or transform data from source SQL Server databases.
- Merging data from multiple sources of data to Azure SQL Managed Instance is typically for business intelligence (BI) workloads.
Considerations:
- Requires creating data movement pipelines in Data Factory to move data from source to destination.
- Cost is an important consideration and is based on factors like pipeline triggers, activity runs, and duration of data movement.

Feature interoperability
There are more considerations when you're migrating workloads that rely on other SQL Server features.
SQL Server Integration Services
Migrate SQL Server Integration Services (SSIS) packages and projects in SSISDB to Azure SQL Managed
Instance by using Azure Database Migration Service.
Only SSIS packages in SSISDB starting with SQL Server 2012 are supported for migration. Convert older SSIS
packages before migration. See the project conversion tutorial to learn more.
SQL Server Reporting Services
You can migrate SQL Server Reporting Services (SSRS) reports to paginated reports in Power BI. Use the RDL
Migration Tool to help prepare and migrate your reports. Microsoft developed this tool to help customers
migrate Report Definition Language (RDL) reports from their SSRS servers to Power BI. It's available on GitHub,
and it documents an end-to-end walkthrough of the migration scenario.
SQL Server Analysis Services
SQL Server Analysis Services tabular models from SQL Server 2012 and later can be migrated to Azure Analysis
Services, which is a platform as a service (PaaS) deployment model for the Analysis Services tabular model in
Azure. You can learn more about migrating on-premises models to Azure Analysis Services in this video tutorial.
Alternatively, you can consider migrating your on-premises Analysis Services tabular models to Power BI
Premium by using the new XMLA read/write endpoints.
High availability
The SQL Server high-availability features Always On failover cluster instances and Always On availability groups
become obsolete on the target SQL managed instance. High-availability architecture is already built into both
General Purpose (standard availability model) and Business Critical (premium availability model) service tiers
for SQL Managed Instance. The premium availability model also provides read scale-out that allows connecting
into one of the secondary nodes for read-only purposes.
Beyond the high-availability architecture that's included in SQL Managed Instance, the auto-failover groups
feature allows you to manage the replication and failover of databases in a managed instance to another region.
SQL Agent jobs
Use the offline Azure Database Migration Service option to migrate SQL Agent jobs. Otherwise, script the jobs in
Transact-SQL (T-SQL) by using SQL Server Management Studio and then manually re-create them on the target
SQL managed instance.

IMPORTANT
Currently, Azure Database Migration Service supports only jobs with T-SQL subsystem steps. Jobs with SSIS package
steps have to be manually migrated.

Logins and groups


Move SQL logins from the SQL Server source to Azure SQL Managed Instance by using Database Migration
Service in offline mode. Use the Select logins pane in the Migration Wizard to migrate logins to your target SQL
managed instance.
By default, Azure Database Migration Service supports migrating only SQL logins. However, you can enable the
migration of Windows logins by:
Ensuring that the target SQL managed instance has Azure Active Directory (Azure AD) read access. A user
who has the Global Administrator role can configure that access via the Azure portal.
Configuring Azure Database Migration Service to enable Windows user or group login migrations. You set
this up via the Azure portal, on the Configuration page. After you enable this setting, restart the service for
the changes to take effect.
After you restart the service, Windows user or group logins appear in the list of logins available for migration.
For any Windows user or group logins that you migrate, you're prompted to provide the associated domain
name. Service user accounts (accounts with the domain name NT AUTHORITY) and virtual user accounts
(accounts with the domain name NT SERVICE) aren't supported. To learn more, see How to migrate Windows
users and groups in a SQL Server instance to Azure SQL Managed Instance using T-SQL.
Alternatively, you can use the PowerShell utility specially designed by Microsoft data migration architects. The
utility uses PowerShell to create a T-SQL script to re-create logins and select database users from the source to
the target.
The PowerShell utility automatically maps Windows Server Active Directory accounts to Azure AD accounts, and
it can do a UPN lookup for each login against the source Active Directory instance. The utility scripts custom
server and database roles, along with role membership and user permissions. Contained databases aren't yet
supported, and only a subset of possible SQL Server permissions is scripted.
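Whichever approach you use, the generated script ultimately re-creates logins on the target managed instance with standard T-SQL. The names and password below are hypothetical and only illustrative; the Azure AD login syntax assumes the instance already has an Azure AD admin configured:

-- Re-create a SQL login on the target managed instance (hypothetical name and password).
CREATE LOGIN [app_login] WITH PASSWORD = '<StrongPassword123!>';

-- Re-create a Windows login as an Azure AD server principal (login).
CREATE LOGIN [user@contoso.com] FROM EXTERNAL PROVIDER;

-- Re-map an orphaned database user to the new login, if needed (run in the user database).
ALTER USER [app_user] WITH LOGIN = [app_login];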
Encryption
When you're migrating databases protected by Transparent Data Encryption to a managed instance by using the
native restore option, migrate the corresponding certificate from the source SQL Server instance to the target
SQL managed instance before database restore.
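As an illustration, the export on the source side is a standard BACKUP CERTIFICATE statement; the certificate name, file paths, and password below are hypothetical, and the exported .cer/.pvk pair is then uploaded to the managed instance by following the linked TDE certificate migration steps:

-- On the source SQL Server: export the TDE protector certificate and its private key.
USE master;
BACKUP CERTIFICATE TDE_Cert
    TO FILE = 'C:\certs\TDE_Cert.cer'
    WITH PRIVATE KEY (
        FILE = 'C:\certs\TDE_Cert.pvk',
        ENCRYPTION BY PASSWORD = '<StrongPassword123!>');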
System databases
Restore of system databases isn't supported. To migrate instance-level objects (stored in the master and msdb
databases), script them by using T-SQL and then re-create them on the target managed instance.
In-Memory OLTP (memory-optimized tables)
SQL Server provides an In-Memory OLTP capability. It allows usage of memory-optimized tables, memory-
optimized table types, and natively compiled SQL modules to run workloads that have high-throughput and
low-latency requirements for transactional processing.

IMPORTANT
In-Memory OLTP is supported only in the Business Critical tier in Azure SQL Managed Instance. It's not supported in the
General Purpose tier.

If you have memory-optimized tables or memory-optimized table types in your on-premises SQL Server
instance and you want to migrate to Azure SQL Managed Instance, you should either:
Choose the Business Critical tier for your target SQL managed instance that supports In-Memory OLTP.
If you want to migrate to the General Purpose tier in Azure SQL Managed Instance, remove memory-
optimized tables, memory-optimized table types, and natively compiled SQL modules that interact with
memory-optimized objects before migrating your databases. You can use the following T-SQL queries to
identify all objects that need to be removed before migration to the General Purpose tier:

SELECT * FROM sys.tables WHERE is_memory_optimized=1
SELECT * FROM sys.table_types WHERE is_memory_optimized=1
SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1

To learn more about in-memory technologies, see Optimize performance by using in-memory technologies in
Azure SQL Database and Azure SQL Managed Instance.

Advanced features
Be sure to take advantage of the advanced cloud-based features in SQL Managed Instance. For example, you
don't need to worry about managing backups because the service does it for you. You can restore to any point
in time within the retention period. Additionally, you don't need to worry about setting up high availability,
because high availability is built in.
To strengthen security, consider using Azure AD authentication, auditing, threat detection, row-level security,
and dynamic data masking.
In addition to advanced management and security features, SQL Managed Instance provides advanced tools that
can help you monitor and tune your workload. Azure SQL Analytics allows you to monitor a large set of
managed instances in a centralized way. Automatic tuning in managed instances continuously monitors
performance of your SQL plan execution and automatically fixes the identified performance problems.
Some features are available only after the database compatibility level is changed to the latest compatibility
level (150).
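For example, raising the compatibility level after migration is a single statement; the database name below is hypothetical:

-- Unlock the latest optimizer and engine features on the migrated database.
ALTER DATABASE [MyMigratedDb] SET COMPATIBILITY_LEVEL = 150;

-- Confirm the change.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'MyMigratedDb';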

Migration assets
For more assistance, see the following resources that were developed for real-world migration projects.

Data workload assessment model and tool: This tool provides suggested "best fit" target platforms, cloud
readiness, and an application/database remediation level for a workload. It offers simple, one-click calculation
and report generation that helps to accelerate large estate assessments by providing an automated and uniform
decision process for target platforms.

Utility to move on-premises SQL Server logins to Azure SQL Managed Instance: A PowerShell script can create a
T-SQL command script to re-create logins and select database users from on-premises SQL Server to Azure SQL
Managed Instance. The tool allows automatic mapping of Windows Server Active Directory accounts to Azure AD
accounts, along with optionally migrating SQL Server native logins.

Perfmon data collection automation by using Logman: You can use the Logman tool to collect Perfmon data (to
help you understand baseline performance) and get migration target recommendations. This tool uses logman.exe
to create the command that will create, start, stop, and delete performance counters set on a remote SQL
Server instance.

The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate
complex modernization for data platform migration projects to Microsoft's Azure data platform.

Next steps
To start migrating your SQL Server databases to Azure SQL Managed Instance, see the SQL Server to
Azure SQL Managed Instance migration guide.
For a matrix of services and tools that can help you with database and data migration scenarios as well as
specialty tasks, see Services and tools for data migration.
To learn more about Azure SQL Managed Instance, see:
Service tiers in Azure SQL Managed Instance
Differences between SQL Server and Azure SQL Managed Instance
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit (Preview).
For details on how to perform A/B testing at the data access layer, see Database Experimentation
Assistant.
Migration guide: SQL Server to Azure SQL
Managed Instance
7/12/2022 • 15 minutes to read

APPLIES TO: Azure SQL Managed Instance


This guide helps you migrate your SQL Server instance to Azure SQL Managed Instance.
You can migrate SQL Server running on-premises or on:
SQL Server on Virtual Machines
Amazon Web Services (AWS) EC2
Amazon Relational Database Service (AWS RDS)
Compute Engine (Google Cloud Platform - GCP)
Cloud SQL for SQL Server (Google Cloud Platform – GCP)
For more migration information, see the migration overview. For other migration guides, see Database
Migration.

Prerequisites
To migrate your SQL Server to Azure SQL Managed Instance, make sure you have:
Chosen a migration method and the corresponding tools for your method.
Installed the Azure SQL migration extension for Azure Data Studio.
Installed the Data Migration Assistant (DMA) on a machine that can connect to your source SQL Server.
Created a target Azure SQL Managed Instance
Configured connectivity and proper permissions to access both source and target.
Reviewed the SQL Server database engine features available in Azure SQL Managed Instance.

Pre-migration
After you've verified that your source environment is supported, start with the pre-migration stage. Discover all
of the existing data sources, assess migration feasibility, and identify any blocking issues that might prevent
your migration.
Discover
In the Discover phase, scan the network to identify all SQL Server instances and features used by your
organization.
Use Azure Migrate to assess migration suitability of on-premises servers, perform performance-based sizing,
and provide cost estimations for running them in Azure.
Alternatively, use the Microsoft Assessment and Planning Toolkit (the "MAP Toolkit") to assess your current IT
infrastructure. The toolkit provides a powerful inventory, assessment, and reporting tool to simplify the
migration planning process.
For more information about tools available to use for the Discover phase, see Services and tools available for
data migration scenarios.
After data sources have been discovered, assess any on-premises SQL Server instance(s) that can be migrated
to Azure SQL Managed Instance to identify migration blockers or compatibility issues. Proceed to the following
steps to assess and migrate databases to Azure SQL Managed Instance:

Assess SQL Managed Instance compatibility where you should ensure that there are no blocking issues that
can prevent your migrations. This step also includes creation of a performance baseline to determine
resource usage on your source SQL Server instance. This step is needed if you want to deploy a properly
sized managed instance and verify that performance after migration isn't affected.
Choose app connectivity options.
Deploy to an optimally sized managed instance where you'll choose technical characteristics (number of
vCores, amount of memory) and performance tier (Business Critical, General Purpose) of your managed
instance.
Select migration method and migrate where you migrate your databases using offline migration or online
migration options.
Monitor and remediate applications to ensure that you have expected performance.
Assess

NOTE
If you are assessing the entire SQL Server data estate at scale on VMware, use Azure Migrate to get Azure SQL
deployment recommendations, target sizing, and monthly estimates.

Determine whether SQL Managed Instance is compatible with the database requirements of your application.
SQL Managed Instance is designed to provide easy lift and shift migration for most existing applications that use
SQL Server. However, you may sometimes require features or capabilities that aren't yet supported and the cost
of implementing a workaround is too high.
The Azure SQL migration extension for Azure Data Studio provides a seamless wizard-based experience to
assess, get Azure recommendations, and migrate your on-premises SQL Server databases to Azure SQL Managed
Instance. Besides highlighting any migration blockers or warnings, the extension also includes an option for
Azure recommendations to collect your databases' performance data and recommend a right-sized Azure SQL
Managed Instance that meets the performance needs of your workload (with the least price).
You can also use the Data Migration Assistant (version 4.1 and later) to assess databases to get:
Azure target recommendations
Azure SKU recommendations
To assess your environment by using the Data Migration Assistant, follow these steps:
1. Open the Data Migration Assistant (DMA).
2. Select File and then choose New assessment .
3. Specify a project name, select SQL Server as the source server type, and then select Azure SQL Managed
Instance as the target server type.
4. Select the type(s) of assessment reports that you want to generate. For example, database compatibility and
feature parity. Based on the type of assessment, the permissions required on the source SQL Server can be
different. DMA will highlight the permissions required for the chosen advisor before running the assessment.
The feature parity category provides a comprehensive set of recommendations, alternatives
available in Azure, and mitigating steps to help you plan your migration project. (sysadmin
permissions required)
The compatibility issues category identifies partially supported or unsupported feature
compatibility issues that might block migration, and recommendations to address them ( CONNECT SQL ,
VIEW SERVER STATE , and VIEW ANY DEFINITION permissions required).
5. Specify the source connection details for your SQL Server and connect to the source database.
6. Select Start assessment.
7. When the process is complete, select and review the assessment reports for migration blocking and feature
parity issues. The assessment report can also be exported to a file that can be shared with other teams or
personnel in your organization.
8. Determine the database compatibility level that minimizes post-migration efforts.
9. Identify the best Azure SQL Managed Instance SKU for your on-premises workload.
To learn more, see Perform a SQL Server migration assessment with Data Migration Assistant.
If SQL Managed Instance isn't a suitable target for your workload, SQL Server on Azure VMs might be a viable
alternative target for your business.
Scaled assessments and analysis
If you have multiple servers or databases that require Azure readiness assessment, you can automate the
process by using scripts with one of the following options. To learn more about scripting, see Migrate
databases at scale using automation.
Az.DataMigration PowerShell module
az datamigration CLI extension
Data Migration Assistant command-line interface
Data Migration Assistant also supports consolidation of the assessment reports for analysis. If you have multiple
servers and databases that need to be assessed and analyzed at scale to provide a wider view of the data estate,
see the following links to learn more.
Performing scaled assessments using PowerShell
Analyzing assessment reports using Power BI

IMPORTANT
Running assessments at scale for multiple databases can also be automated using DMA's Command Line Utility which
also allows the results to be uploaded to Azure Migrate for further analysis and target readiness.

Deploy to an optimally sized managed instance


You can use the Azure SQL migration extension for Azure Data Studio to get a right-sized Azure SQL Managed
Instance recommendation. The extension collects performance data from your source SQL Server instance to
provide a right-sized Azure recommendation that meets your workload's performance needs with minimal cost.
To learn more, see Get right-sized Azure recommendation for your on-premises SQL Server database(s).
Based on the information in the discover and assess phase, create an appropriately sized target SQL Managed
Instance. You can do so by using the Azure portal, PowerShell, or an Azure Resource Manager (ARM) Template.
SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It
introduces a purchasing model that provides greater flexibility in selecting the right level of resources for your
workloads. In the on-premises world, you're probably accustomed to sizing these workloads by using physical
cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores,"
with additional storage and IO available separately. The vCore model is a simpler way to understand your
compute requirements in the cloud versus what you use on-premises today. This purchasing model enables you
to right-size your destination environment in the cloud. Some general guidelines that might help you to choose
the right service tier and characteristics are described here:
Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores
that you're using on SQL Server, keeping in mind that CPU characteristics might need to be scaled to match
VM characteristics where the managed instance is installed.
Based on the baseline memory usage, choose the service tier that has matching memory. The amount of
memory can't be directly chosen, so you would need to select the managed instance with the amount of
vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
Based on the baseline IO latency of the file subsystem, choose between the General Purpose (latency greater
than 5 ms) and Business Critical (latency less than 3 ms) service tiers.
Based on baseline throughput, pre-allocate the size of data or log files to get expected IO performance.
You can choose compute and storage resources at deployment time and then change them afterward by using the
Azure portal, without introducing downtime for your application:

To learn how to create the VNet infrastructure and a managed instance, see Create a managed instance.

IMPORTANT
It is important to keep your destination VNet and subnet in accordance with managed instance VNet requirements. Any
incompatibility can prevent you from creating new instances or using those that you already created. Learn more about
creating new and configuring existing networks.

Migrate
After you have completed tasks associated with the Pre-migration stage, you're ready to perform the schema and
data migration.
Migrate your data using your chosen migration method.
SQL Managed Instance targets user scenarios requiring mass database migration from on-premises or Azure
VM database implementations. It's the optimal choice when you need to lift and shift the back end of the
applications that regularly use instance level and/or cross-database functionalities. If this is your scenario, you
can move an entire instance to a corresponding environment in Azure without the need to rearchitect your
applications.
To move SQL instances, you need to plan carefully:
The migration of all databases that need to be collocated (ones running on the same instance).
The migration of instance-level objects that your application depends on, including logins, credentials, SQL
Agent jobs and operators, and server-level triggers.
SQL Managed Instance is a managed service that allows you to delegate some of the regular DBA activities to
the platform as they're built in. Therefore, some instance-level data doesn't need to be migrated, such as
maintenance jobs for regular backups or Always On configuration, as high availability is built in.
This guide covers the two most popular migration options:
Azure SQL migration extension for Azure Data Studio - migration with near-zero downtime.
Native RESTORE DATABASE FROM URL - uses native backups from SQL Server and requires some downtime.
For other migration tools, see Compare migration options.
Migrate using the Azure SQL migration extension for Azure Data Studio (minimal downtime )
To perform a minimal downtime migration using Azure Data Studio, follow the high-level steps below. For a
detailed step-by-step tutorial, see Migrate SQL Server to an Azure SQL Managed Instance online using Azure
Data Studio:
1. Download and install Azure Data Studio and the Azure SQL migration extension.
2. Launch the Migrate to Azure SQL wizard in the extension in Azure Data Studio.
3. Select databases for assessment and view migration readiness or issues (if any). Additionally, collect
performance data and get right-sized Azure recommendation.
4. Select your Azure account and your target Azure SQL Managed Instance from your subscription.
5. Select the location of your database backups. Your database backups can either be located on an on-premises
network share or in an Azure storage blob container.
6. Create a new Azure Database Migration Service using the wizard in Azure Data Studio. If you've previously
created an Azure Database Migration Service using Azure Data Studio, you can reuse the same if desired.
7. Optional: If your backups are on an on-premises network share, download and install self-hosted integration
runtime on a machine that can connect to the source SQL Server, and the location containing the backup
files.
8. Start the database migration and monitor the progress in Azure Data Studio. You can also monitor the
progress under the Azure Database Migration Service resource in Azure portal.
9. Complete the cutover.
a. Stop all incoming transactions to the source database.
b. Make application configuration changes to point to the target database in Azure SQL Managed
Instance.
c. Take any tail log backups for the source database in the backup location specified.
d. Ensure all database backups have the status Restored in the monitoring details page.
e. Select Complete cutover in the monitoring details page.
Backup and restore
One of the key capabilities of Azure SQL Managed Instance to enable quick and easy database migration is the
native restore of database backup (.bak) files stored on Azure Storage. Backing up and restoring are
asynchronous operations based on the size of your database.
The following diagram provides a high-level overview of the process:

NOTE
The time to take the backup, upload it to Azure storage, and perform a native restore operation to Azure SQL Managed
Instance is based on the size of the database. Factor a sufficient downtime to accommodate the operation for large
databases.

The following table provides more information regarding the methods you can use depending on source SQL
Server version you're running:

Put backup to Azure Storage:
- Prior to 2012 SP1 CU2: Upload the .bak file directly to Azure Storage.
- 2012 SP1 CU2 to 2016: Direct backup using the deprecated WITH CREDENTIAL syntax.
- 2016 and above: Direct backup using WITH SAS CREDENTIAL.

Restore from Azure Storage to a managed instance: RESTORE FROM URL with SAS CREDENTIAL.

IMPORTANT
When you're migrating a database protected by Transparent Data Encryption to a managed instance by using the
native restore option, the corresponding certificate from the on-premises or Azure VM SQL Server needs to be
migrated before database restore. For detailed steps, see Migrate a TDE cert to a managed instance.
Restore of system databases is not supported. To migrate instance-level objects (stored in the master or msdb
databases), we recommend scripting them out and running the T-SQL scripts on the destination instance.

To migrate using backup and restore, follow these steps:


1. Back up your database to Azure Blob storage. For example, use backup to URL in SQL Server Management
Studio (a backup sketch follows these steps). Use the Microsoft Azure Tool to support databases earlier than
SQL Server 2012 SP1 CU2.
2. Connect to your Azure SQL Managed Instance using SQL Server Management Studio.
3. Create a credential using a Shared Access Signature to access your Azure Blob storage account with your
database backups. For example:

CREATE CREDENTIAL [https://mitutorials.blob.core.windows.net/databases]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE'
, SECRET = 'sv=2017-11-09&ss=bfqt&srt=sco&sp=rwdlacup&se=2028-09-06T02:52:55Z&st=2018-09-04T18:52:55Z&spr=https&sig=WOTiM%2FS4GVF%2FEEs9DGQR9Im0W%2BwndxW2CQ7%2B5fHd7Is%3D'

4. Restore the backup from the Azure storage blob container. For example:

RESTORE DATABASE [TargetDatabaseName] FROM URL =
  'https://mitutorials.blob.core.windows.net/databases/WideWorldImporters-Standard.bak'

5. Once restore completes, view the database in Object Explorer within SQL Server Management Studio.
To learn more about this migration option, see Restore a database to Azure SQL Managed Instance with SSMS.
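As a sketch of step 1 for SQL Server 2016 and later, the source database can be backed up directly to the same blob container referenced by the SAS credential created in step 3; the database name here is hypothetical:

-- On the source SQL Server: back up directly to the blob container used by the SAS credential.
BACKUP DATABASE [SourceDatabaseName]
TO URL = 'https://mitutorials.blob.core.windows.net/databases/SourceDatabaseName.bak'
WITH COMPRESSION, CHECKSUM;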

NOTE
A database restore operation is asynchronous and can be retried. You might get an error in SQL Server Management
Studio if the connection breaks or a time-out expires. Azure SQL Managed Instance will keep trying to restore the
database in the background, and you can track the progress of the restore using the sys.dm_exec_requests and sys.dm_operation_status
views.
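For example, the following queries (hypothetical database name) can be run in a separate session on the managed instance to watch restore progress:

-- Progress of the active restore request.
SELECT percent_complete, start_time, command, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE 'RESTORE%';

-- Operation status as reported by the instance (query from the master database).
SELECT major_resource_id, operation, state_desc, percent_complete, start_time
FROM sys.dm_operation_status
WHERE major_resource_id = 'TargetDatabaseName';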

Data sync and cutover


When using migration options that continuously replicate / sync data changes from source to the target, the
source data and schema can change and drift from the target. During data sync, ensure that all changes on the
source are captured and applied to the target during the migration process.
After you verify that data is the same on both source and target, you can cut over from the source to the target
environment. It's important to plan the cutover process with business and application teams so that the brief
interruption during cutover doesn't affect business continuity.

IMPORTANT
For details on the specific steps associated with performing a cutover as part of migrations using DMS, see Performing
migration cutover.

Post-migration
After you've successfully completed the migration stage, go through a series of post-migration tasks to ensure
that everything is functioning smoothly and efficiently.
The post-migration phase is crucial for reconciling any data accuracy issues, verifying completeness, and
addressing performance issues with the workload.
Monitor and remediate applications
Once you've completed the migration to a managed instance, you should track the application behavior and
performance of your workload. This process includes the following activities:
Compare performance of the workload running on the managed instance with the performance baseline that
you created on the source SQL Server instance.
Continuously monitor performance of your workload to identify potential issues and areas for improvement.
Perform tests
The test approach for database migration consists of the following activities:
1. Develop validation tests: To test the database migration, you need to use SQL queries. You must create the
validation queries to run against both the source and the target databases. Your validation queries should
cover the scope you've defined.
2. Set up a test environment: The test environment should contain a copy of the source database and the
target database. Be sure to isolate the test environment.
3. Run validation tests: Run the validation tests against the source and the target, and then analyze the
results.
4. Run performance tests: Run performance tests against the source and the target, and then analyze and
compare the results.

Use advanced features


You can take advantage of the advanced cloud-based features offered by SQL Managed Instance, such as built-in
high availability, threat detection, and monitoring and tuning your workload.
Azure SQL Analytics allows you to monitor a large set of managed instances in a centralized manner.
Some SQL Server features are only available once the database compatibility level is changed to the latest
compatibility level (150).

Next steps
See Service and tools for data migration for a matrix of the Microsoft and third-party services and tools
that are available to assist you with various database and data migration scenarios as well as specialty
tasks.
To learn more about Azure SQL Managed Instance see:
Service Tiers in Azure SQL Managed Instance
Differences between SQL Server and Azure SQL Managed Instance
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see:
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit (Preview).
For details on how to perform data access layer A/B testing, see Database Experimentation Assistant.
Migration performance: SQL Server to Azure SQL
Managed Instance performance baseline
7/12/2022 • 4 minutes to read

APPLIES TO: Azure SQL Managed Instance


Create a performance baseline to compare the performance of your workload on a SQL Managed Instance with
your original workload running on SQL Server.

Create a baseline
Ideally, performance is similar or better after migration, so it is important to measure and record baseline
performance values on the source and then compare them to the target environment. A performance baseline is
a set of parameters that define your average workload on your source.
Select a set of queries that are important to, and representative of your business workload. Measure and
document the min/average/max duration and CPU usage for these queries, as well as performance metrics on
the source server, such as average/max CPU usage, average/max disk IO latency, throughput, IOPS, average/max
page life expectancy, and average/max size of tempdb.
The following resources can help define a performance baseline:
Monitor CPU usage
Monitor memory usage and determine the amount of memory used by different components such as buffer
pool, plan cache, column-store pool, In-Memory OLTP, etc. In addition, you should find average and peak
values of the Page Life Expectancy memory performance counter.
Monitor disk IO usage on the source SQL Server instance using the sys.dm_io_virtual_file_stats view
or performance counters.
Monitor workload and query performance by examining Dynamic Management Views (or Query Store if you
are migrating from SQL Server 2016 and later). Identify average duration and CPU usage of the most
important queries in your workload.
Any performance issues on the source SQL Server should be addressed prior to migration. Migrating known
issues to any new system might cause unexpected results and invalidate any performance comparison.
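A minimal sketch of capturing part of this baseline on the source instance is shown below; the exact metrics and sampling interval depend on your workload, and the query below is only illustrative:

-- Page Life Expectancy (buffer-pool memory pressure indicator).
SELECT cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';

-- Top 10 queries by CPU, with average duration, to include in the query baseline.
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time  / qs.execution_count AS avg_cpu_microseconds,
       qs.total_elapsed_time / qs.execution_count AS avg_duration_microseconds,
       SUBSTRING(st.text, 1, 200)                 AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;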

Compare performance
After you have defined a baseline, compare similar workload performance on the target SQL Managed Instance.
For accuracy, it is important that the SQL Managed Instance environment is comparable to the SQL Server
environment as much as possible.
There are SQL Managed Instance infrastructure differences that make matching performance exactly unlikely.
Some queries may run faster than expected, while others may be slower. The goal of this comparison is to verify
that workload performance in the managed instance matches the performance on SQL Server (on average) and
to identify any critical queries whose performance doesn't match your original performance.
Performance comparison is likely to result in the following outcomes:
Workload performance on the managed instance is aligned or better than the workload performance on
your source SQL Server. In this case, you have successfully confirmed that migration is successful.
The majority of performance parameters and queries in the workload perform as expected, with some
exceptions resulting in degraded performance. In this case, identify the differences and their importance.
If there are some important queries with degraded performance, investigate whether the underlying SQL
plans have changed or whether queries are hitting resource limits. You can mitigate this by applying
some hints on critical queries (for example, change the compatibility level or the legacy cardinality
estimator) either directly or by using plan guides, as shown in the sketch after this list. Ensure statistics
and indexes are up to date and equivalent in both environments.
Most queries are slower on a managed instance compared to your source SQL Server instance. In this
case, try to identify the root causes of the difference, such as reaching a resource limit for IO,
memory, or instance log rate. If there are no resource limits causing the difference, try changing the
compatibility level of the database or change database settings like legacy cardinality estimation and
rerun the test. Review the recommendations provided by the managed instance or Query Store views to
identify the queries with regressed performance.
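A sketch of the two common mitigations mentioned above follows; the query, table, and database names are hypothetical:

-- Query-level hint on a regressed query (no change to database settings).
SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE OrderDate >= '2021-01-01'
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

-- Database-wide setting, if many queries are affected (run in the affected database).
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;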
SQL Managed Instance has a built-in automatic plan correction feature that is enabled by default. This feature
ensures that queries that worked fine in the past do not degrade in the future. If this feature is not enabled, run
the workload with the old settings so SQL Managed Instance can learn the performance baseline. Then, enable
the feature and run the workload again with the new settings.
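As a minimal sketch, enabling the feature and reviewing what it acts on can be done with the following statements, run in the migrated database:

-- Turn on automatic plan correction (already on by default in SQL Managed Instance).
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review the plan-regression recommendations the feature is acting on.
SELECT name, reason, state, details
FROM sys.dm_db_tuning_recommendations;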
Make changes in the parameters of your test or upgrade to higher service tiers to reach the optimal
configuration for the workload performance that fits your needs.

Monitor performance
SQL Managed Instance provides advanced tools for monitoring and troubleshooting, and you should use them
to monitor performance on your instance. Some of the key metrics to monitor are:
CPU usage on the instance to determine if the number of vCores that you provisioned is the right match for
your workload.
Page life expectancy on your managed instance to determine if you need additional memory.
Wait statistics like INSTANCE_LOG_GOVERNOR or PAGEIOLATCH that identify storage IO issues, especially on the
General Purpose tier, where you might need to pre-allocate files to get better IO performance.
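As a hedged example, the wait types associated with these symptoms can be checked directly; the LIKE filters below are intentionally broad because exact wait names can vary by version:

-- Storage and log-rate related waits accumulated since the last restart.
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
   OR wait_type LIKE 'INSTANCE_LOG%'
ORDER BY wait_time_ms DESC;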

Considerations
When comparing performance, consider the following:
Settings match between source and target. Validate that various instance, database, and tempdb settings
are equivalent between the two environments. Differences in configuration, compatibility levels,
encryption settings, trace flags etc., can all skew performance.
Storage is configured according to best practices. For example, for General Purpose, you may need to
pre-allocate the size of the files to improve performance.
There are key environment differences that might cause the performance differences between a managed
instance and SQL Server. Identify risks relevant to your environment that might contribute to a
performance issue.
Query store and automatic tuning should be enabled on your SQL Managed Instance as they help you
measure workload performance and automatically mitigate potential performance issues.

Next steps
For more information to optimize your new Azure SQL Managed Instance environment, see the following
resources:
How to identify why workload performance on Azure SQL Managed Instance is different than SQL Server?
Key causes of performance differences between SQL Managed Instance and SQL Server
Storage performance best practices and considerations for Azure SQL Managed Instance (General Purpose)
Real-time performance monitoring for Azure SQL Managed Instance
Assessment rules for SQL Server to Azure SQL
Managed Instance migration
7/12/2022 • 20 minutes to read

APPLIES TO: Azure SQL Managed Instance


Migration tools validate your source SQL Server instance by running a number of assessment rules. The rules
identify issues that must be addressed before migrating your SQL Server database to Azure SQL Managed
Instance.
This article provides a list of the rules used to assess the feasibility of migrating your SQL Server database to
Azure SQL Managed Instance.

Rules Summary
Each rule is listed as: Rule title (Level, Category): Details.

AnalysisCommandJob (Instance, Warning): AnalysisCommand job step isn't supported in Azure SQL Managed
Instance.
AnalysisQueryJob (Instance, Warning): AnalysisQuery job step isn't supported in Azure SQL Managed Instance.
AssemblyFromFile (Database, Issue): 'CREATE ASSEMBLY' and 'ALTER ASSEMBLY' with a file parameter are
unsupported in Azure SQL Managed Instance.
BulkInsert (Database, Issue): BULK INSERT with non-Azure blob data source isn't supported in Azure SQL
Managed Instance.
ClrStrictSecurity (Database, Warning): CLR assemblies marked as SAFE or EXTERNAL_ACCESS are considered
UNSAFE.
ComputeClause (Database, Warning): COMPUTE clause is no longer supported and has been removed.
CryptographicProvider (Database, Issue): A use of CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC
PROVIDER was found. This isn't supported in Azure SQL Managed Instance.
DatabasePrincipalAlias (Database, Issue): SYS.DATABASE_PRINCIPAL_ALIASES is no longer supported and has been
removed.
DbCompatLevelLowerThan100 (Database, Warning): Database compatibility level below 100 isn't supported.
DisableDefCNSTCHK (Database, Issue): SET option DISABLE_DEF_CNST_CHK is no longer supported and has been
removed.
FastFirstRowHint (Database, Warning): FASTFIRSTROW query hint is no longer supported and has been removed.
FileStream (Database, Issue): Filestream and Filetable are not supported in Azure SQL Managed Instance.
LinkedServerWithNonSQLProvider (Database, Issue): Linked server with non-SQL Server Provider isn't supported
in Azure SQL Managed Instance.
MergeJob (Instance, Warning): Merge job step isn't supported in Azure SQL Managed Instance.
MIDatabaseSize (Database, Issue): Azure SQL Managed Instance does not support database size greater than
8 TB.
MIHeterogeneousMSDTCTransactSQL (Database, Issue): BEGIN DISTRIBUTED TRANSACTION with non-SQL Server remote
server isn't supported in Azure SQL Managed Instance.
MIHomogeneousMSDTCTransactSQL (Database, Issue): BEGIN DISTRIBUTED TRANSACTION is supported across multiple
servers for Azure SQL Managed Instance.
MIInstanceSize (Instance, Warning): Maximum instance storage size in Azure SQL Managed Instance cannot be
greater than 8 TB.
MultipleLogFiles (Database, Issue): Azure SQL Managed Instance does not support databases with multiple log
files.
NextColumn (Database, Issue): Tables and columns named NEXT will lead to an error in Azure SQL Managed
Instance.
NonANSILeftOuterJoinSyntax (Database, Warning): Non-ANSI style left outer join is no longer supported and has
been removed.
NonANSIRightOuterJoinSyntax (Database, Warning): Non-ANSI style right outer join is no longer supported and
has been removed.
NumDbExceeds100 (Instance, Warning): Azure SQL Managed Instance supports a maximum of 100 databases per
instance.
OpenRowsetWithNonBlobDataSourceBulk (Database, Issue): OpenRowSet used in bulk operation with non-Azure blob
storage data source isn't supported in Azure SQL Managed Instance.
OpenRowsetWithNonSQLProvider (Database, Issue): OpenRowSet with non-SQL provider isn't supported in Azure SQL
Managed Instance.
PowerShellJob (Instance, Warning): PowerShell job step isn't supported in Azure SQL Managed Instance.
QueueReaderJob (Instance, Warning): Queue Reader job step isn't supported in Azure SQL Managed Instance.
RAISERROR (Database, Warning): Legacy style RAISERROR calls should be replaced with modern equivalents.
SqlMail (Database, Warning): SQL Mail is no longer supported.
SystemProcedures110 (Database, Warning): Detected statements that reference removed system stored procedures
that are not available in Azure SQL Managed Instance.
TraceFlags (Instance, Warning): Trace flags not supported in Azure SQL Managed Instance were found.
TransactSqlJob (Instance, Warning): TSQL job step includes unsupported commands in Azure SQL Managed
Instance.
WindowsAuthentication (Instance, Warning): Database users mapped with Windows authentication (integrated
security) are not supported in Azure SQL Managed Instance.
XpCmdshell (Database, Issue): xp_cmdshell is not supported in Azure SQL Managed Instance.

AnalysisCommand job
Title: AnalysisCommand job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
An AnalysisCommand job step runs an Analysis Services command. This job step type is not supported in Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using Analysis Service Command job step and
evaluate if the job step or the impacted object can be removed. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance

AnalysisQuery job
Title: AnalysisQuery job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
An AnalysisQuery job step runs an Analysis Services query. This job step type is not supported in Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using Analysis Service Query job step and
evaluate if the job step or the impacted object can be removed. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance

Assembly from file


Title: 'CREATE ASSEMBLY' and 'ALTER ASSEMBLY' with a file parameter are unsupported in Azure SQL Managed Instance.
Category: Issue
Description
Azure SQL Managed Instance does not support 'CREATE ASSEMBLY' or 'ALTER ASSEMBLY' with a file parameter.
A binary parameter is supported. See the Impacted Objects section for the specific object where the file
parameter is used.
Recommendation
Review objects using 'CREATE ASSEMBLY' or 'ALTER ASSEMBLY' with a file parameter. For any such objects that are
required, convert the file parameter to a binary parameter. Alternatively, migrate to SQL Server on Azure Virtual
Machine.
More information: CLR differences in Azure SQL Managed Instance

Bulk insert
Title: BULK INSERT with non-Azure blob data source is not supported in Azure SQL Managed Instance.
Category: Issue
Description
Azure SQL Managed Instance cannot access file shares or Windows folders. See the "Impacted Objects" section
for the specific uses of BULK INSERT statements that do not reference an Azure blob. Objects with 'BULK INSERT'
where the source is not Azure blob storage will not work after migrating to Azure SQL Managed Instance.
Recommendation
You will need to convert BULK INSERT statements that use local files or file shares to use files from Azure blob
storage instead, when migrating to Azure SQL Managed Instance.
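A hedged sketch of the Azure Blob Storage pattern follows; the credential, storage account, container, table, and file names are all placeholders, and a database master key must already exist before the scoped credential can be created:

CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading ?>';

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://<storageaccount>.blob.core.windows.net/<container>',
      CREDENTIAL = MyAzureBlobCredential);

-- BULK INSERT now reads from blob storage instead of a local file or file share.
BULK INSERT dbo.MyTable
FROM 'data/myfile.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);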
More information: Bulk Insert and OPENROWSET differences in Azure SQL Managed Instance

CLR Security
Title: CLR assemblies marked as SAFE or EXTERNAL_ACCESS are considered UNSAFE
Category: Warning
Description
CLR Strict Security mode is enforced in Azure SQL Managed Instance. This mode is enabled by default and
introduces breaking changes for databases containing user-defined CLR assemblies marked either SAFE or
EXTERNAL_ACCESS.
Recommendation
CLR uses Code Access Security (CAS) in the .NET Framework, which is no longer supported as a security
boundary. Beginning with SQL Server 2017 (14.x) database engine, an sp_configure option called clr strict
security is introduced to enhance the security of CLR assemblies. Clr strict security is enabled by default, and
treats SAFE and EXTERNAL_ACCESS CLR assemblies as if they were marked UNSAFE. When clr strict security is
disabled, a CLR assembly created with PERMISSION_SET = SAFE may be able to access external system
resources, call unmanaged code, and acquire sysadmin privileges. After enabling strict security, any assemblies
that are not signed will fail to load. Also, if a database has SAFE or EXTERNAL_ACCESS assemblies, RESTORE or
ATTACH DATABASE statements can complete, but the assemblies may fail to load. To load the assemblies, you
must either alter or drop and recreate each assembly so that it is signed with a certificate or asymmetric key that
has a corresponding login with the UNSAFE ASSEMBLY permission on the server.
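As a first step, you can inventory the assemblies that will be affected; the query below is a minimal, read-only sketch:

-- List user-defined CLR assemblies and their declared permission sets.
SELECT name, permission_set_desc, clr_name
FROM sys.assemblies
WHERE is_user_defined = 1;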
More information: CLR strict security

Compute clause
Title: COMPUTE clause is no longer supported and has been removed.
Category: Warning
Description
The COMPUTE clause generates totals that appear as additional summary columns at the end of the result set.
However, this clause is no longer supported in Azure SQL Managed Instance.
Recommendation
The T-SQL module needs to be rewritten using the ROLLUP operator instead. The code below demonstrates
how COMPUTE can be replaced with ROLLUP:

USE AdventureWorks;
GO

SELECT SalesOrderID, UnitPrice, UnitPriceDiscount
FROM Sales.SalesOrderDetail
ORDER BY SalesOrderID
COMPUTE SUM(UnitPrice), SUM(UnitPriceDiscount) BY SalesOrderID;
GO

SELECT SalesOrderID, UnitPrice, UnitPriceDiscount,
       SUM(UnitPrice) AS UnitPrice,
       SUM(UnitPriceDiscount) AS UnitPriceDiscount
FROM Sales.SalesOrderDetail
GROUP BY SalesOrderID, UnitPrice, UnitPriceDiscount WITH ROLLUP;

More information: Discontinued Database Engine Functionality in SQL Server

Cryptographic provider
Title: A use of CREATE CRYPTOGRAPHIC PROVIDER or ALTER CRYPTOGRAPHIC PROVIDER was found, which is not supported in Azure SQL Managed Instance.
Category: Issue
Description
Azure SQL Managed Instance does not support CRYPTOGRAPHIC PROVIDER statements because it cannot
access files. See the Impacted Objects section for the specific uses of CRYPTOGRAPHIC PROVIDER statements.
Objects with 'CREATE CRYPTOGRAPHIC PROVIDER' or 'ALTER CRYPTOGRAPHIC PROVIDER' will not work
correctly after migrating to Azure SQL Managed Instance.
Recommendation
Review objects with 'CREATE CRYPTOGRAPHIC PROVIDER' or 'ALTER CRYPTOGRAPHIC PROVIDER'. In any such
objects that are required, remove the uses of these features. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Cryptographic provider differences in Azure SQL Managed Instance

Database compatibility
Title: Database compatibility level below 100 is not supported.
Category: Warning
Description
Database Compatibility Level is a valuable tool to assist in database modernization, by allowing the SQL Server
Database Engine to be upgraded, while keeping connecting applications functional status by maintaining the
same pre-upgrade Database Compatibility Level. Azure SQL Managed Instance doesn't support compatibility
levels below 100. When the database with compatibility level below 100 is restored on Azure SQL Managed
Instance, the compatibility level is upgraded to 100.
Recommendation
Evaluate if the application functionality is intact when the database compatibility level is upgraded to 100 on
Azure SQL Managed Instance. Alternatively, migrate to SQL Server on Azure Virtual Machine.
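A minimal sketch of how to check the current levels and, if you want to test ahead of migration, raise a database to level 100 (the database name is a placeholder):

-- Check current compatibility levels across databases.
SELECT name, compatibility_level
FROM sys.databases;

-- Raise a database to level 100 before migration to validate application behavior.
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;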
More information: Supported compatibility levels in Azure SQL Managed Instance

Database principal alias


Title: SYS.DATABASE_PRINCIPAL_ALIASES is no longer supported and has been removed.
Category: Issue
Description
SYS.DATABASE_PRINCIPAL_ALIASES is no longer supported and has been removed in Azure SQL Managed
Instance.
Recommendation
Use roles instead of aliases.
More information: Discontinued Database Engine Functionality in SQL Server

DISABLE_DEF_CNST_CHK option
Title: SET option DISABLE_DEF_CNST_CHK is no longer supported and has been removed.
Category: Issue
Description
SET option DISABLE_DEF_CNST_CHK is no longer supported and has been removed in Azure SQL Managed
Instance.
More information: Discontinued Database Engine Functionality in SQL Server

FASTFIRSTROW hint
Title: FASTFIRSTROW query hint is no longer supported and has been removed.
Category: Warning
Description
FASTFIRSTROW query hint is no longer supported and has been removed in Azure SQL Managed Instance.
Recommendation
Instead of the FASTFIRSTROW query hint, use OPTION (FAST n).
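A minimal before/after sketch (table and column names are illustrative, taken from the AdventureWorks sample):

-- Before (removed): SELECT ProductID, Name FROM Production.Product WITH (FASTFIRSTROW);

-- After: the FAST n query option provides equivalent behavior.
SELECT ProductID, Name
FROM Production.Product
OPTION (FAST 1);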
More information: Discontinued Database Engine Functionality in SQL Server

FileStream
Title: Filestream and Filetable are not supported in Azure SQL Managed Instance.
Category: Issue
Description
The Filestream feature, which allows you to store unstructured data such as text documents, images, and videos
in NTFS file system, is not supported in Azure SQL Managed Instance. This database can't be migrated as
the backup containing Filestream filegroups can't be restored on Azure SQL Managed Instance.
Recommendation
Upload the unstructured files to Azure Blob storage and store metadata related to these files (name, type, URL
location, storage key etc.) in Azure SQL Managed Instance. You may have to re-engineer your application to
enable streaming blobs to and from Azure SQL Managed Instance. Alternatively, migrate to SQL Server on
Azure Virtual Machine.
More information: Streaming Blobs To and From SQL Azure blog
Heterogeneous MS DTC
Title: BEGIN DISTRIBUTED TRANSACTION with non-SQL Server remote server is not supported in Azure SQL Managed Instance.
Category: Issue
Description
Distributed transaction started by Transact SQL BEGIN DISTRIBUTED TRANSACTION and managed by Microsoft
Distributed Transaction Coordinator (MS DTC) is not supported in Azure SQL Managed Instance if the remote
server is not SQL Server.
Recommendation
Review impacted objects section in Azure Migrate to see all objects using BEGIN DISTRIBUTED TRANSACTION.
Consider migrating the participant databases to Azure SQL Managed Instance where distributed transactions
across multiple instances are supported (Currently in preview). Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Transactions across multiple servers for Azure SQL Managed Instance

Homogenous MS DTC
Title: BEGIN DISTRIBUTED TRANSACTION is supported across multiple servers for Azure SQL Managed Instance.
Category: Issue
Description
Distributed transaction started by Transact SQL BEGIN DISTRIBUTED TRANSACTION and managed by Microsoft
Distributed Transaction Coordinator (MS DTC) is supported across multiple servers for Azure SQL Managed
Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all objects using BEGIN DISTRIBUTED TRANSACTION.
Consider migrating the participant databases to Azure SQL Managed Instance where distributed transactions
across multiple instances are supported (Currently in preview). Alternatively, migrate to SQL Server on Azure
Virtual Machine.
More information: Transactions across multiple servers for Azure SQL Managed Instance

Linked server (non-SQL provider)


Title: Linked server with non-SQL Server Provider is not supported in Azure SQL Managed Instance.
Category: Issue
Description
Linked servers enable the SQL Server Database Engine to execute commands against OLE DB data sources
outside of the instance of SQL Server. Linked server with non-SQL Server Provider is not supported in Azure
SQL Managed Instance.
Recommendation
Azure SQL Managed Instance does not support linked server functionality if the remote server provider is non-
SQL Server like Oracle, Sybase etc.
The following actions are recommended to eliminate the need for linked servers:
Identify the dependent database(s) from remote non-SQL servers and consider moving these into the
database being migrated.
Migrate the dependent database(s) to supported targets like SQL Managed Instance, SQL Database, Azure
Synapse SQL and SQL Server instances.
Consider creating a linked server between Azure SQL Managed Instance and SQL Server on an Azure Virtual
Machine (SQL VM), and then from the SQL VM create a linked server to Oracle, Sybase, and so on. This approach
involves two hops, but it can be used as a temporary workaround.
Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: Linked Server differences in Azure SQL Managed Instance

Merge job
Title: Merge job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
A Merge job step activates the replication Merge Agent. The Replication Merge Agent is a utility executable that
applies the initial snapshot held in the database tables to the Subscribers. It also merges incremental data
changes that occurred at the Publisher after the initial snapshot was created, and reconciles conflicts either
according to the rules you configure or using a custom resolver you create. Merge job step is not supported in
Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using Merge job step and evaluate if the job
step or the impacted object can be removed. Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance

MI database size
Title: Azure SQL Managed Instance does not support database size greater than 8 TB.
Category: Issue
Description
The size of the database is greater than maximum instance reserved storage. This database can't be selected
for migration as the size exceeded the allowed limit.
Recommendation
Evaluate if the data can be archived, compressed, or sharded into multiple databases. Alternatively, migrate to
SQL Server on Azure Virtual Machine.
More information: Hardware characteristics of Azure SQL Managed Instance

MI instance size
Title: Maximum instance storage size in Azure SQL Managed Instance cannot be greater than 8 TB.
Category: Warning
Description
The size of all databases is greater than maximum instance reserved storage.
Recommendation
Consider migrating the databases to different Azure SQL Managed Instances or to SQL Server on Azure Virtual
Machine if all the databases must exist on the same instance.
More information: Hardware characteristics of Azure SQL Managed Instance
Multiple log files
Title: Azure SQL Managed Instance does not support multiple log files.
Category: Issue
Description
SQL Server allows a database to log to multiple files. This database has multiple log files, which is not supported
in Azure SQL Managed Instance. This database can't be migrated as the backup can't be restored on Azure
SQL Managed Instance.
Recommendation
Azure SQL Managed Instance supports only a single log per database. You need to delete all but one of the log
files before migrating this database to Azure:

ALTER DATABASE [database_name] REMOVE FILE [log_file_name]

More information: Unsupported database options in Azure SQL Managed Instance

Next column
Title: Tables and columns named NEXT will lead to an error in Azure SQL Managed Instance.
Category: Issue
Description
Tables or columns named NEXT were detected. Sequences, introduced in Microsoft SQL Server, use the ANSI
standard NEXT VALUE FOR function. Tables or columns named NEXT, combined with a column aliased as VALUE
where the ANSI standard AS keyword is omitted, can cause an error.
Recommendation
Rewrite statements to include the ANSI standard AS keyword when aliasing a table or column. For example,
when a column is named NEXT and that column is aliased as VALUE, the query SELECT NEXT VALUE FROM
TABLE will cause an error and should be rewritten as SELECT NEXT AS VALUE FROM TABLE. Similarly, for a table
named NEXT and aliased as VALUE, the query SELECT Col1 FROM NEXT VALUE will cause an error and should
be rewritten as SELECT Col1 FROM NEXT AS VALUE.
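The following minimal repro, using placeholder object names, illustrates the fix:

CREATE TABLE [NEXT] (Col1 int);

-- Fails: parsed as the sequence syntax NEXT VALUE FOR.
-- SELECT Col1 FROM NEXT VALUE;

-- Works: the ANSI AS keyword makes VALUE an alias for the table named NEXT.
SELECT Col1 FROM [NEXT] AS [VALUE];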

Non-ANSI style left outer join


Title: Non-ANSI style left outer join is no longer supported and has been removed.
Category: Warning
Description
Non-ANSI style left outer join is no longer supported and has been removed in Azure SQL Managed Instance.
Recommendation
Use ANSI join syntax.
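A minimal sketch, with placeholder table and column names, of converting the removed *= operator to ANSI syntax:

-- Old, removed syntax:
--   SELECT o.OrderID, c.CompanyName
--   FROM dbo.Orders o, dbo.Customers c
--   WHERE o.CustomerID *= c.CustomerID;

-- ANSI equivalent:
SELECT o.OrderID, c.CompanyName
FROM dbo.Orders AS o
LEFT OUTER JOIN dbo.Customers AS c
    ON o.CustomerID = c.CustomerID;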
More information: Discontinued Database Engine Functionality in SQL Server

Non-ANSI style right outer join


Title: Non-ANSI style right outer join is no longer supported and has been removed.
Category: Warning
Description
Non-ANSI style right outer join is no longer supported and has been removed in Azure SQL Managed Instance.
Recommendation
Use ANSI join syntax.
More information: Discontinued Database Engine Functionality in SQL Server

Databases exceed 100


Title: Azure SQL Managed Instance supports a maximum of 100 databases per instance.
Category: Warning
Description
Maximum number of databases supported in Azure SQL Managed Instance is 100, unless the instance storage
size limit has been reached.
Recommendation
Consider migrating the databases to different Azure SQL Managed Instances or to SQL Server on Azure Virtual
Machine if all the databases must exist on the same instance.
More information: Azure SQL Managed Instance Resource Limits

OPENROWSET (non-blob data source)


Title: OpenRowSet used in bulk operation with non-Azure Blob Storage data source is not supported in Azure SQL Managed Instance.
Category: Issue
Description
OPENROWSET supports bulk operations through a built-in BULK provider that enables data from a file to be
read and returned as a rowset. OPENROWSET with non-Azure blob storage data source is not supported in
Azure SQL Managed Instance.
Recommendation
Azure SQL Managed Instance cannot access file shares and Windows folders, so the files must be imported from
Azure blob storage. Therefore, only blob type DATASOURCE is supported in OPENROWSET function.
Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: Bulk Insert and OPENROWSET differences in Azure SQL Managed Instance

OPENROWSET (non-SQL provider)


Title: OpenRowSet with non-SQL provider is not supported in Azure SQL Managed Instance.
Category: Issue
Description
This method is an alternative to accessing tables in a linked server and is a one-time, ad hoc method of
connecting and accessing remote data by using OLE DB. OpenRowSet with non-SQL provider is not supported
in Azure SQL Managed Instance.
Recommendation
The OPENROWSET function can be used to execute queries only against SQL Server instances (either managed, on-premises, or in virtual machines). Only the SQLNCLI, SQLNCLI11, and SQLOLEDB values are supported as the provider.
Therefore, the recommended action is to identify the dependent database(s) on remote non-SQL Servers
and consider moving them into the database being migrated.
More information: Bulk Insert and OPENROWSET differences in Azure SQL Managed Instance
PowerShell job
Title: PowerShell job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
A PowerShell job step runs a PowerShell script. This job step type is not supported in Azure SQL Managed
Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using PowerShell job step and evaluate if the
job step or the impacted object can be removed. Evaluate if Azure Automation can be used. Alternatively,
migrate to SQL Server on Azure Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance

Queue Reader job


Title: Queue Reader job step is not supported in Azure SQL Managed Instance.
Category: Warning
Description
A Queue Reader job step activates the replication Queue Reader Agent. The Replication Queue Reader Agent is an
executable that reads messages stored in a Microsoft SQL Server queue or a Microsoft Message Queue and
then applies those messages to the Publisher. Queue Reader Agent is used with snapshot and transactional
publications that allow queued updating. Queue Reader job step is not supported in Azure SQL Managed
Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs using Queue Reader job step and evaluate if the
job step or the impacted object can be removed. Alternatively, migrate to SQL Server on Azure Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance

RAISERROR
Title: Legacy style RAISERROR calls should be replaced with modern equivalents.
Category: Warning
Description
RAISERROR calls like the following example are termed legacy-style because they do not include commas and
parentheses: RAISERROR 50001 'this is a test'. This method of calling RAISERROR is no longer supported and
has been removed in Azure SQL Managed Instance.
Recommendation
Rewrite the statement using the current RAISERROR syntax, or evaluate if the modern approach of
BEGIN TRY { } END TRY BEGIN CATCH { THROW; } END CATCH is feasible.
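A minimal sketch of both options, using the legacy example above:

-- Current RAISERROR syntax (commas and parentheses):
RAISERROR ('this is a test', 16, 1);

-- Or the TRY...CATCH / THROW pattern:
BEGIN TRY
    RAISERROR ('this is a test', 16, 1);
END TRY
BEGIN CATCH
    THROW;
END CATCH;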

More information: Discontinued Database Engine Functionality in SQL Server

SQL Mail
Title: SQL Mail is no longer supported.
Category: Warning
Description
SQL Mail is no longer supported and has been removed in Azure SQL Managed Instance.
Recommendation
Use Database Mail.
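As an illustrative sketch only, assuming a Database Mail profile named 'MailProfile' already exists and the caller has permission to use it:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MailProfile',                -- placeholder profile
    @recipients   = 'dba@contoso.com',            -- placeholder recipient
    @subject      = 'Replacing SQL Mail with Database Mail',
    @body         = 'This message was sent through Database Mail.';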
More information: Discontinued Database Engine Functionality in SQL Server

SystemProcedures110
Title: Detected statements that reference removed system stored procedures that are not available
in Azure SQL Managed Instance.
Category: Warning
Description
The following unsupported system and extended stored procedures cannot be used in Azure SQL Managed Instance:
sp_dboption, sp_addserver, sp_dropalias, sp_activedirectory_obj, sp_activedirectory_scp, and
sp_activedirectory_start.

Recommendation
Remove references to unsupported system procedures that have been removed in Azure SQL Managed
Instance.
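A hedged helper query (illustrative only, run per database) that searches module definitions for the removed procedures:

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%sp_dboption%'
   OR m.definition LIKE '%sp_addserver%'
   OR m.definition LIKE '%sp_dropalias%'
   OR m.definition LIKE '%sp_activedirectory%';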
More information: Discontinued Database Engine Functionality in SQL Server

Transact-SQL job
Title: TSQL job step includes unsupported commands in Azure SQL Managed Instance.
Category: Warning
Description
A Transact-SQL (TSQL) job step runs T-SQL scripts at a scheduled time. This job step can include commands that
are not supported in Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all jobs that include unsupported commands in Azure
SQL Managed Instance and evaluate if the job step or the impacted object can be removed. Alternatively,
migrate to SQL Server on Azure Virtual Machine.
More information: SQL Server Agent differences in Azure SQL Managed Instance

Trace flags
Title: Trace flags not supported in Azure SQL Managed Instance were found.
Category: Warning
Description
Azure SQL Managed Instance supports only a limited number of global trace flags. Session trace flags aren't
supported.
Recommendation
Review impacted objects section in Azure Migrate to see all trace flags that are not supported in Azure SQL
Managed Instance and evaluate if they can be removed. Alternatively, migrate to SQL Server on Azure Virtual
Machine.
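To see which trace flags are currently enabled on the source instance, a quick check is:

-- -1 returns the trace flags that are enabled globally.
DBCC TRACESTATUS(-1);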
More information: Trace flags

Windows authentication
Title: Database users mapped with Windows authentication (integrated security) are not supported in Azure SQL Managed Instance.
Category: Warning
Description
Azure SQL Managed Instance supports two types of authentication:
SQL Authentication, which uses a username and password
Azure Active Directory Authentication, which uses identities managed by Azure Active Directory and is
supported for managed and integrated domains.
Database users mapped with Windows authentication (integrated security) are not supported in Azure SQL
Managed Instance.
Recommendation
Federate the local Active Directory with Azure Active Directory. The Windows identity can then be replaced with
the equivalent Azure Active Directory identities. Alternatively, migrate to SQL Server on Azure Virtual Machine.
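A minimal sketch, assuming an Azure AD admin is already configured for the instance; the account name is a placeholder:

-- Replace a Windows-authenticated database user with an Azure Active Directory-based user.
CREATE USER [user@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [user@contoso.com];   -- re-grant the roles the old user held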
More information: SQL Managed Instance security capabilities

XP_cmdshell
Title: xp_cmdshell is not supported in Azure SQL Managed Instance.
Category: Issue
Description
xp_cmdshell, which spawns a Windows command shell and passes in a string for execution, isn't supported in
Azure SQL Managed Instance.
Recommendation
Review impacted objects section in Azure Migrate to see all objects using xp_cmdshell and evaluate if the
reference to xp_cmdshell or the impacted object can be removed. Consider exploring Azure Automation that
delivers cloud-based automation and configuration service. Alternatively, migrate to SQL Server on Azure
Virtual Machine.
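An illustrative helper query to locate modules in the current database that call xp_cmdshell:

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%xp_cmdshell%';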
More information: Stored Procedure differences in Azure SQL Managed Instance

Next steps
To start migrating your SQL Server to Azure SQL Managed Instance, see the SQL Server to SQL Managed
Instance migration guide.
For a matrix of the Microsoft and third-party services and tools that are available to assist you with
various database and data migration scenarios as well as specialty tasks, see Service and tools for data
migration.
To learn more about Azure SQL Managed Instance, see:
Service Tiers in Azure SQL Managed Instance
Differences between SQL Server and Azure SQL Managed Instance
Azure Total Cost of Ownership Calculator
To learn more about the framework and adoption cycle for cloud migrations, see
Cloud Adoption Framework for Azure
Best practices for costing and sizing workloads migrated to Azure
To assess the application access layer, see Data Access Migration Toolkit (Preview)
For details on how to perform Data Access Layer A/B testing, see Database Experimentation Assistant.
Connectivity architecture for Azure SQL Managed
Instance
7/12/2022 • 12 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article explains communication in Azure SQL Managed Instance. It also describes connectivity architecture
and how the components direct traffic to a managed instance.
SQL Managed Instance is placed inside the Azure virtual network and the subnet that's dedicated to managed
instances. This deployment provides:
A secure private IP address.
The ability to connect an on-premises network to SQL Managed Instance.
The ability to connect SQL Managed Instance to a linked server or another on-premises data store.
The ability to connect SQL Managed Instance to Azure resources.

Communication overview
The following diagram shows entities that connect to SQL Managed Instance. It also shows the resources that
need to communicate with a managed instance. The communication process at the bottom of the diagram
represents customer applications and tools that connect to SQL Managed Instance as data sources.

SQL Managed Instance is a platform as a service (PaaS) offering. Azure uses automated agents (management,
deployment, and maintenance) to manage this service based on telemetry data streams. Because Azure is
responsible for management, customers can't access the SQL Managed Instance virtual cluster machines
through Remote Desktop Protocol (RDP).
Some operations started by end users or applications might require SQL Managed Instance to interact with the
platform. One case is the creation of a SQL Managed Instance database. This resource is exposed through the
Azure portal, PowerShell, Azure CLI, and the REST API.
SQL Managed Instance depends on Azure services such as Azure Storage for backups, Azure Event Hubs for
telemetry, Azure Active Directory (Azure AD) for authentication, Azure Key Vault for Transparent Data Encryption
(TDE), and a couple of Azure platform services that provide security and supportability features. SQL Managed
Instance makes connections to these services.
All communications are encrypted and signed using certificates. To check the trustworthiness of communicating
parties, SQL Managed Instance constantly verifies these certificates through certificate revocation lists. If the
certificates are revoked, SQL Managed Instance closes the connections to protect the data.

High-level connectivity architecture


At a high level, SQL Managed Instance is a set of service components. These components are hosted on a
dedicated set of isolated virtual machines that run inside the customer's virtual network subnet. These machines
form a virtual cluster.
A virtual cluster can host multiple managed instances. If needed, the cluster automatically expands or contracts
when the customer changes the number of provisioned instances in the subnet.
Customer applications can connect to SQL Managed Instance and can query and update databases inside the
virtual network, peered virtual network, or network connected by VPN or Azure ExpressRoute. This network
must use an endpoint and a private IP address.

Azure management and deployment services run outside the virtual network. SQL Managed Instance and Azure
services connect over the endpoints that have public IP addresses. When SQL Managed Instance creates an
outbound connection, on the receiving end Network Address Translation (NAT) makes the connection look like
it's coming from this public IP address.
Management traffic flows through the customer's virtual network, which means that elements of the virtual
network's infrastructure can disrupt management traffic, causing the instance to fail and become unavailable.

IMPORTANT
To improve customer experience and service availability, Azure applies a network intent policy on Azure virtual network
infrastructure elements. The policy can affect how SQL Managed Instance works. This platform mechanism transparently
communicates networking requirements to users. The policy's main goal is to prevent network misconfiguration and to
ensure normal SQL Managed Instance operations. When you delete a managed instance, the network intent policy is also
removed.
Virtual cluster connectivity architecture
Let's take a deeper dive into connectivity architecture for SQL Managed Instance. The following diagram shows
the conceptual layout of the virtual cluster.

Clients connect to SQL Managed Instance by using a host name that has the form
<mi_name>.<dns_zone>.database.windows.net . This host name resolves to a private IP address, although it's
registered in a public Domain Name System (DNS) zone and is publicly resolvable. The zone-id is automatically
generated when you create the cluster. If a newly created cluster hosts a secondary managed instance, it shares
its zone ID with the primary cluster. For more information, see Use auto failover groups to enable transparent
and coordinated failover of multiple databases.
This private IP address belongs to the internal load balancer for SQL Managed Instance. The load balancer
directs traffic to the SQL Managed Instance gateway. Because multiple managed instances can run inside the
same cluster, the gateway uses the SQL Managed Instance host name to redirect traffic to the correct SQL
engine service.
Management and deployment services connect to SQL Managed Instance by using a management endpoint
that maps to an external load balancer. Traffic is routed to the nodes only if it's received on a predefined set of
ports that only the management components of SQL Managed Instance use. A built-in firewall on the nodes is
set up to allow traffic only from Microsoft IP ranges. Certificates mutually authenticate all communication
between management components and the management plane.

Management endpoint
Azure manages SQL Managed Instance by using a management endpoint. This endpoint is inside an instance's
virtual cluster. The management endpoint is protected by a built-in firewall on the network level. On the
application level, it's protected by mutual certificate verification. To find the endpoint's IP address, see Determine
the management endpoint's IP address.
When connections start inside SQL Managed Instance (as with backups and audit logs), traffic appears to start
from the management endpoint's public IP address. You can limit access to public services from SQL Managed
Instance by setting firewall rules to allow only the IP address for SQL Managed Instance. For more information,
see Verify the SQL Managed Instance built-in firewall.
NOTE
Traffic that goes to Azure services inside the SQL Managed Instance region is optimized and, for that reason, not
NATed to the public IP address of the management endpoint. Therefore, if you need to use IP-based firewall rules,
most commonly for storage, the service needs to be in a different region from SQL Managed Instance.

Service-aided subnet configuration


To address customer security and manageability requirements, SQL Managed Instance is transitioning from
manual to service-aided subnet configuration.
With service-aided subnet configuration, the customer is in full control of data (TDS) traffic, while SQL Managed
Instance control plane takes responsibility to ensure uninterrupted flow of management traffic in order to fulfill
an SLA.
Service-aided subnet configuration builds on top of the virtual network subnet delegation feature to provide
automatic network configuration management and enable service endpoints.
Service endpoints can be used to configure virtual network firewall rules on storage accounts that keep
backups and audit logs. Even with service endpoints enabled, customers are encouraged to use Private Link,
which provides additional security beyond service endpoints.

IMPORTANT
Due to control plane configuration specificities, service-aided subnet configuration would not enable service endpoints in
national clouds.

Network requirements
Deploy SQL Managed Instance in a dedicated subnet inside the virtual network. The subnet must have these
characteristics:
Dedicated subnet: SQL Managed Instance's subnet can't contain any other cloud service that's associated
with it, but other managed instances are allowed and it can't be a gateway subnet. The subnet can't contain
any resource but the managed instance(s), and you can't later add other types of resources in the subnet.
Subnet delegation: The SQL Managed Instance subnet needs to be delegated to the
Microsoft.Sql/managedInstances resource provider.
Network security group (NSG): An NSG needs to be associated with the SQL Managed Instance subnet.
You can use an NSG to control access to the SQL Managed Instance data endpoint by filtering traffic on port
1433 and ports 11000-11999 when SQL Managed Instance is configured for redirect connections. The
service will automatically provision and keep current rules required to allow uninterrupted flow of
management traffic.
User defined route (UDR) table: A UDR table needs to be associated with the SQL Managed Instance
subnet. You can add entries to the route table to route traffic that has on-premises private IP ranges as a
destination through the virtual network gateway or virtual network appliance (NVA). Service will
automatically provision and keep current entries required to allow uninterrupted flow of management traffic.
Sufficient IP addresses: The SQL Managed Instance subnet must have at least 32 IP addresses. For more
information, see Determine the size of the subnet for SQL Managed Instance. You can deploy managed
instances in the existing network after you configure it to satisfy the networking requirements for SQL
Managed Instance. Otherwise, create a new network and subnet.
Allowed by Azure policies: If you use Azure Policy to deny the creation or modification of resources in the
scope that includes SQL Managed Instance subnet/virtual network, such policies should not prevent
Managed Instance from managing its internal resources. The following resources need to be excluded from
deny effects to enable normal operation:
Resources of type Microsoft.Network/serviceEndpointPolicies, when resource name begins with
_e41f87a2_
All resources of type Microsoft.Network/networkIntentPolicies
All resources of type Microsoft.Network/virtualNetworks/subnets/contextualServiceEndpointPolicies
Locks on virtual network: Locks on the dedicated subnet's virtual network, its parent resource group, or
subscription, may occasionally interfere with SQL Managed Instance's management and maintenance
operations. Take special care when you use such locks.

IMPORTANT
When you create a managed instance, a network intent policy is applied on the subnet to prevent noncompliant changes
to networking setup. This policy is a hidden resource located in the virtual network of the resource group. After the last
instance is removed from the subnet, the network intent policy is also removed. The rules below are for informational
purposes only; you should not deploy them by using an ARM template, PowerShell, or the CLI. If you want to use the latest
official template, you can always retrieve it from the portal. Replication traffic for auto-failover groups between two SQL
Managed Instances should be direct, and not through a hub network.

Mandatory inbound security rules with service-aided subnet configuration


These rules are necessary to ensure inbound management traffic flow. See paragraph above for more
information on connectivity architecture and management traffic.

Name | Port | Protocol | Source | Destination | Action
management | 9000, 9003, 1438, 1440, 1452 | TCP | SqlManagement | MI SUBNET | Allow
management | 9000, 9003 | TCP | CorpnetSaw | MI SUBNET | Allow
management | 9000, 9003 | TCP | CorpnetPublic | MI SUBNET | Allow
mi_subnet | Any | Any | MI SUBNET | MI SUBNET | Allow
health_probe | Any | Any | AzureLoadBalancer | MI SUBNET | Allow

Mandatory outbound security rules with service-aided subnet configuration


These rules are necessary to ensure outbound management traffic flow. See paragraph above for more
information on connectivity architecture and management traffic.

Name | Port | Protocol | Source | Destination | Action
management | 443, 12000 | TCP | MI SUBNET | AzureCloud | Allow
mi_subnet | Any | Any | MI SUBNET | MI SUBNET | Allow

Mandatory user defined routes with service-aided subnet configuration


These routes are necessary to ensure that management traffic is routed directly to a destination. See paragraph
above for more information on connectivity architecture and management traffic.
Name | Address prefix | Next hop
subnet-to-vnetlocal | MI SUBNET | Virtual network
mi-azurecloud-REGION-internet | AzureCloud.REGION | Internet
mi-azurecloud-REGION_PAIR-internet | AzureCloud.REGION_PAIR | Internet
mi-azuremonitor-internet | AzureMonitor | Internet
mi-corpnetpublic-internet | CorpNetPublic | Internet
mi-corpnetsaw-internet | CorpNetSaw | Internet
mi-eventhub-REGION-internet | EventHub.REGION | Internet
mi-eventhub-REGION_PAIR-internet | EventHub.REGION_PAIR | Internet
mi-sqlmanagement-internet | SqlManagement | Internet
mi-storage-internet | Storage | Internet
mi-storage-REGION-internet | Storage.REGION | Internet
mi-storage-REGION_PAIR-internet | Storage.REGION_PAIR | Internet
mi-azureactivedirectory-internet | AzureActiveDirectory | Internet

* MI SUBNET refers to the IP address range for the subnet in the form x.x.x.x/y. You can find this information in
the Azure portal, in subnet properties.
** If the destination address is for one of Azure's services, Azure routes the traffic directly to the service over
Azure's backbone network, rather than routing the traffic to the Internet. Traffic between Azure services does not
traverse the Internet, regardless of which Azure region the virtual network exists in, or which Azure region an
instance of the Azure service is deployed in. For more details, check the UDR documentation page.
In addition, you can add entries to the route table to route traffic that has on-premises private IP ranges as a
destination through the virtual network gateway or virtual network appliance (NVA).
If the virtual network includes a custom DNS, the custom DNS server must be able to resolve public DNS
records. Using additional features like Azure AD Authentication might require resolving additional FQDNs. For
more information, see Set up a custom DNS.
Networking constraints
TLS 1.2 is enforced on outbound connections: In January 2020, Microsoft enforced TLS 1.2 for intra-service
traffic in all Azure services. For Azure SQL Managed Instance, this resulted in TLS 1.2 being enforced on
outbound connections used for replication and linked server connections to SQL Server. If you are using
versions of SQL Server older than 2016 with SQL Managed Instance, ensure that the TLS 1.2-specific updates
have been applied.
The following virtual network features are currently not supported with SQL Managed Instance:
Microsoft peering: Enabling Microsoft peering on ExpressRoute circuits peered directly or transitively with
a virtual network where SQL Managed Instance resides affects traffic flow between SQL Managed Instance
components inside the virtual network and services it depends on, causing availability issues. SQL Managed
Instance deployments to virtual network with Microsoft peering already enabled are expected to fail.
Global virtual network peering: Virtual network peering connectivity across Azure regions doesn't work
for SQL Managed Instances placed in subnets created before 9/22/2020.
AzurePlatformDNS: Using the AzurePlatformDNS service tag to block platform DNS resolution would
render SQL Managed Instance unavailable. Although SQL Managed Instance supports customer-defined
DNS for DNS resolution inside the engine, there is a dependency on platform DNS for platform operations.
NAT gateway: Using Azure Virtual Network NAT to control outbound connectivity with a specific public IP
address would render SQL Managed Instance unavailable. The SQL Managed Instance service is currently
limited to use of basic load balancer that doesn't provide coexistence of inbound and outbound flows with
Virtual Network NAT.
IPv6 for Azure Virtual Network: Deploying SQL Managed Instance to dual stack IPv4/IPv6 virtual
networks is expected to fail. Associating network security group (NSG) or route table (UDR) containing IPv6
address prefixes to SQL Managed Instance subnet, or adding IPv6 address prefixes to NSG or UDR that is
already associated with Managed instance subnet, would render SQL Managed Instance unavailable. SQL
Managed Instance deployments to a subnet with NSG and UDR that already have IPv6 prefixes are expected
to fail.
Azure DNS private zones with a name reserved for Microsoft services: Following is the list of
reserved names: windows.net, database.windows.net, core.windows.net, blob.core.windows.net,
table.core.windows.net, management.core.windows.net, monitoring.core.windows.net,
queue.core.windows.net, graph.windows.net, login.microsoftonline.com, login.windows.net,
servicebus.windows.net, vault.azure.net. Deploying SQL Managed Instance to a virtual network with
associated Azure DNS private zone with a name reserved for Microsoft services would fail. Associating Azure
DNS private zone with reserved name with a virtual network containing Managed Instance, would render
SQL Managed Instance unavailable. Please follow Azure Private Endpoint DNS configuration for the proper
Private Link configuration.

Next steps
For an overview, see What is Azure SQL Managed Instance?
Learn how to set up a new Azure virtual network or an existing Azure virtual network where you can deploy
SQL Managed Instance.
Calculate the size of the subnet where you want to deploy SQL Managed Instance.
Learn how to create a managed instance:
From the Azure portal.
By using PowerShell.
By using an Azure Resource Manager template.
By using an Azure Resource Manager template (using JumpBox, with SSMS included).
Auto-failover groups overview & best practices
(Azure SQL Managed Instance)
7/12/2022 • 22 minutes to read

APPLIES TO: Azure SQL Managed Instance


The auto-failover groups feature allows you to manage the replication and failover of all user databases in a
managed instance to another Azure region. This article focuses on using the Auto-failover group feature with
Azure SQL Managed Instance and some best practices.
To get started, review Configure auto-failover group. For an end-to-end experience, see the Auto-failover group
tutorial.

NOTE
This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see Auto-failover
groups in SQL Database.

Overview
The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a
server or all user databases in a managed instance to another Azure region. It is a declarative abstraction on top
of the active geo-replication feature, designed to simplify deployment and management of geo-replicated
databases at scale.
Automatic failover
You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined
policy. The latter option allows you to automatically recover multiple related databases in a secondary region
after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or
SQL Managed Instance availability in the primary region. Typically, these are outages that cannot be
automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include
natural disasters, or incidents caused by a tenant or control ring being down due to an OS kernel memory leak
on compute nodes. For more information, see Azure SQL high availability.
Offload read-only workloads
To reduce traffic to your primary databases, you can also use the secondary databases in a failover group to
offload read-only workloads. Use the read-only listener to direct read-only traffic to a readable secondary
database.
Endpoint redirection
Auto-failover groups provide read-write and read-only listener end-points that remain unchanged during geo-
failovers. This means you do not have to change the connection string for your application after a geo-failover,
because connections are automatically routed to the current primary. Whether you use manual or automatic
failover activation, a geo-failover switches all secondary databases in the group to the primary role. After the
geo-failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region.
For geo-failover RPO and RTO, see Overview of Business Continuity.
Recovering an application
To achieve full business continuity, adding regional database redundancy is only part of the solution. Recovering
an application (service) end-to-end after a catastrophic failure requires recovery of all components that
constitute the service and any dependent services. Examples of these components include the client software
(for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that all
components are resilient to the same failures and become available within the recovery time objective (RTO) of
your application. Therefore, you need to identify all dependent services and understand the guarantees and
capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the
failover of the services on which it depends.

Terminology and capabilities


Failover group (FOG)
A failover group allows for all user databases within a managed instance to fail over as a unit to another
Azure region in case the primary managed instance becomes unavailable due to a primary region
outage. Since failover groups for SQL Managed Instance contain all user databases within the instance,
only one failover group can be configured on an instance.

IMPORTANT
The name of the failover group must be globally unique within the .database.windows.net domain.

Primary
The managed instance that hosts the primary databases in the failover group.
Secondary
The managed instance that hosts the secondary databases in the failover group. The secondary cannot be
in the same Azure region as the primary.
DNS zone
A unique ID that is automatically generated when a new SQL Managed Instance is created. A multi-
domain (SAN) certificate for this instance is provisioned to authenticate the client connections to any
instance in the same DNS zone. The two managed instances in the same failover group must share the
DNS zone.
Failover group read-write listener
A DNS CNAME record that points to the current primary. It's created automatically when the failover
group is created and allows the read-write workload to transparently reconnect to the primary when the
primary changes after failover. When the failover group is created on a SQL Managed Instance, the DNS
CNAME record for the listener URL is formed as <fog-name>.<zone_id>.database.windows.net .
Failover group read-only listener
A DNS CNAME record that points to the current secondary. It's created automatically when the failover
group is created and allows the read-only SQL workload to transparently connect to the secondary when
the secondary changes after failover. When the failover group is created on a SQL Managed Instance, the
DNS CNAME record for the listener URL is formed as
<fog-name>.secondary.<zone_id>.database.windows.net .

Automatic failover policy


By default, a failover group is configured with an automatic failover policy. The system triggers a geo-
failover after the failure is detected and the grace period has expired. The system must verify that the
outage cannot be mitigated by the built-in high availability infrastructure, for example due to the scale of
the impact. If you want to control the geo-failover workflow from the application or manually, you can
turn off automatic failover policy.

NOTE
Because verification of the scale of the outage and how quickly it can be mitigated involves human actions, the
grace period cannot be set below one hour. This limitation applies to all databases in the failover group regardless
of their data synchronization state.

Read-only failover policy


By default, the failover of the read-only listener is disabled. It ensures that the performance of the
primary is not impacted when the secondary is offline. However, it also means the read-only sessions will
not be able to connect until the secondary is recovered. If you cannot tolerate downtime for the read-only
sessions and can use the primary for both read-only and read-write traffic at the expense of the potential
performance degradation of the primary, you can enable failover for the read-only listener by configuring
the AllowReadOnlyFailoverToPrimary property. In that case, the read-only traffic will be automatically
redirected to the primary if the secondary is not available.

NOTE
The AllowReadOnlyFailoverToPrimary property only has effect if automatic failover policy is enabled and an
automatic geo-failover has been triggered. In that case, if the property is set to True, the new primary will serve
both read-write and read-only sessions.

Planned failover
Planned failover performs full data synchronization between primary and secondary databases before
the secondary switches to the primary role. This guarantees no data loss. Planned failover is used in the
following scenarios:
Perform disaster recovery (DR) drills in production when data loss is not acceptable
Relocate the databases to a different region
Return the databases to the primary region after the outage has been mitigated (failback)

NOTE
During planned failovers or disaster recovery drills, the primary databases and the target secondary geo-replica
databases should have matching service tiers. If a secondary database has lower memory than the primary
database, you may encounter out-of-memory issues, preventing full recovery after failover. If this happens, the
affected geo-secondary database may be put into a limited read-only mode called checkpoint-only mode. To
avoid this, upgrade the service tier of the secondary database to match the primary database during the planned
failover, or drill. Service tier upgrades can be size-of-data operations, and take a while to finish.

Unplanned failover
Unplanned or forced failover immediately switches the secondary to the primary role without waiting for
recent changes to propagate from the primary. This operation may result in data loss. Unplanned failover
is used as a recovery method during outages when the primary is not accessible. When the outage is
mitigated, the old primary will automatically reconnect and become a new secondary. A planned failover
may be executed to fail back, returning the replicas to their original primary and secondary roles.
Manual failover
You can initiate a geo-failover manually at any time regardless of the automatic failover configuration.
During an outage that impacts the primary, if automatic failover policy is not configured, a manual
failover is required to promote the secondary to the primary role. You can initiate a forced (unplanned) or
friendly (planned) failover. A friendly failover is only possible when the old primary is accessible, and can
be used to relocate the primary to the secondary region without data loss. When a failover is completed,
the DNS records are automatically updated to ensure connectivity to the new primary.
Grace period with data loss
Because the data is replicated to the secondary database using asynchronous replication, an automatic
geo-failover may result in data loss. You can customize the automatic failover policy to reflect your
application’s tolerance to data loss. By configuring GracePeriodWithDataLossHours, you can control how
long the system waits before initiating a forced failover, which may result in data loss.

Failover group architecture


The auto-failover group must be configured on the primary instance and will connect it to the secondary
instance in a different Azure region. All user databases in the instance will be replicated to the secondary
instance. System databases like master and msdb will not be replicated.
The following diagram illustrates a typical configuration of a geo-redundant cloud application using managed
instance and auto-failover group:

If your application uses SQL Managed Instance as the data tier, follow the general guidelines and best practices
outlined in this article when designing for business continuity.

Creating the geo-secondary instance


To ensure non-interrupted connectivity to the primary SQL Managed Instance after failover, both the primary
and secondary instances must be in the same DNS zone. It will guarantee that the same multi-domain (SAN)
certificate can be used to authenticate client connections to either of the two instances in the failover group.
When your application is ready for production deployment, create a secondary SQL Managed Instance in a
different region and make sure it shares the DNS zone with the primary SQL Managed Instance. You can do it by
specifying an optional parameter during creation. If you're using PowerShell or the REST API, the name of the
optional parameter is DNSZonePartner . The name of the corresponding optional field in the Azure portal is
Primary Managed Instance.
IMPORTANT
The first managed instance created in the subnet determines the DNS zone for all subsequent instances in the same subnet.
This means that two instances from the same subnet cannot belong to different DNS zones.

For more information about creating the secondary SQL Managed Instance in the same DNS zone as the
primary instance, see Create a secondary managed instance.

Use paired regions


Deploy both managed instances to paired regions for performance reasons. SQL Managed Instance failover
groups in paired regions have better performance compared to unpaired regions.

Enable and optimize geo-replication traffic flow between the instances
Connectivity between the virtual network subnets hosting primary and secondary instance must be established
and maintained for uninterrupted geo-replication traffic flow. There are multiple ways to provide connectivity
between the instances that you can choose among based on your network topology and policies:
Global virtual network peering
VPN gateways
Azure ExpressRoute

IMPORTANT
Global virtual network peering is the recommended way for establishing connectivity between two instances in a failover
group. It provides a low-latency, high-bandwidth private connection between the peered virtual networks using the
Microsoft backbone infrastructure. No public Internet, gateways, or additional encryption is required in the
communication between the peered virtual networks. Global virtual network peering is supported for instances hosted in
subnets created since 9/22/2020. To use global virtual network peering for SQL managed instances hosted in
subnets created before 9/22/2020, consider configuring a non-default maintenance window on the instance, as this
moves the instance into a new virtual cluster that supports global virtual network peering.

Regardless of the connectivity mechanism, there are requirements that must be fulfilled for geo-replication
traffic to flow:
The Network Security Group (NSG) rules on the subnet hosting the primary instance allow:
Inbound traffic on port 5022 and port range 11000-11999 from the subnet hosting the secondary instance.
Outbound traffic on port 5022 and port range 11000-11999 to the subnet hosting the secondary instance.
The Network Security Group (NSG) rules on the subnet hosting the secondary instance allow:
Inbound traffic on port 5022 and port range 11000-11999 from the subnet hosting the primary instance.
Outbound traffic on port 5022 and port range 11000-11999 to the subnet hosting the primary instance.
IP address ranges of the VNets hosting the primary and secondary instance must not overlap.
There's no indirect overlap of IP address ranges between the VNets hosting the primary and secondary instance
and any other VNets they are peered with via local virtual network peering or other means.
Additionally, if you're using a mechanism other than the recommended global virtual network peering to provide
connectivity between the instances, you need to ensure the following:
Any networking devices used, like firewalls or network virtual appliances (NVAs), don't block the traffic
described above.
Routing is properly configured, and asymmetric routing is avoided.
If you deploy auto-failover groups in a hub-and-spoke network topology cross-region, replication traffic
should go directly between the two managed instance subnets rather than being directed through the hub
networks. This helps you avoid connectivity and replication speed issues.

IMPORTANT
Alternative ways of providing connectivity between the instances that involve additional networking devices can make
troubleshooting connectivity or replication speed issues difficult, require active involvement of network administrators,
and significantly prolong resolution time.

Initial seeding
When establishing a failover group between managed instances, there's an initial seeding phase before data
replication starts. The initial seeding phase is the longest and most expensive part of the operation. Once initial
seeding completes, data is synchronized, and only subsequent data changes are replicated. The time it takes for
the initial seeding to complete depends on the size of data, the number of replicated databases, the workload
intensity on the primary databases, and the speed of the link between the virtual networks hosting the primary
and secondary instances, which mostly depends on the way connectivity is established. Under normal circumstances,
and when connectivity is established using recommended global virtual network peering, seeding speed is up to
360 GB an hour for SQL Managed Instance. Seeding is performed for a batch of user databases in parallel - not
necessarily for all databases at the same time. Multiple batches may be needed if there are many databases
hosted on the instance.
If the speed of the link between the two instances is slower than what is necessary, the time to seed is likely to
be noticeably impacted. You can use the stated seeding speed, number of databases, total size of data, and the
link speed to estimate how long the initial seeding phase will take before data replication starts. For example, for
a single 100 GB database, the initial seed phase would take about 1.2 hours if the link is capable of pushing 84
GB per hour, and if there are no other databases being seeded. If the link can only transfer 10 GB per hour, then
seeding a 100-GB database will take about 10 hours. If there are multiple databases to replicate, seeding will be
executed in parallel, and, when combined with a slow link speed, the initial seeding phase may take considerably
longer, especially if the parallel seeding of data from all databases exceeds the available link bandwidth.

Manage geo-failover to a geo-secondary instance


The failover group will manage geo-failover of all databases on the primary managed instance. When a group is
created, each database in the instance will be automatically geo-replicated to the geo-secondary instance. You
can't use failover groups to initiate a partial failover of a subset of databases.

IMPORTANT
If a database is dropped on the primary managed instance, it will also be dropped automatically on the geo-secondary
managed instance.

Use the read-write listener (primary MI)


For read-write workloads, use <fog-name>.<zone_id>.database.windows.net as the server name. Connections will be
automatically directed to the primary. This name doesn't change after failover. The geo-failover involves
updating the DNS record, so the new client connections are routed to the new primary only after the client DNS
cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application
will be able to reconnect to it using the same server-side SAN certificate. The existing client connections need to
be terminated and then recreated to be routed to the new primary. The read-write listener and read-only listener
cannot be reached via the public endpoint for managed instance.

Use the read-only listener (secondary MI)


If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-
secondary. To connect directly to the geo-secondary, use <fog-name>.secondary.<zone_id>.database.windows.net
as the server name.
In the Business Critical tier, SQL Managed Instance supports the use of read-only replicas to offload read-only
query workloads, using the ApplicationIntent=ReadOnly parameter in the connection string. When you have
configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the
primary location or in the geo-replicated location:
To connect to a read-only replica in the primary location, use ApplicationIntent=ReadOnly and
<fog-name>.<zone_id>.database.windows.net .
To connect to a read-only replica in the secondary location, use ApplicationIntent=ReadOnly and
<fog-name>.secondary.<zone_id>.database.windows.net .

The read-write listener and read-only listener can't be reached via public endpoint for managed instance.

Potential performance degradation after failover


A typical Azure application uses multiple Azure services and consists of multiple components. The automatic
geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other
Azure services in the primary region may not be affected by the outage and their components may still be
available in that region. Once the primary databases switch to the secondary region, the latency between the
dependent components may increase. Ensure the redundancy of all the application's components in the
secondary region and fail over application components together with the database so that application's
performance is not affected by higher cross-region latency.

Potential data loss after failover


If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary.
Failover is deferred for the period you specify using GracePeriodWithDataLossHours . If you configured the
automatic failover policy, be prepared for data loss. In general, during outages, Azure favors availability. Setting
GracePeriodWithDataLossHours to a larger number, such as 24 hours, or disabling automatic geo-failover lets you
reduce the likelihood of data loss at the expense of database availability.

DNS update
The DNS update of the read-write listener will happen immediately after the failover is initiated. This operation
won't result in data loss. However, the process of switching database roles can take up to 5 minutes under
normal conditions. Until it's completed, some databases in the new primary instance will still be read-only. If a
failover is initiated using PowerShell, the operation to switch the primary replica role is synchronous. If it's
initiated using the Azure portal, the UI will indicate completion status. If it's initiated using the REST API, use
standard Azure Resource Manager’s polling mechanism to monitor for completion.
IMPORTANT
Use manual planned failover to move the primary back to the original location once the outage that caused the geo-
failover is mitigated.

Enable scenarios dependent on objects from the system databases


System databases are not replicated to the secondary instance in a failover group. To enable scenarios that
depend on objects from the system databases, make sure to create the same objects on the secondary instance
and keep them synchronized with the primary instance.
For example, if you plan to use the same logins on the secondary instance, make sure to create them with the
identical SID.

-- Code to create login on the secondary instance
CREATE LOGIN foo WITH PASSWORD = '<enterStrongPasswordHere>', SID = <login_sid>;
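To obtain the SID, you can query the primary instance and reuse the returned value in the CREATE LOGIN
statement on the geo-secondary. A minimal sketch, assuming a login named foo already exists on the primary:

-- On the primary instance: retrieve the SID (a varbinary value) of the login
-- that should be recreated with the same SID on the geo-secondary instance.
SELECT name, sid FROM sys.server_principals WHERE name = 'foo';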

To learn more, see Replication of logins and agent jobs.

Synchronize instance properties and retention policies between instances


Instances in a failover group remain separate Azure resources, and no changes made to the configuration of the
primary instance will be automatically replicated to the secondary instance. Make sure to perform all relevant
changes both on primary and secondary instance. For example, if you change backup storage redundancy or
long-term backup retention policy on primary instance, make sure to change it on secondary instance as well.

Scaling instances
You can scale up or scale down the primary and secondary instance to a different compute size within the same
service tier. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the
primary. When scaling down, reverse the order: scale down the primary first, and then scale down the
secondary. When you scale an instance to a different service tier, this recommendation is enforced.
The sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets
overloaded and must be re-seeded during an upgrade or downgrade process.

Use failover groups and virtual network service endpoints


If you're using Virtual Network service endpoints and rules to restrict access to your SQL Managed Instance,
note that each virtual network service endpoint applies to only one Azure region. The endpoint does not enable
other regions to accept communication from the subnet. Therefore, only the client applications deployed in the
same region can connect to the primary database.

Prevent loss of critical data


Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism.
Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical
transactions from data loss, an application developer can call the sp_wait_for_database_copy_sync stored
procedure immediately after committing the transaction. Calling sp_wait_for_database_copy_sync blocks the
calling thread until the last committed transaction has been transmitted and hardened in the transaction log of
the secondary database. However, it doesn't wait for the transmitted transactions to be replayed (redone) on the
secondary. sp_wait_for_database_copy_sync is scoped to a specific geo-replication link. Any user with the
connection rights to the primary database can call this procedure.
NOTE
sp_wait_for_database_copy_sync prevents data loss after geo-failover for specific transactions, but does not guarantee
full synchronization for read access. The delay caused by a sp_wait_for_database_copy_sync procedure call can be
significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
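The following is a minimal sketch of the pattern described above, using a hypothetical dbo.Orders table. The call
to sp_wait_for_database_copy_sync returns only after the last committed transaction has been hardened on the
geo-secondary:

-- Commit the critical transaction on the primary database.
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Confirmed' WHERE OrderId = 12345;
COMMIT TRANSACTION;

-- Block until the last committed transaction has been transmitted and
-- hardened in the transaction log of the geo-secondary database.
EXEC sys.sp_wait_for_database_copy_sync @target_sync_interval_in_seconds = 0;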

Failover group status


Auto-failover group reports its status describing the current state of the data replication:
Seeding - Initial seeding is taking place after creation of the failover group, until all user databases are
initialized on the secondary instance. Failover process cannot be initiated while auto-failover group is in the
Seeding status, since user databases aren't copied to secondary instance yet.
Synchronizing - the usual status of auto-failover group. It means that data changes on the primary instance
are being replicated asynchronously to the secondary instance. This status doesn't guarantee that the data is
fully synchronized at every moment. There may be data changes from primary still to be replicated to the
secondary due to asynchronous nature of the replication process between instances in the auto-failover
group. Both automatic and manual failovers can be initiated while the auto-failover group is in the Synchronizing
status.
Failover in progress - this status indicates that either automatically or manually initiated failover process is in
progress. No changes to the failover group or additional failovers can be initiated while the auto-failover
group is in this status.

Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Managed Instance
Contributor has all the necessary permissions to manage failover groups.
For specific permission scopes, review how to configure auto-failover groups in Azure SQL Managed Instance.

Limitations
Be aware of the following limitations:
Failover groups can't be created between two instances in the same Azure region.
Failover groups can't be renamed. You will need to delete the group and re-create it with a different name.
A failover group contains exactly two managed instances. Adding additional instances to the failover group is
unsupported.
An instance can participate only in one failover group at any moment.
Database rename isn't supported for databases in failover group. You will need to temporarily delete failover
group to be able to rename a database.
System databases aren't replicated to the secondary instance in a failover group. Therefore, scenarios that
depend on objects from the system databases such as Server Logins and Agent jobs, require objects to be
manually created on the secondary instances and also manually kept in sync after any changes made on
primary instance. The only exception is the Service Master Key (SMK) for SQL Managed Instance, which is
replicated automatically to the secondary instance during creation of the failover group. Any subsequent changes of
the SMK on the primary instance, however, will not be replicated to the secondary instance. To learn more, see how to
Enable scenarios dependent on objects from the system databases.
Failover groups can't be created between instances if any of them are in an instance pool.
Programmatically manage failover groups
Auto-failover groups can also be managed programmatically using Azure PowerShell, Azure CLI, and REST API.
The following tables describe the set of commands available. Active geo-replication includes a set of Azure
Resource Manager APIs for management, including the Azure SQL Database REST API and Azure PowerShell
cmdlets. These APIs require the use of resource groups and support Azure role-based access control (Azure
RBAC). For more information on how to implement access roles, see Azure role-based access control (Azure
RBAC).

PowerShell
Azure CLI
REST API

CMDLET                                        DESCRIPTION

New-AzSqlDatabaseInstanceFailoverGroup        Creates a failover group and registers it on both primary and secondary instances

Set-AzSqlDatabaseInstanceFailoverGroup        Modifies configuration of a failover group

Get-AzSqlDatabaseInstanceFailoverGroup        Retrieves a failover group's configuration

Switch-AzSqlDatabaseInstanceFailoverGroup     Triggers failover of a failover group to the secondary instance

Remove-AzSqlDatabaseInstanceFailoverGroup     Removes a failover group

Next steps
For detailed tutorials, see
Add a SQL Managed Instance to a failover group
For a sample script, see:
Use PowerShell to create an auto-failover group on a SQL Managed Instance
For a business continuity overview and scenarios, see Business continuity overview
To learn about automated backups, see SQL Database automated backups.
To learn about using automated backups for recovery, see Restore a database from the service-initiated
backups.
T-SQL differences between SQL Server & Azure
SQL Managed Instance
7/12/2022 • 24 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article summarizes and explains the differences in syntax and behavior between Azure SQL Managed
Instance and SQL Server.
SQL Managed Instance provides high compatibility with the SQL Server database engine, and most features are
supported in a SQL Managed Instance.

There are some PaaS limitations that are introduced in SQL Managed Instance and some behavior changes
compared to SQL Server. The differences are divided into the following categories:
Availability includes the differences in Always On Availability Groups and backups.
Security includes the differences in auditing, certificates, credentials, cryptographic providers, logins and
users, and the service key and service master key.
Configuration includes the differences in buffer pool extension, collation, compatibility levels, database
mirroring, database options, SQL Server Agent, and table options.
Functionalities include BULK INSERT/OPENROWSET, CLR, DBCC, distributed transactions, extended events,
external libraries, filestream and FileTable, full-text Semantic Search, linked servers, PolyBase, Replication,
RESTORE, Service Broker, stored procedures, functions, and triggers.
Environment settings such as VNets and subnet configurations.
Most of these features are architectural constraints and represent service features.
Temporary known issues that are discovered in SQL Managed Instance and will be resolved in the future are
described in What's new?.

Availability
Always On Availability Groups
High availability is built into SQL Managed Instance and can't be controlled by users. The following statements
aren't supported:
CREATE ENDPOINT … FOR DATABASE_MIRRORING
CREATE AVAILABILITY GROUP
ALTER AVAILABILITY GROUP
DROP AVAILABILITY GROUP
The SET HADR clause of the ALTER DATABASE statement
Backup
Azure SQL Managed Instance has automatic backups, so users can create full database COPY_ONLY backups.
Differential, log, and file snapshot backups aren't supported.
With a SQL Managed Instance, you can back up an instance database only to an Azure Blob storage account:
Only BACKUP TO URL is supported.
FILE , TAPE , and backup devices aren't supported.
Most of the general WITH options are supported.
COPY_ONLY is mandatory.
FILE_SNAPSHOT isn't supported.
Tape options: REWIND , NOREWIND , UNLOAD , and NOUNLOAD aren't supported.
Log-specific options: NORECOVERY , STANDBY , and NO_TRUNCATE aren't supported.
Limitations:
With a SQL Managed Instance, you can back up an instance database to a backup with up to 32 stripes,
which is enough for databases up to 4 TB if backup compression is used.
You can't execute BACKUP DATABASE ... WITH COPY_ONLY on a database that's encrypted with service-
managed Transparent Data Encryption (TDE). Service-managed TDE forces backups to be encrypted with
an internal TDE key. The key can't be exported, so you can't restore the backup. Use automatic backups
and point-in-time restore, or use customer-managed (BYOK) TDE instead. You also can disable encryption
on the database.
Native backups taken on a SQL Managed Instance cannot be restored to a SQL Server instance. This is because
SQL Managed Instance has a higher internal database version than any version of SQL Server.
To back up or restore a database to or from Azure storage, it is necessary to create a shared access
signature (SAS), a URI that grants restricted access rights to Azure Storage resources. Learn more about shared
access signatures. Using access keys for these scenarios is not supported. A sketch of a COPY_ONLY backup to a
URL using a SAS credential follows this list.
The maximum backup stripe size by using the BACKUP command in SQL Managed Instance is 195 GB,
which is the maximum blob size. Increase the number of stripes in the backup command to reduce
individual stripe size and stay within this limit.
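The following is a minimal sketch of a COPY_ONLY backup to Azure Blob storage, assuming a hypothetical storage
account, container, and SAS token. The credential name must match the container URL, and the SAS token is
specified without the leading question mark:

-- Create a credential that holds the shared access signature for the container.
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/mybackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS_token>';

-- COPY_ONLY is mandatory. Add more URLs (stripes) to keep each blob under the
-- 195-GB stripe limit for larger databases.
BACKUP DATABASE [MyDatabase]
TO URL = 'https://myaccount.blob.core.windows.net/mybackups/MyDatabase_1.bak',
   URL = 'https://myaccount.blob.core.windows.net/mybackups/MyDatabase_2.bak'
WITH COPY_ONLY;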

TIP
To work around this limitation, when you back up a database from either SQL Server in an on-premises
environment or in a virtual machine, you can:
Back up to DISK instead of backing up to URL .
Upload the backup files to Blob storage.
Restore into SQL Managed Instance.
The Restore command in SQL Managed Instance supports bigger blob sizes in the backup files because a
different blob type is used for storage of the uploaded backup files.
For information about backups using T-SQL, see BACKUP.

Security
Auditing
The key differences between auditing in Microsoft Azure SQL and in SQL Server are:
With SQL Managed Instance, auditing works at the server level. The .xel log files are stored in Azure Blob
storage.
With Azure SQL Database, auditing works at the database level. The .xel log files are stored in Azure Blob
storage.
With SQL Server, on-premises or in virtual machines, auditing works at the server level. Events are stored on
file system or Windows event logs.
XEvent auditing in SQL Managed Instance supports Azure Blob storage targets. File and Windows logs aren't
supported.
The key differences in the CREATE AUDIT syntax for auditing to Azure Blob storage are:
A new syntax TO URL is provided that you can use to specify the URL of the Azure Blob storage container
where the .xel files are placed (see the sketch after this list).
The syntax TO FILE isn't supported because SQL Managed Instance can't access Windows file shares.
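A minimal sketch of the TO URL syntax, assuming a hypothetical storage container for which a credential with a
shared access signature has already been created:

-- Create a server audit that writes .xel files to an Azure Blob storage container.
CREATE SERVER AUDIT [MyInstanceAudit]
TO URL (PATH = 'https://myaccount.blob.core.windows.net/sqlauditlogs/', RETENTION_DAYS = 30);
GO

-- Enable the audit.
ALTER SERVER AUDIT [MyInstanceAudit] WITH (STATE = ON);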
For more information, see:
CREATE SERVER AUDIT
ALTER SERVER AUDIT
Auditing
Certificates
SQL Managed Instance can't access file shares and Windows folders, so the following constraints apply:
The CREATE FROM / BACKUP TO file isn't supported for certificates.
The CREATE / BACKUP certificate from FILE / ASSEMBLY isn't supported. Private key files can't be used.

See CREATE CERTIFICATE and BACKUP CERTIFICATE.


Workaround: Instead of creating a backup of the certificate and restoring the backup, get the certificate binary
content and private key, store them in a .sql file, and create the certificate from binary:

CREATE CERTIFICATE <certificate_name>
FROM BINARY = asn_encoded_certificate
WITH PRIVATE KEY (<private_key_options>);

Credential
Only Azure Key Vault and SHARED ACCESS SIGNATURE identities are supported. Windows users aren't supported.
See CREATE CREDENTIAL and ALTER CREDENTIAL.
Cryptographic providers
SQL Managed Instance can't access files, so cryptographic providers can't be created:
CREATE CRYPTOGRAPHIC PROVIDER isn't supported. See CREATE CRYPTOGRAPHIC PROVIDER.
ALTER CRYPTOGRAPHIC PROVIDER isn't supported. See ALTER CRYPTOGRAPHIC PROVIDER.
Logins and users
SQL logins created by using FROM CERTIFICATE , FROM ASYMMETRIC KEY , and FROM SID are supported. See
CREATE LOGIN.
Azure Active Directory (Azure AD) server principals (logins) created with the CREATE LOGIN syntax or the
CREATE USER FROM LOGIN [Azure AD Login] syntax are supported. These logins are created at the
server level.
SQL Managed Instance supports Azure AD database principals with the syntax
CREATE USER [AADUser/AAD group] FROM EXTERNAL PROVIDER . This feature is also known as Azure AD
contained database users.
Windows logins created with the CREATE LOGIN ... FROM WINDOWS syntax aren't supported. Use Azure
Active Directory logins and users.
The Azure AD admin for the instance has unrestricted admin privileges.
Non-administrator Azure AD database-level users can be created by using the
CREATE USER ... FROM EXTERNAL PROVIDER syntax. See CREATE USER ... FROM EXTERNAL PROVIDER.

Azure AD server principals (logins) support SQL features within one SQL Managed Instance only.
Features that require cross-instance interaction, no matter whether they're within the same Azure AD
tenant or different tenants, aren't supported for Azure AD users. Examples of such features are:
SQL transactional replication.
Link server.
Setting an Azure AD login mapped to an Azure AD group as the database owner isn't supported. A
member of the Azure AD group can be a database owner, even if the login hasn't been created in the
database.
Impersonation of Azure AD server-level principals by using other Azure AD principals is supported, such
as the EXECUTE AS clause. EXECUTE AS limitations are:
EXECUTE AS USER isn't supported for Azure AD users when the name differs from the login name.
An example is when the user is created through the syntax
CREATE USER [myAadUser] FROM LOGIN [john@contoso.com] and impersonation is attempted through
EXEC AS USER = myAadUser . When you create a USER from an Azure AD server principal (login),
specify the user_name as the same login_name from LOGIN .
Only the SQL Server-level principals (logins) that are part of the sysadmin role can execute the
following operations that target Azure AD principals:
EXECUTE AS USER
EXECUTE AS LOGIN
To impersonate a user with EXECUTE AS statement the user needs to be mapped directly to Azure
AD server principal (login). Users that are members of Azure AD groups mapped into Azure AD
server principals cannot effectively be impersonated with EXECUTE AS statement, even though the
caller has the impersonate permissions on the specified user name.
Database export/import using bacpac files are supported for Azure AD users in SQL Managed Instance
using either SSMS V18.4 or later, or SQLPackage.exe.
The following configurations are supported using database bacpac file:
Export/import a database between different managed instances within the same Azure AD
domain.
Export a database from SQL Managed Instance and import to SQL Database within the same
Azure AD domain.
Export a database from SQL Database and import to SQL Managed Instance within the same
Azure AD domain.
Export a database from SQL Managed Instance and import to SQL Server (version 2012 or
later).
In this configuration, all Azure AD users are created as SQL Server database principals
(users) without logins. The type of the users is listed as SQL and is visible as SQL_USER in
sys.database_principals. Their permissions and roles remain in the SQL Server
database metadata and can be used for impersonation. However, they cannot be used to
access and sign in to the SQL Server using their credentials.
Only the server-level principal login, which is created by the SQL Managed Instance provisioning process,
members of the server roles, such as securityadmin or sysadmin , or other logins with ALTER ANY LOGIN
permission at the server level can create Azure AD server principals (logins) in the master database for
SQL Managed Instance.
If the login is a SQL principal, only logins that are part of the sysadmin role can use the create command
to create logins for an Azure AD account.
The Azure AD login must be a member of the Azure AD directory that's used for Azure
SQL Managed Instance.
Azure AD server principals (logins) are visible in Object Explorer starting with SQL Server Management
Studio 18.0 preview 5.
A server principal with sysadmin access level is automatically created for the Azure AD admin account
once it's enabled on an instance.
During authentication, the following sequence is applied to resolve the authenticating principal:
1. If the Azure AD account exists as directly mapped to the Azure AD server principal (login), which is
present in sys.server_principals as type "E," grant access and apply permissions of the Azure AD
server principal (login).
2. If the Azure AD account is a member of an Azure AD group that's mapped to the Azure AD server
principal (login), which is present in sys.server_principals as type "X," grant access and apply
permissions of the Azure AD group login.
3. If the Azure AD account exists as directly mapped to an Azure AD user in a database, which is present
in sys.database_principals as type "E," grant access and apply permissions of the Azure AD database
user.
4. If the Azure AD account is a member of an Azure AD group that's mapped to an Azure AD user in a
database, which is present in sys.database_principals as type "X," grant access and apply permissions
of the Azure AD group user.
Service key and service master key
Master key backup isn't supported (managed by SQL Database service).
Master key restore isn't supported (managed by SQL Database service).
Service master key backup isn't supported (managed by SQL Database service).
Service master key restore isn't supported (managed by SQL Database service).

Configuration
Buffer pool extension
Buffer pool extension isn't supported.
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION isn't supported. See ALTER SERVER CONFIGURATION.
Collation
The default instance collation is SQL_Latin1_General_CP1_CI_AS and can be specified as a creation parameter. See
Collations.
Compatibility levels
Supported compatibility levels are 100, 110, 120, 130, 140 and 150.
Compatibility levels below 100 aren't supported.
The default compatibility level for new databases is 140. For restored databases, the compatibility level
remains unchanged if it was 100 and above.
See ALTER DATABASE Compatibility Level.
Database mirroring
Database mirroring isn't supported.
ALTER DATABASE SET PARTNER and SET WITNESS options aren't supported.
CREATE ENDPOINT … FOR DATABASE_MIRRORING isn't supported.

For more information, see ALTER DATABASE SET PARTNER and SET WITNESS and CREATE ENDPOINT … FOR
DATABASE_MIRRORING.
Database options
Multiple log files aren't supported.
In-memory objects aren't supported in the General Purpose service tier.
There's a limit of 280 files per General Purpose instance, which implies a maximum of 280 files per database.
Both data and log files in the General Purpose tier are counted toward this limit. The Business Critical tier
supports 32,767 files per database.
The database can't contain filegroups that contain filestream data. Restore fails if .bak contains FILESTREAM
data.
Every file is placed in Azure Blob storage. IO and throughput per file depend on the size of each individual
file.
CREATE DATABASE statement
The following limitations apply to CREATE DATABASE :
Files and filegroups can't be defined.
The CONTAINMENT option isn't supported.
WITH options aren't supported.

TIP
As a workaround, use ALTER DATABASE after CREATE DATABASE to set database options, to add files, or to set
containment (see the sketch after this list).

The FOR ATTACH option isn't supported.


The AS SNAPSHOT OF option isn't supported.
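A minimal sketch of the workaround mentioned in the tip above, using a hypothetical database name: create the
database without WITH options, then configure it with ALTER DATABASE. File placement is chosen by the service,
so no FILENAME is specified:

CREATE DATABASE MyDatabase;
GO

-- Add a data file; SQL Managed Instance places the file automatically.
ALTER DATABASE MyDatabase ADD FILE (NAME = 'MyDatabase_data_2');
GO

-- Set a database option after creation.
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON;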

For more information, see CREATE DATABASE.


ALTER DATABASE statement
Some file properties can't be set or changed:
A file path can't be specified in the ALTER DATABASE ADD FILE (FILENAME='path') T-SQL statement. Remove
FILENAME from the script because SQL Managed Instance automatically places the files.
A file name can't be changed by using the ALTER DATABASE statement.

The following options are set by default and can't be changed:


MULTI_USER
ENABLE_BROKER
AUTO_CLOSE OFF

The following options can't be modified:


AUTO_CLOSE
AUTOMATIC_TUNING(CREATE_INDEX=ON|OFF)
AUTOMATIC_TUNING(DROP_INDEX=ON|OFF)
DISABLE_BROKER
EMERGENCY
ENABLE_BROKER
FILESTREAM
HADR
NEW_BROKER
OFFLINE
PAGE_VERIFY
PARTNER
READ_ONLY
RECOVERY BULK_LOGGED
RECOVERY_SIMPLE
REMOTE_DATA_ARCHIVE
RESTRICTED_USER
SINGLE_USER
WITNESS

Some ALTER DATABASE statements (for example, SET CONTAINMENT) might transiently fail, for example during
the automated database backup or right after a database is created. In this case, the ALTER DATABASE statement
should be retried. For more information on related error messages, see the Remarks section.
For more information, see ALTER DATABASE.
SQL Server Agent
Enabling and disabling SQL Server Agent is currently not supported in SQL Managed Instance. SQL Agent is
always running.
Job schedule trigger based on an idle CPU is not supported.
SQL Server Agent settings are read only. The procedure sp_set_agent_properties isn't supported in SQL
Managed Instance.
Jobs
T-SQL job steps are supported.
The following replication jobs are supported:
Transaction-log reader
Snapshot
Distributor
SSIS job steps are supported.
Other types of job steps aren't currently supported:
The merge replication job step isn't supported.
Queue Reader isn't supported.
Command shell isn't yet supported.
SQL Managed Instance can't access external resources, for example, network shares via robocopy.
SQL Server Analysis Services isn't supported.
Notifications are partially supported.
Email notification is supported, although it requires that you configure a Database Mail profile. SQL Server
Agent can use only one Database Mail profile, and it must be called AzureManagedInstance_dbmail_profile (see the
sketch after this list).
Pager isn't supported.
NetSend isn't supported.
Alerts aren't yet supported.
Proxies aren't supported.
EventLog isn't supported.
User must be directly mapped to Azure AD server principal (login) to create, modify, or execute SQL Agent
jobs. Users that are not directly mapped, for example, users that belong to an Azure AD group that has the
rights to create, modify or execute SQL Agent jobs, will not effectively be able to perform those actions. This
is due to SQL Managed Instance impersonation and EXECUTE AS limitations.
The Multi Server Administration feature for master/target (MSX/TSX) jobs are not supported.
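A minimal sketch of creating that single Database Mail profile, assuming a hypothetical SMTP server and account;
the profile name must be exactly AzureManagedInstance_dbmail_profile:

-- Database Mail must be enabled on the instance.
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

-- Create a Database Mail account that points to an SMTP server.
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'AgentNotifications',
    @email_address = 'alerts@contoso.com',
    @mailserver_name = 'smtp.contoso.com',
    @port = 587,
    @enable_ssl = 1,
    @username = 'alerts@contoso.com',
    @password = '<password>';

-- Create the profile SQL Server Agent will use and bind the account to it.
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'AzureManagedInstance_dbmail_profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = 'AzureManagedInstance_dbmail_profile',
    @account_name = 'AgentNotifications',
    @sequence_number = 1;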
For information about SQL Server Agent, see SQL Server Agent.
Tables
The following table types aren't supported:
FILESTREAM
FILETABLE
EXTERNAL TABLE (except Polybase, in preview)
MEMORY_OPTIMIZED (not supported only in General Purpose tier)
For information about how to create and alter tables, see CREATE TABLE and ALTER TABLE.

Functionalities
Bulk insert / OPENROWSET
SQL Managed Instance can't access file shares and Windows folders, so the files must be imported from Azure
Blob storage:
DATA_SOURCE is required in the BULK INSERT command when you import files from Azure Blob storage (see the
sketch after this list). See BULK INSERT.
DATA_SOURCE is required in the OPENROWSET function when you read the content of a file from Azure Blob
storage. See OPENROWSET.
OPENROWSET can be used to read data from Azure SQL Database, Azure SQL Managed Instance, or SQL Server
instances. Other sources such as Oracle databases or Excel files are not supported.
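A minimal sketch of importing a file from Azure Blob storage, assuming hypothetical container, table, and file
names, and a database scoped credential for the container that already exists:

-- External data source that points to the Blob storage container.
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://myaccount.blob.core.windows.net/data',
      CREDENTIAL = MyAzureBlobStorageCredential);

-- DATA_SOURCE is required because the instance can't read from file shares.
BULK INSERT dbo.Product
FROM 'product.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);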
CLR
A SQL Managed Instance can't access file shares and Windows folders, so the following constraints apply:
Only CREATE ASSEMBLY FROM BINARY is supported. See CREATE ASSEMBLY FROM BINARY.
CREATE ASSEMBLY FROM FILE isn't supported. See CREATE ASSEMBLY FROM FILE.
ALTER ASSEMBLY can't reference files. See ALTER ASSEMBLY.

Database Mail (db_mail)


sp_send_dbmail cannot send attachments using @file_attachments parameter. Local file system and external
shares or Azure Blob Storage are not accessible from this procedure.
See the known issues related to @query parameter and authentication.
DBCC
Undocumented DBCC statements that are enabled in SQL Server aren't supported in SQL Managed Instance.
Only a limited number of Global Trace flags are supported. Session-level Trace flags aren't supported. See
Trace flags.
DBCC TRACEOFF and DBCC TRACEON work with the limited number of global trace-flags.
DBCC CHECKDB with options REPAIR_ALLOW_DATA_LOSS, REPAIR_FAST, and REPAIR_REBUILD cannot be
used because database cannot be set in SINGLE_USER mode - see ALTER DATABASE differences. Potential
database corruption is handled by the Azure support team. Contact Azure support if there is any indication of
database corruption.
Distributed transactions
Partial support for distributed transactions is currently in public preview. Distributed transactions are supported
under following conditions (all of them must be met):
all transaction participants are Azure SQL Managed Instances that are part of the Server trust group.
transactions are initiated either from .NET (TransactionScope class) or Transact-SQL.
Azure SQL Managed Instance currently does not support other scenarios that are regularly supported by
MSDTC on-premises or in Azure Virtual Machines.
Extended Events
Some Windows-specific targets for Extended Events (XEvents) aren't supported:
The etw_classic_sync target isn't supported. Store .xel files in Azure Blob storage. See etw_classic_sync
target.
The event_file target isn't supported. Store .xel files in Azure Blob storage. See event_file target.
External libraries
In-database R and Python external libraries are supported in limited public preview. See Machine Learning
Services in Azure SQL Managed Instance (preview).
Filestream and FileTable
Filestream data isn't supported.
The database can't contain filegroups with FILESTREAM data.
FILETABLE isn't supported.
Tables can't have FILESTREAM types.
The following functions aren't supported:
GetPathLocator()
GET_FILESTREAM_TRANSACTION_CONTEXT()
PathName()
GetFileNamespacePath()
FileTableRootPath()

For more information, see FILESTREAM and FileTables.


Full-text Semantic Search
Semantic Search isn't supported.
Linked servers
Linked servers in SQL Managed Instance support a limited number of targets:
Supported targets are SQL Managed Instance, SQL Database, Azure Synapse SQL serverless and dedicated
pools, and SQL Server instances.
Distributed writable transactions are possible only among SQL Managed Instances. For more information,
see Distributed Transactions. However, MS DTC is not supported.
Targets that aren't supported are files, Analysis Services, and other RDBMS. Try to use native CSV import
from Azure Blob Storage using BULK INSERT or OPENROWSET as an alternative for file import, or load files
using a serverless SQL pool in Azure Synapse Analytics.
Operations:
Cross-instance write transactions are supported only for SQL Managed Instances.
sp_dropserver is supported for dropping a linked server. See sp_dropserver.
The OPENROWSET function can be used to execute queries only on SQL Server instances. They can be either
managed, on-premises, or in virtual machines. See OPENROWSET.
The OPENDATASOURCE function can be used to execute queries only on SQL Server instances. They can be
either managed, on-premises, or in virtual machines. Only the SQLNCLI , SQLNCLI11 , and SQLOLEDB values are
supported as a provider. An example is
SELECT * FROM OPENDATASOURCE('SQLNCLI', '...').AdventureWorks2012.HumanResources.Employee . See
OPENDATASOURCE.
Linked servers cannot be used to read files (Excel, CSV) from network shares. Try to use BULK INSERT or
OPENROWSET to read CSV files from Azure Blob Storage, or a linked server that references a serverless
SQL pool in Synapse Analytics. Track this request on the SQL Managed Instance feedback item.
Linked servers on Azure SQL Managed Instance support SQL authentication and Azure AD authentication.
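A minimal sketch of creating a linked server from a managed instance to another supported SQL target using SQL
authentication, with hypothetical server and login names:

-- Define the linked server; the data source is the DNS name of the target.
EXEC sp_addlinkedserver
    @server = N'RemoteSqlMi',
    @srvproduct = N'',
    @provider = N'SQLNCLI11',
    @datasrc = N'remote-instance.abcd1234.database.windows.net';

-- Map local connections to a SQL authentication login on the target.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'RemoteSqlMi',
    @useself = N'FALSE',
    @locallogin = NULL,
    @rmtuser = N'remoteLogin',
    @rmtpassword = N'<password>';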
PolyBase
Work on enabling Polybase support in SQL Managed Instance is in progress. In the meantime, as a workaround
you can use linked servers to a serverless SQL pool in Synapse Analytics or SQL Server to query data from files
stored in Azure Data Lake or Azure Storage.
For general information about PolyBase, see PolyBase.
Replication
Snapshot and Bi-directional replication types are supported. Merge replication, Peer-to-peer replication, and
updatable subscriptions are not supported.
Transactional Replication is available for public preview on SQL Managed Instance with some constraints:
All types of replication participants (Publisher, Distributor, Pull Subscriber, and Push Subscriber) can be
placed on SQL Managed Instance, but the publisher and the distributor must be either both in the
cloud or both on-premises.
SQL Managed Instance can communicate with the recent versions of SQL Server. See the supported
versions matrix for more information.
Transactional Replication has some additional networking requirements.
For more information about configuring transactional replication, see the following tutorials:
Replication between a SQL MI publisher and SQL MI subscriber
Replication between an SQL MI publisher, SQL MI distributor, and SQL Server subscriber
RESTORE statement
Supported syntax:
RESTORE DATABASE
RESTORE FILELISTONLY
RESTORE HEADERONLY
RESTORE LABELONLY
RESTORE VERIFYONLY
Unsupported syntax:
RESTORE LOG
RESTORE REWINDONLY
Source:
FROM URL (Azure Blob storage) is the only supported option.
FROM DISK / TAPE /backup device isn't supported.
Backup sets aren't supported.
WITH options aren't supported. Restore attempts including WITH like DIFFERENTIAL , STATS , REPLACE , etc.,
will fail.
ASYNC RESTORE : Restore continues even if the client connection breaks. If your connection is dropped, you can
check the sys.dm_operation_status view for the status of a restore operation, and for a CREATE and DROP
database. See sys.dm_operation_status.
The following database options are set or overridden and can't be changed later:
NEW_BROKER if the broker isn't enabled in the .bak file.
ENABLE_BROKER if the broker isn't enabled in the .bak file.
AUTO_CLOSE=OFF if a database in the .bak file has AUTO_CLOSE=ON .
RECOVERY FULL if a database in the .bak file has SIMPLE or BULK_LOGGED recovery mode.
A memory-optimized filegroup is added and called XTP if it wasn't in the source .bak file.
Any existing memory-optimized filegroup is renamed to XTP.
SINGLE_USER and RESTRICTED_USER options are converted to MULTI_USER .

Limitations:
Backups of the corrupted databases might be restored depending on the type of the corruption, but
automated backups will not be taken until the corruption is fixed. Make sure that you run DBCC CHECKDB on
the source SQL Managed Instance and use backup WITH CHECKSUM in order to prevent this issue.
A .BAK file of a database that contains any limitation described in this document (for example,
FILESTREAM or FILETABLE objects) can't be restored on SQL Managed Instance.
.BAK files that contain multiple backup sets can't be restored.
.BAK files that contain multiple log files can't be restored.
Backups that contain databases bigger than 8 TB, active in-memory OLTP objects, or number of files that
would exceed 280 files per instance can't be restored on a General Purpose instance.
Backups that contain databases bigger than 4 TB or in-memory OLTP objects with the total size larger than
the size described in resource limits cannot be restored on Business Critical instance. For information about
restore statements, see RESTORE statements.

IMPORTANT
The same limitations apply to built-in point-in-time restore operation. As an example, General Purpose database greater
than 4 TB cannot be restored on Business Critical instance. Business Critical database with In-memory OLTP files or more
than 280 files cannot be restored on General Purpose instance.

Service broker
Cross-instance service broker message exchange is supported only between Azure SQL Managed Instances:
CREATE ROUTE : You can't use CREATE ROUTE with ADDRESS other than LOCAL or the DNS name of another SQL
Managed Instance. The port is always 4022 (see the sketch after this list).
ALTER ROUTE : You can't use ALTER ROUTE with ADDRESS other than LOCAL or the DNS name of another SQL
Managed Instance. The port is always 4022.
Transport security is supported, dialog security is not:
CREATE REMOTE SERVICE BINDING is not supported.
Service broker is enabled by default and cannot be disabled. The following ALTER DATABASE options are not
supported:
ENABLE_BROKER
DISABLE_BROKER
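A minimal sketch of the CREATE ROUTE usage described above, with hypothetical route, service, and instance
names; ADDRESS must be the DNS name of the other managed instance and the port is always 4022:

CREATE ROUTE MyManagedInstanceRoute
WITH SERVICE_NAME = 'MyTargetService',
     ADDRESS = 'TCP://target-instance.abcd1234.database.windows.net:4022';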

Stored procedures, functions, and triggers


NATIVE_COMPILATION isn't supported in the General Purpose tier.
The following sp_configure options aren't supported:
allow polybase export
allow updates
filestream_access_level
remote access
remote data archive
remote proc trans
scan for startup procs
The following sp_configure options are ignored and have no effect:
Ole Automation Procedures
sp_execute_external_scripts isn't supported. See sp_execute_external_scripts.
xp_cmdshell isn't supported. See xp_cmdshell.
Extended stored procedures aren't supported, and this includes sp_addextendedproc and
sp_dropextendedproc . This functionality won't be supported because it's on a deprecation path for SQL
Server. For more information, see Extended Stored Procedures.
sp_attach_db , sp_attach_single_file_db , and sp_detach_db aren't supported. See sp_attach_db,
sp_attach_single_file_db, and sp_detach_db.
System functions and variables
The following variables, functions, and views return different results:
SERVERPROPERTY('EngineEdition') returns the value 8. This property uniquely identifies a SQL Managed
Instance. See SERVERPROPERTY.
SERVERPROPERTY('InstanceName') returns NULL because the concept of instance as it exists for SQL Server
doesn't apply to SQL Managed Instance. See SERVERPROPERTY('InstanceName').
@@SERVERNAME returns a full DNS "connectable" name, for example,
my-managed-instance.wcus17662feb9ce98.database.windows.net . See @@SERVERNAME.
SYS.SERVERS returns a full DNS "connectable" name, such as myinstance.domain.database.windows.net for the
properties "name" and "data_source." See SYS.SERVERS.
@@SERVICENAME returns NULL because the concept of service as it exists for SQL Server doesn't apply to SQL
Managed Instance. See @@SERVICENAME.
SUSER_ID is supported. It returns NULL if the Azure AD login isn't in sys.syslogins . See SUSER_ID.
SUSER_SID isn't supported. The wrong data is returned, which is a temporary known issue. See SUSER_SID.
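As a quick check of the values listed above, the following query returns the engine edition, the instance name,
and the connectable server name; on a managed instance, EngineEdition is 8 and InstanceName is NULL:

SELECT SERVERPROPERTY('EngineEdition') AS EngineEdition,
       SERVERPROPERTY('InstanceName') AS InstanceName,
       @@SERVERNAME AS ServerName;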
Environment constraints
Subnet
You cannot place any other resources (for example virtual machines) in the subnet where you have deployed
your SQL Managed Instance. Deploy these resources using a different subnet.
Subnet must have sufficient number of available IP addresses. Minimum is to have at least 32 IP addresses in
the subnet.
The number of vCores and types of instances that you can deploy in a region have some constraints and
limits.
There is a networking configuration that must be applied on the subnet.
VNET
VNet can be deployed using Resource Model - Classic Model for VNet is not supported.
After a SQL Managed Instance is created, moving the SQL Managed Instance or VNet to another resource
group or subscription is not supported.
For SQL Managed Instances hosted in virtual clusters that are created before September 22, 2020, global
peering is not supported. You can connect to these resources via ExpressRoute or VNet-to-VNet through
VNet Gateways.
Failover groups
System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that
depend on objects from the system databases will be impossible on the secondary instance unless the objects
are manually created on the secondary.
TEMPDB
The maximum file size of the tempdb system database can't be greater than 24 GB per core on a General
Purpose tier. The maximum tempdb size on a Business Critical tier is limited by the SQL Managed Instance
storage size. Tempdb log file size is limited to 120 GB on General Purpose tier. Some queries might return an
error if they need more than 24 GB per core in tempdb or if they produce more than 120 GB of log data.
Tempdb is always split into 12 data files: 1 primary, also called master, data file and 11 non-primary data files.
The file structure cannot be changed and new files cannot be added to tempdb .
Memory-optimized tempdb metadata, a new SQL Server 2019 in-memory database feature, is not
supported.
Objects created in the model database cannot be auto-created in tempdb after a restart or a failover because
tempdb does not get its initial object list from the model database. You must create objects in tempdb
manually after each restart or a failover.
MSDB
The following schemas in the msdb system database in SQL Managed Instance must be owned by their
respective predefined roles:
General roles
TargetServersRole
Fixed database roles
SQLAgentUserRole
SQLAgentReaderRole
SQLAgentOperatorRole
DatabaseMail roles:
DatabaseMailUserRole
Integration services roles:
db_ssisadmin
db_ssisltduser
db_ssisoperator

IMPORTANT
Changing the predefined role names, schema names and schema owners by customers will impact the normal operation
of the service. Any changes made to these will be reverted back to the predefined values as soon as detected, or at the
next service update at the latest to ensure normal service operation.

Error logs
SQL Managed Instance places verbose information in error logs. There are many internal system events that are
logged in the error log. Use a custom procedure to read error logs that filters out some irrelevant entries. For
more information, see SQL Managed Instance – sp_readmierrorlog or the SQL Managed Instance
extension (preview) for Azure Data Studio.

Next steps
For more information about SQL Managed Instance, see What is SQL Managed Instance?
For a features and comparison list, see Azure SQL Managed Instance feature comparison.
For release updates, see What's new?.
For issues, workarounds, and resolutions, see Known issues.
For a quickstart that shows you how to create a new SQL Managed Instance, see Create a SQL Managed
Instance.
Transactional replication with Azure SQL Managed
Instance (Preview)
7/12/2022 • 7 minutes to read

APPLIES TO: Azure SQL Managed Instance


Transactional replication is a feature of Azure SQL Managed Instance and SQL Server that enables you to
replicate data from a table in Azure SQL Managed Instance or a SQL Server instance to tables placed on remote
databases. This feature allows you to synchronize multiple tables in different databases.
Transactional replication is currently in public preview for SQL Managed Instance.

Overview
You can use transactional replication to push changes made in an Azure SQL Managed Instance to:
A SQL Server database - on-premises or on Azure VM
A database in Azure SQL Database
An instance database in Azure SQL Managed Instance

NOTE
To use all the features of Azure SQL Managed Instance, you must be using the latest versions of SQL Server
Management Studio (SSMS) and SQL Server Data Tools (SSDT).

Components
The key components in transactional replication are the Publisher , Distributor , and Subscriber , as shown in
the following picture:
ROLE               AZURE SQL DATABASE    AZURE SQL MANAGED INSTANCE

Publisher          No                    Yes

Distributor        No                    Yes

Pull subscriber    No                    Yes

Push subscriber    Yes                   Yes

The Publisher publishes changes made on some tables (articles) by sending the updates to the Distributor. The
publisher can be an Azure SQL Managed Instance or a SQL Server instance.
The Distributor collects changes in the articles from a Publisher and distributes them to the Subscribers. The
Distributor can be either an Azure SQL Managed Instance or a SQL Server instance (any version, as long as it is
equal to or higher than the Publisher version).
The Subscriber receives changes made on the Publisher. A SQL Server instance and Azure SQL Managed
Instance can both be push and pull subscribers, though a pull subscription is not supported when the distributor
is an Azure SQL Managed Instance and the subscriber is not. A database in Azure SQL Database can only be a
push subscriber.
Azure SQL Managed Instance can support being a Subscriber from the following versions of SQL Server:
SQL Server 2016 and later
SQL Server 2014 RTM CU10 (12.0.4427.24) or SP1 CU3 (12.0.2556.4)
SQL Server 2012 SP2 CU8 (11.0.5634.1) or SP3 (11.0.6020.0) or SP4 (11.0.7001.0)

NOTE
For other versions of SQL Server that do not support publishing to objects in Azure, it is possible to utilize the
republishing data method to move data to newer versions of SQL Server.
Attempting to configure replication using an older version can result in error number MSSQL_REPL20084 (The
process could not connect to Subscriber.) and MSSQL_REPL40532 (Cannot open server <name> requested by
the login. The login failed.).

Types of replication
There are different types of replication:

REPLICATION               AZURE SQL DATABASE          AZURE SQL MANAGED INSTANCE

Standard Transactional    Yes (only as subscriber)    Yes

Snapshot                  Yes (only as subscriber)    Yes

Merge replication         No                          No

Peer-to-peer              No                          No

Bidirectional             No                          Yes

Updatable subscriptions   No                          No
Supportability Matrix
The transactional replication supportability matrix for Azure SQL Managed Instance is the same as the one for
SQL Server.

PUBLISHER                  DISTRIBUTOR                                              SUBSCRIBER

SQL Server 2019            SQL Server 2019                                          SQL Server 2019, 2017, 2016

SQL Server 2017            SQL Server 2019, 2017                                    SQL Server 2019, 2017, 2016, 2014

SQL Server 2016            SQL Server 2019, 2017, 2016                              SQL Server 2019, 2017, 2016, 2014, 2012

SQL Server 2014            SQL Server 2019, 2017, 2016, 2014                        SQL Server 2017, 2016, 2014, 2012, 2008 R2, 2008

SQL Server 2012            SQL Server 2019, 2017, 2016, 2014, 2012                  SQL Server 2016, 2014, 2012, 2008 R2, 2008

SQL Server 2008 R2, 2008   SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008   SQL Server 2014, 2012, 2008 R2, 2008

When to use
Transactional replication is useful in the following scenarios:
Publish changes made in one or more tables in a database and distribute them to one or many databases in
a SQL Server instance or Azure SQL Database that subscribed for the changes.
Keep several distributed databases in synchronized state.
Migrate databases from one SQL Server instance or Azure SQL Managed Instance to another database by
continuously publishing the changes.
Compare Data Sync with Transactional Replication
CATEGORY        DATA SYNC                                   TRANSACTIONAL REPLICATION

Advantages      - Active-active support                     - Lower latency
                - Bi-directional between on-premises        - Transactional consistency
                  and Azure SQL Database                    - Reuse existing topology after migration

Disadvantages   - No transactional consistency              - Can't publish from Azure SQL Database
                - Higher performance impact                 - High maintenance cost
Common configurations
In general, the publisher and the distributor must be either in the cloud or on-premises. The following
configurations are supported:
Publisher with local Distributor on SQL Managed Instance

Publisher and distributor are configured within a single SQL Managed Instance and distributing changes to
another SQL Managed Instance, SQL Database, or SQL Server instance.
Publisher with remote distributor on SQL Managed Instance
In this configuration, one managed instance publishes changes to a distributor placed on another SQL Managed
Instance that can serve many source SQL Managed Instances and distribute changes to one or many targets on
Azure SQL Database, Azure SQL Managed Instance, or SQL Server.

Publisher and distributor are configured on two managed instances. There are some constraints with this
configuration:
Both managed instances are on the same vNet.
Both managed instances are in the same location.
On-premises Publisher/Distributor with remote subscriber

In this configuration, a database in Azure SQL Database or Azure SQL Managed Instance is a subscriber. This
configuration supports migration from on-premises to Azure. If a subscriber is a database in Azure SQL
Database, it must be in push mode.

Requirements
Use SQL Authentication for connectivity between replication participants.
Use an Azure Storage Account file share for the working directory used by replication (a configuration sketch follows this list).
Open TCP outbound port 445 in the subnet security rules to access the Azure file share.
Open TCP outbound port 1433 when the SQL Managed Instance is the publisher/distributor and the
subscriber is not. You may also need to change the SQL Managed Instance NSG outbound security rule
allow_linkedserver_outbound for port 1433 so that the Destination Service Tag is changed from virtualnetwork to
internet.
Place both the publisher and distributor in the cloud, or both on-premises.
Configure VPN peering between the virtual networks of replication participants if the virtual networks are
different.
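
The following is a minimal sketch of how a distributor on SQL Managed Instance might point its replication
working directory at an Azure Storage file share. Every name, password, and path below is a placeholder, and
your topology may need additional parameters (for example, credentials for the file share); see the
configuration tutorials in the next steps for the complete procedure.

-- Hedged sketch only: configure a distributor on SQL Managed Instance and point the replication
-- working directory at an Azure Storage file share (placeholder names throughout).
USE master;
GO
EXEC sp_adddistributor
    @distributor = N'<managed instance FQDN>',
    @password = N'<distributor admin password>';
GO
EXEC sp_adddistributiondb @database = N'distribution';
GO
EXEC sp_adddistpublisher
    @publisher = N'<managed instance FQDN>',
    @distribution_db = N'distribution',
    @security_mode = 0,    -- SQL Authentication, per the first requirement above
    @login = N'<SQL login>',
    @password = N'<SQL login password>',
    @working_directory = N'\\<storage account>.file.core.windows.net\<file share>';
GO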

NOTE
If the distributor is an Azure SQL Managed Instance database and the subscriber is on-premises, you may encounter
error 53 when connecting to an Azure Storage file share if outbound network security group (NSG) port 445 is blocked.
Update the vNet NSG to resolve this issue.

With failover groups


If a publisher or distributor SQL Managed Instance is in a failover group, the SQL Managed Instance
administrator must clean up all publications on the old primary and reconfigure them on the new primary after
a failover occurs. The following activities are needed in this scenario:
1. Stop all replication jobs running on the database, if there are any.
2. Drop subscription metadata from publisher by running the following script on publisher database:

EXEC sp_dropsubscription @publication = '<name of publication>', @article = 'all', @subscriber = '<name of subscriber>';

3. Drop subscription metadata from the subscriber. Run the following script on the subscription database on
subscriber SQL Managed Instance:

EXEC sp_subscription_cleanup
@publisher = N'<full DNS of publisher, e.g. example.ac2d23028af5.database.windows.net>',
@publisher_db = N'<publisher database>',
@publication = N'<name of publication>';

4. Forcefully drop all replication objects from publisher by running the following script in the published
database:

EXEC sp_removedbreplication

5. Forcefully drop old distributor from original primary SQL Managed Instance (if failing back over to an old
primary that used to have a distributor). Run the following script on the master database in old
distributor SQL Managed Instance:

EXEC sp_dropdistributor 1,1

If a subscriber SQL Managed Instance is in a failover group, the publication should be configured to connect to
the failover group listener endpoint for the subscriber managed instance. In the event of a failover, subsequent
action by the managed instance administrator depends on the type of failover that occurred:
For a failover with no data loss, replication will continue working after failover.
For a failover with data loss, replication will also continue working; the lost changes will be replicated again.
For a failover with data loss, but the data loss is outside of the distribution database retention period, the SQL
Managed Instance administrator will need to reinitialize the subscription database.

Next steps
For more information about configuring transactional replication, see the following tutorials:
Configure replication between a SQL Managed Instance publisher and subscriber
Configure replication between a SQL Managed Instance publisher, SQL Managed Instance distributor, and
SQL Server subscriber
Create a publication.
Create a push subscription by using the server name as the subscriber (for example,
N'azuresqldbdns.database.windows.net') and the name of the database in Azure SQL Database as the destination
database (for example, Adventureworks).
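
As a minimal sketch of that last step (all names below are placeholders, and the full parameter set is covered
in the linked tutorials), the push subscription might be added on the publisher as follows:

-- Hedged sketch only: add a push subscription whose target is a database in Azure SQL Database.
EXEC sp_addsubscription
    @publication = N'<name of publication>',
    @subscriber = N'azuresqldbdns.database.windows.net',   -- logical server name in Azure SQL Database
    @destination_db = N'Adventureworks',
    @subscription_type = N'Push';
GO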

See also
Replication with a SQL Managed Instance and a failover group
Replication to SQL Database
Replication to managed instance
Create a Publication
Create a Push Subscription
Types of Replication
Monitoring (Replication)
Initialize a Subscription
Link feature for Azure SQL Managed Instance
(preview)
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


The new link feature in Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL
Managed Instance, providing hybrid flexibility and database mobility. With an approach that uses near real-time
data replication to the cloud, you can offload workloads to a read-only secondary in Azure to take advantage of
Azure-only features, performance, and scale.
After a disastrous event, you can continue running your read-only workloads on SQL Managed Instance in
Azure. You can also choose to migrate one or more applications from SQL Server to SQL Managed Instance at
the same time, at your own pace, and with the minimum possible downtime compared to other solutions in
Azure today.
If you have product improvement suggestions, comments, or you want to report issues, the best way to contact
our team is through SQL Managed Instance link user feedback.

Requirements
To use the link feature, you'll need a supported version of SQL Server. The following table lists the supported
versions.

SQL SERVER VERSION         EDITIONS                              HOST OS           SERVICING UPDATE REQUIREMENT

SQL Server 2022 (16.x)     Evaluation Edition                    Windows Server    Must sign up at https://aka.ms/mi-link-2022-signup
Preview                                                                            to participate in the preview experience.

SQL Server 2019 (15.x)     Enterprise or Developer               Windows Server    SQL Server 2019 CU15 (KB5008996), or above

SQL Server 2016 (13.x)     Enterprise, Standard, or Developer    Windows Server    SQL Server 2016 SP3 (KB 5003279) and SQL Server
                                                                                   2016 Azure Connect pack (KB 5014242)

In addition to a supported version, you'll need:
Network connectivity between your SQL Server and managed instance. If your SQL Server is running
on-premises, use a VPN link or ExpressRoute. If your SQL Server is running on an Azure VM, either
deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two
separate subnets.
Azure SQL Managed Instance provisioned on any service tier.
You'll also need the following tooling:
TOOL                       NOTES

SSMS 18.12, or higher      SQL Server Management Studio (SSMS) is the easiest way to use SQL Managed Instance link.
                           It provides graphical wizards for automated link setup and failover for SQL Server 2016,
                           2019, and 2022.

Az.SQL 3.9.0, or higher    The PowerShell module is required for manual configuration steps.

NOTE
SQL Managed Instance link feature is available in all public Azure regions.
National clouds are currently not supported.

Overview
The underlying technology of near real-time data replication between SQL Server and SQL Managed Instance is
based on distributed availability groups, part of the well-known and proven Always On availability group
technology stack. Extend your SQL Server on-premises availability group to SQL Managed Instance in Azure in a
safe and secure manner.
There's no need to have an existing availability group or multiple nodes. The link supports single node SQL
Server instances without existing availability groups, and also multiple-node SQL Server instances with existing
availability groups. Through the link, you can use the modern benefits of Azure without migrating your entire
SQL Server data estate to the cloud.
You can keep running the link for as long as you need it, for months and even years at a time. And for your
modernization journey, if or when you're ready to migrate to Azure, the link enables a considerably improved
migration experience with the minimum possible downtime compared to all other options available today,
providing a true online migration to SQL Managed Instance.

Supported scenarios
Data replicated through the link feature from SQL Server to Azure SQL Managed Instance can be used with
several scenarios, such as:
Use Azure ser vices without migrating to the cloud
Offload read-only workloads to Azure
Migrate to Azure
Use Azure services
Use the link feature to leverage Azure services using SQL Server data without migrating to the cloud. Examples
include reporting, analytics, backups, machine learning, and other jobs that send data to Azure.
Offload workloads to Azure
You can also use the link feature to offload workloads to Azure. For example, an application could use SQL
Server for read-write workloads, while offloading read-only workloads to SQL Managed Instance in any Azure
region worldwide. Once the link is established, the primary database on SQL Server is read/write accessible,
while replicated data to SQL Managed Instance in Azure is read-only accessible. This allows for various scenarios
where replicated databases on SQL Managed Instance can be used for read scale-out and offloading read-only
workloads to Azure. SQL Managed Instance, in parallel, can also host independent read/write databases. This
allows for copying the replicated database to another read/write database on the same managed instance for
further data processing.
The link is database scoped (one link per one database), allowing for consolidation and deconsolidation of
workloads in Azure. For example, you can replicate databases from multiple SQL Servers to a single SQL
Managed Instance in Azure (consolidation), or replicate databases from a single SQL Server to multiple
managed instances via a 1 to 1 relationship between a database and a managed instance - to any of Azure's
regions worldwide (deconsolidation). The latter provides you with an efficient way to quickly bring your
workloads closer to your customers in any region worldwide, which you can use as read-only replicas.
Migrate to Azure
The link feature also facilitates migrating from SQL Server to SQL Managed Instance, enabling:
The most performant minimum downtime migration compared to all other solutions available today
True online migration to SQL Managed Instance in any service tier
Since the link feature enables minimum downtime migration, you can migrate to your managed instance while
maintaining your primary workload online. While online migration was possible to achieve previously with
other solutions when migrating to the General Purpose service tier, the link feature now also allows for true
online migrations to the Business Critical service tier as well.

How it works
The underlying technology behind the link feature for SQL Managed Instance is distributed availability groups.
The solution supports single-node systems without existing availability groups, or multiple node systems with
existing availability groups.
Secure connectivity, such as VPN or Express Route is used between an on-premises network and Azure. If SQL
Server is hosted on an Azure VM, the internal Azure backbone can be used between the VM and managed
instance – such as, for example, global VNet peering. The trust between the two systems is established using
certificate-based authentication, in which SQL Server and SQL Managed Instance exchange their public keys.
Up to 100 links can be established from the same or different SQL Server sources to a single SQL Managed
Instance. This limit is governed by the number of databases that can be hosted on a managed instance at this
time. Likewise, a single SQL Server instance can establish multiple parallel database replication links with several
managed instances in different Azure regions, in a 1-to-1 relationship between a database and a managed
instance. The feature requires CU13 or higher to be installed on SQL Server 2019.
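
As a minimal sketch of two SQL Server-side preparation steps implied by this design (the password is a
placeholder, and the full procedure, including the certificate exchange itself, is covered in the prepare-environment
guide referenced below), you can check that Always On is enabled and create the database master key in master
that protects the certificates used for trust:

-- Hedged sketch only (run on the SQL Server instance):
-- 1) Verify that the Always On availability groups feature is enabled (1 = enabled).
SELECT SERVERPROPERTY('IsHadrEnabled') AS is_hadr_enabled;

-- 2) Create a database master key in master; it protects the certificates used to establish
--    trust between SQL Server and SQL Managed Instance.
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
GO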

Use the link feature


To help you set up the initial environment, we've prepared the following online guide on how to prepare your SQL
Server environment to use with the link feature for SQL Managed Instance:
Prepare environment for the link
Once you've ensured the prerequisites have been met, you can create the link by using the automated wizard
in SSMS, or you can choose to set up the link manually with scripts. Create the link by using one of the following
instructions:
Replicate database with link feature in SSMS, or alternatively
Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
Once the link has been created, follow the best practices for maintaining the link described on this page:
Best practices with link feature for Azure SQL Managed Instance
If and when you're ready to migrate a database to Azure with minimum downtime, you can do so by using the
automated wizard in SSMS, or manually with scripts. Migrate the database to Azure by using one of the following
instructions:
Failover database with link feature in SSMS, or alternatively
Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts

Limitations
This section describes the product’s functional limitations.
General functional limitations
The Managed Instance link has a set of general limitations, listed in this section. These limitations are
of a technical nature and are unlikely to be addressed in the foreseeable future.
Only user databases can be replicated. Replication of system databases isn't supported.
The solution doesn't replicate server-level objects, agent jobs, or user logins from SQL Server to SQL
Managed Instance.
Only one database can be placed in a single availability group per distributed availability group link.
The link can't be established between SQL Server and SQL Managed Instance if functionality used on SQL Server
isn't supported on SQL Managed Instance.
    File tables and file streams aren't supported for replication, as SQL Managed Instance doesn't support
    them.
    Replicating databases that use In-Memory OLTP (Hekaton) isn't supported on the SQL Managed Instance
    General Purpose service tier. In-Memory OLTP is supported only on the Business Critical service tier.
    For the full list of differences between SQL Server and SQL Managed Instance, see this article.
If change data capture (CDC), log shipping, or Service Broker is used with databases replicated from SQL
Server, when the database is migrated to SQL Managed Instance during failover to Azure, clients will need to
connect using the instance name of the current global primary replica. These settings should be manually
reconfigured.
If transactional replication is used with a database on SQL Server in a migration scenario, transactional
replication on SQL Managed Instance will fail during failover to Azure and should be manually reconfigured.
If distributed transactions are used with a database replicated from SQL Server, in a migration scenario the
DTC capabilities won't be transferred on cutover to the cloud. The migrated database won't be able to
participate in distributed transactions with SQL Server, as SQL Managed Instance doesn't support distributed
transactions with SQL Server at this time. SQL Managed Instance today supports distributed transactions only
between other managed instances; see this article.
Managed Instance link can replicate a database of any size, provided it fits within the chosen storage size of
the target SQL Managed Instance.
Windows 10 and Windows 11 client operating systems can't be used to host your SQL Server, because Always On,
which is required for the link, can't be enabled on them. SQL Server must be hosted on Windows Server 2012 or
higher.
SQL Server 2008, 2012, and 2014 aren't supported for the link feature, because the SQL engines of those releases
don't have built-in support for Always On, which is required for the link. Upgrading to a newer version of SQL
Server is required to use the link.
Preview limitations
Some Managed Instance link features and capabilities are limited at this time. Details can be found in the
following list:
Product version requirements apply as listed in Requirements. At this time, SQL Server 2017 (14.x) is not
supported.
Private endpoint (VPN/VNET) is supported to establish the link with SQL Managed Instance. Public endpoint
can't be used to establish the link with SQL Managed Instance.
Managed Instance link authentication between SQL Server instance and SQL Managed Instance is certificate-
based, available only through exchange of certificates. Windows authentication between SQL Server and
managed instance isn't supported.
Replication of user databases from SQL Server to SQL Managed Instance is one-way. User databases from
SQL Managed Instance can't be replicated back to SQL Server.
Auto failover groups replication to secondary SQL Managed Instance can't be used in parallel while
operating the Managed Instance link with SQL Server.
Replicated R/O databases aren't part of auto-backup process on SQL Managed Instance.

Next steps
If you're interested in using the link feature for Azure SQL Managed Instance with versions and editions that
aren't currently supported, sign up here.
For more information on the link feature, see the following:
Managed Instance link – connecting SQL Server to Azure reimagined.
Prepare for SQL Managed Instance link.
Use SQL Managed Instance link via SSMS to replicate database.
Use SQL Managed Instance link via SSMS to migrate database.
For other replication scenarios, consider:
Transactional replication with Azure SQL Managed Instance (Preview)
What is an Azure SQL Managed Instance pool
(preview)?
7/12/2022 • 9 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Instance pools in Azure SQL Managed Instance provide a convenient and cost-efficient way to migrate smaller
SQL Server instances to the cloud at scale.
Instance pools allow you to pre-provision compute resources according to your total migration requirements.
You can then deploy several individual managed instances up to your pre-provisioned compute level. For
example, if you pre-provision 8 vCores you can deploy two 2-vCore and one 4-vCore instance, and then migrate
databases into these instances. Prior to instance pools being available, smaller and less compute-intensive
workloads would often have to be consolidated into a larger managed instance when migrating to the cloud.
The need to migrate groups of databases to a large instance typically required careful capacity planning and
resource governance, additional security considerations, and some extra data consolidation work at the instance
level.
Additionally, instance pools support native VNet integration so you can deploy multiple instance pools and
multiple single instances in the same subnet.

Key capabilities
Instance pools provide the following benefits:
1. Ability to host 2-vCore instances (available only for instances in instance pools).
2. Predictable and fast instance deployment time (up to 5 minutes).
3. Minimal IP address allocation.
The following diagram illustrates an instance pool with multiple managed instances deployed within a virtual
network subnet.
Instance pools enable deployment of multiple instances on the same virtual machine, where the virtual
machine's compute size is based on the total number of vCores allocated for the pool. This architecture allows
partitioning of the virtual machine into multiple instances, which can be any supported size, including 2 vCores
(2-vCore instances are only available for instances in pools).
After initial deployment, management operations on instances in a pool are much faster. This is because the
deployment or extension of a virtual cluster (dedicated set of virtual machines) is not part of provisioning the
managed instance.
Because all instances in a pool share the same virtual machine, the total IP allocation does not depend on the
number of instances deployed, which is convenient for deployment in subnets with a narrow IP range.
Each pool has a fixed IP allocation of only nine IP addresses (not including the five IP addresses in the subnet
that are reserved for its own needs). For details, see the subnet size requirements for single instances.

Application scenarios
The following list provides the main use cases where instance pools should be considered:
Migration of a group of SQL Server instances at the same time, where the majority is a smaller size (for
example 2 or 4 vCores).
Scenarios where predictable and short instance creation or scaling is important. For example, deployment of
a new tenant in a multi-tenant SaaS application environment that requires instance-level capabilities.
Scenarios where having a fixed cost or spending limit is important. For example, running shared dev-test or
demo environments of a fixed (or infrequently changing) size, where you periodically deploy managed
instances when needed.
Scenarios where minimal IP address allocation in a VNet subnet is important. All instances in a pool are
sharing a virtual machine, so the number of allocated IP addresses is lower than in the case of single
instances.

Architecture
Instance pools have a similar architecture to regular (single) managed instances. To support deployments within
Azure virtual networks and to provide isolation and security for customers, instance pools also rely on virtual
clusters. Virtual clusters represent a dedicated set of isolated virtual machines deployed inside the customer's
virtual network subnet.
The main difference between the two deployment models is that instance pools allow multiple SQL Server
process deployments on the same virtual machine node, which are resource governed using Windows job
objects, while single instances are always alone on a virtual machine node.
The following diagram shows an instance pool and two individual instances deployed in the same subnet and
illustrates the main architectural details for both deployment models:

Every instance pool creates a separate virtual cluster underneath. Instances within a pool and single instances
deployed in the same subnet do not share compute resources allocated to SQL Server processes and gateway
components, which ensures performance predictability.

Resource limitations
There are several resource limitations regarding instance pools and instances inside pools:
Instance pools are available only on Gen5 hardware.
Managed instances within a pool have dedicated CPU and RAM, so the aggregated number of vCores across
all instances must be less than or equal to the number of vCores allocated to the pool.
All instance-level limits apply to instances created within a pool.
In addition to instance-level limits, there are also two limits imposed at the instance pool level:
Total storage size per pool (8 TB).
Total number of user databases per pool. This limit depends on the pool vCores value:
8 vCores pool supports up to 200 databases,
16 vCores pool supports up to 400 databases,
24 and larger vCores pool supports up to 500 databases.
Azure AD authentication can be used after creating or setting a managed instance with the -AssignIdentity
flag. For more information, see New-AzSqlInstance and Set-AzSqlInstance. Users can then set an Azure AD
admin for the instance by following Provision Azure AD admin (SQL Managed Instance).
Total storage allocation and number of databases across all instances must be lower than or equal to the limits
exposed by instance pools.
Instance pools support 8, 16, 24, 32, 40, 64, and 80 vCores.
Managed instances inside pools support 2, 4, 8, 16, 24, 32, 40, 64, and 80 vCores.
Managed instances inside pools support storage sizes between 32 GB and 8 TB, except:
2 vCore instances support sizes between 32 GB and 640 GB,
4 vCore instances support sizes between 32 GB and 2 TB.
Managed instances inside pools have a limit of up to 100 user databases per instance, except 2-vCore instances,
which support up to 50 user databases per instance.
The service tier property is associated with the instance pool resource, so all instances in a pool must have the
same service tier as the pool. At this time, only the General Purpose service tier is available
(see the following section on limitations in the current preview).
Public preview limitations
The public preview has the following limitations:
Currently, only the General Purpose service tier is available.
Instance pools cannot be scaled during the public preview, so careful capacity planning before deployment is
important.
Azure portal support for instance pool creation and configuration is not yet available. All operations on
instance pools are supported through PowerShell only. Initial instance deployment in a pre-created pool is
also supported through PowerShell only. Once deployed into a pool, managed instances can be updated
using the Azure portal.
Managed instances created outside of the pool cannot be moved into an existing pool, and instances created
inside a pool cannot be moved outside as a single instance or to another pool.
Reserve capacity instance pricing is not available.
Failover groups are not supported for instances in the pool.

SQL features supported


Managed instances created in pools support the same compatibility levels and features supported in single
managed instances.
Every managed instance deployed in a pool has a separate instance of SQL Agent.
Optional features or features that require you to choose specific values (such as instance-level collation, time
zone, public endpoint for data traffic, failover groups) are configured at the instance level and can be different
for each instance in a pool.

Performance considerations
Although managed instances within pools do have dedicated vCore and RAM, they share local disk (for tempdb
usage) and network resources. It's not likely, but it is possible to experience the noisy neighbor effect if multiple
instances in the pool have high resource consumption at the same time. If you observe this behavior, consider
deploying these instances to a bigger pool or as single instances.

Security considerations
Because instances deployed in a pool share the same virtual machine, you may want to consider disabling
features that introduce higher security risks, or to firmly control access permissions to these features. For
example, CLR integration, native backup and restore, Database Mail, etc.

Instance pool support requests


Create and manage support requests for instance pools in the Azure portal.
If you are experiencing issues related to instance pool deployment (creation or deletion), make sure that you
specify Instance Pools in the Problem subtype field.

If you are experiencing issues related to a single managed instance or database within a pool, you should create
a regular support ticket for Azure SQL Managed Instance.
To create larger SQL Managed Instance deployments (with or without instance pools), you may need to obtain a
larger regional quota. For more information, see Request quota increases for Azure SQL Database. The
deployment logic for instance pools compares total vCore consumption at the pool level against your quota to
determine whether you are allowed to create new resources without further increasing your quota.

Instance pool billing


Instance pools allow scaling compute and storage independently. Customers pay for compute associated with
the pool resource measured in vCores, and storage associated with every instance measured in gigabytes (the
first 32 GB are free of charge for every instance).
vCore price for a pool is charged regardless of how many instances are deployed in that pool.
For the compute price (measured in vCores), two pricing options are available:
1. License included: Price of SQL Server licenses is included. This is for the customers who choose not to apply
existing SQL Server licenses with Software Assurance.
2. Azure Hybrid Benefit: A reduced price that includes Azure Hybrid Benefit for SQL Server. Customers can opt
into this price by using their existing SQL Server licenses with Software Assurance. For eligibility and other
details, see Azure Hybrid Benefit.
Setting different pricing options is not possible for individual instances in a pool. All instances in the parent pool
must be either at License Included price or Azure Hybrid Benefit price. The license model for the pool can be
altered after the pool is created.
IMPORTANT
If you specify a license model for the instance that is different than in the pool, the pool price is used and the instance
level value is ignored.

If you create instance pools on subscriptions eligible for dev-test benefit, you automatically receive discounted
rates of up to 55 percent on Azure SQL Managed Instance.
For full details on instance pool pricing, refer to the instance pools section on the SQL Managed Instance pricing
page.

Next steps
To get started with instance pools, see SQL Managed Instance pools how-to guide.
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see Azure SQL common features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
For advanced monitoring of SQL Managed Instance database performance with built-in troubleshooting
intelligence, see Monitor Azure SQL Managed Instance using Azure SQL Analytics.
For pricing information, see SQL Managed Instance pricing.
Data virtualization with Azure SQL Managed
Instance (Preview)
7/12/2022 • 10 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Data virtualization with Azure SQL Managed Instance allows you to execute Transact-SQL (T-SQL) queries
against data from files stored in Azure Data Lake Storage Gen2 or Azure Blob Storage, and combine it with
locally stored relational data using joins. This way you can transparently access external data while keeping it in
its original format and location - also known as data virtualization.
Data virtualization is currently in preview for Azure SQL Managed Instance.

Overview
Data virtualization provides two ways of querying external files stored in Azure Data Lake Storage or Azure Blob
Storage, intended for different scenarios:
OPENROWSET syntax – optimized for ad-hoc querying of files. Typically used to quickly explore the content
and the structure of a new set of files.
External tables – optimized for repetitive querying of files using identical syntax as if data were stored locally
in the database. External tables require several preparation steps compared to the OPENROWSET syntax, but
allow for more control over data access. External tables are typically used for analytical workloads and
reporting.
Parquet and delimited text (CSV) file formats are directly supported. The JSON file format is indirectly supported
by specifying the CSV file format, where queries return every document as a separate row. You can parse the
rows further by using JSON_VALUE and OPENJSON, as shown in the sketch below.
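
A minimal sketch of that pattern follows. It assumes a file of line-delimited JSON documents, a hypothetical data
source and file name, and that the CSV reader accepts the FIELDTERMINATOR and FIELDQUOTE options used here
(character 0x0b is chosen so that each whole line is returned as a single column); adjust to your environment.

-- Hedged sketch only: read line-delimited JSON documents as CSV rows (one document per row),
-- then extract values with JSON_VALUE. Data source and file name are placeholders.
SELECT
    JSON_VALUE(filerows.jsonContent, '$.id')   AS [id],
    JSON_VALUE(filerows.jsonContent, '$.name') AS [name]
FROM OPENROWSET(
    BULK 'data/documents.json',
    DATA_SOURCE = 'DemoPublicExternalDataSource',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b',   -- a value that does not occur in the data,
    FIELDQUOTE = '0x0b'         -- so each line arrives as one NVARCHAR(MAX) column
) WITH (jsonContent NVARCHAR(MAX)) AS filerows;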

Getting started
Use Transact-SQL (T-SQL) to explicitly enable the data virtualization feature before using it.
To enable data virtualization capabilities, run the following command:

exec sp_configure 'polybase_enabled', 1;
go
reconfigure;
go

Provide the location of the file(s) you intend to query using the location prefix corresponding to the type of
external source and endpoint/protocol, such as the following examples:

--Blob Storage endpoint
abs://<container>@<storage_account>.blob.core.windows.net/<path>/<file_name>.parquet

--Data Lake endpoint
adls://<container>@<storage_account>.dfs.core.windows.net/<path>/<file_name>.parquet
IMPORTANT
Using the generic https:// prefix is discouraged and will be disabled in the future. Be sure to use endpoint-specific
prefixes to avoid interruptions.

If you're new to data virtualization and want to quickly test functionality, start by querying publicly available
data sets available in Azure Open Datasets, like the Bing COVID-19 dataset allowing anonymous access.
Use the following endpoints to query the Bing COVID-19 data sets:
Parquet:
abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-
19_data/latest/bing_covid-19_data.parquet
CSV:
abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-
19_data/latest/bing_covid-19_data.csv

Once your public data set queries are executing successfully, consider switching to private data sets that require
configuring specific rights and/or firewall rules.
To access a private location, use a Shared Access Signature (SAS) with proper access permissions and validity
period to authenticate to the storage account. Create a database-scoped credential using the SAS key, rather
than providing it directly in each query. The credential is then used as a parameter to access the external data
source.

External data source


External data sources are abstractions intended to make it easier to manage file locations across multiple
queries, and to reference authentication parameters that are encapsulated within database-scoped credentials.
When accessing a public location, add the file location when querying the external data source:

-- Don't forget to enable data virtualization capabilities first, if this is the first time you are running this type of query
CREATE EXTERNAL DATA SOURCE DemoPublicExternalDataSource
WITH (
    LOCATION = 'abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest'
    -- LOCATION = 'abs://<container>@<storage_account>.blob.core.windows.net/<path>'
)

When accessing a private location, include the file path and credential when querying the external data source:
--Don't forget to enable data virtualization capabilities first, if this is the first time you are running this type of query

-- Step0 (optional): Create master key if it doesn't exist in the database:
-- CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<Put Some Very Strong Password Here>'
-- GO

--Step1: Create database-scoped credential (requires database master key to exist):
CREATE DATABASE SCOPED CREDENTIAL [DemoCredential]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your SAS key without leading "?" mark>';
GO

--Step2: Create external data source pointing to the file path, and referencing database-scoped credential:
CREATE EXTERNAL DATA SOURCE DemoPrivateExternalDataSource
WITH (
    LOCATION = 'abs://<container>@<storage_account>.blob.core.windows.net/<path>',
    CREDENTIAL = [DemoCredential]
)

Query data sources using OPENROWSET


The OPENROWSET syntax enables instant ad-hoc querying while only creating the minimal number of database
objects necessary. OPENROWSET only requires creating the external data source (and possibly the credential) as
opposed to the external table approach which requires an external file format and the external table itself.
The DATA_SOURCE parameter value is automatically prepended to the BULK parameter to form the full path to the
file.
When using OPENROWSET provide the format of the file, such as the following example, which queries a single file:

SELECT TOP 10 *
FROM OPENROWSET(
BULK 'bing_covid-19_data.parquet',
DATA_SOURCE = 'DemoPublicExternalDataSource',
FORMAT = 'parquet'
) AS filerows

Querying multiple files and folders


The OPENROWSET command also allows querying multiple files or folders by using wildcards in the BULK path.
The following example uses the NYC yellow taxi trip records open data set:

--Query all files with .parquet extension in folders matching name pattern:
SELECT TOP 10 *
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',--You need to create the data source first
FORMAT = 'parquet'
) AS filerows

When querying multiple files or folders, all files accessed with the single OPENROWSET must have the same
structure (such as the same number of columns and data types). Folders can't be traversed recursively.
Schema inference
Automatic schema inference helps you quickly write queries and explore data when you don't know file
schemas. Schema inference only works with parquet format files.
While convenient, the cost is that inferred data types may be larger than the actual data types. This can lead to
poor query performance since there may not be enough information in the source files to ensure the
appropriate data type is used. For example, parquet files don't contain metadata about maximum character
column length, so the instance infers it as varchar(8000).
Use the sp_describe_first_result_set stored procedure to check the resulting data types of your query, such as
the following example:

EXEC sp_describe_first_result_set N'
SELECT
    vendor_id, pickup_datetime, passenger_count
FROM
    OPENROWSET(
        BULK ''taxi/*/*/*'',
        DATA_SOURCE = ''NYCTaxiDemoDataSource'',
        FORMAT=''parquet''
    ) AS nyc';

Once you know the data types, you can then specify them using the WITH clause to improve performance:

SELECT TOP 100
    vendor_id, pickup_datetime, passenger_count
FROM
    OPENROWSET(
        BULK 'taxi/*/*/*',
        DATA_SOURCE = 'NYCTaxiDemoDataSource',
        FORMAT='PARQUET'
    )
WITH (
    vendor_id varchar(4), -- we're using length of 4 instead of the inferred 8000
    pickup_datetime datetime2,
    passenger_count int
) AS nyc;

Since the schema of CSV files can't be automatically determined, explicitly specify columns using the WITH
clause:

SELECT TOP 10 *
FROM OPENROWSET(
BULK 'population/population.csv',
DATA_SOURCE = 'PopulationDemoDataSourceCSV',
FORMAT = 'CSV')
WITH (
[country_code] VARCHAR (5) COLLATE Latin1_General_BIN2,
[country_name] VARCHAR (100) COLLATE Latin1_General_BIN2,
[year] smallint,
[population] bigint
) AS filerows

File metadata functions


When querying multiple files or folders, you can use Filepath and Filename functions to read file metadata
and get part of the path or full path and name of the file that the row in the result set originates from:
--Query all files and project file path and file name information for each row:
SELECT TOP 10 filerows.filepath(1) as [Year_Folder], filerows.filepath(2) as [Month_Folder],
filerows.filename() as [File_name], filerows.filepath() as [Full_Path], *
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',
FORMAT = 'parquet') AS filerows
--List all paths:
SELECT DISTINCT filerows.filepath(1) as [Year_Folder], filerows.filepath(2) as [Month_Folder]
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',
FORMAT = 'parquet') AS filerows

When called without a parameter, the Filepath function returns the file path that the row originates from.
When DATA_SOURCE is used in OPENROWSET , it returns the path relative to the DATA_SOURCE , otherwise it returns
full file path.
When called with a parameter, it returns part of the path that matches the wildcard on the position specified in
the parameter. For example, parameter value 1 would return part of the path that matches the first wildcard.
The Filepath function can also be used for filtering and aggregating rows:

SELECT
r.filepath() AS filepath
,r.filepath(1) AS [year]
,r.filepath(2) AS [month]
,COUNT_BIG(*) AS [rows]
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',
FORMAT = 'parquet'
) AS r
WHERE
r.filepath(1) IN ('2017')
AND r.filepath(2) IN ('10', '11', '12')
GROUP BY
r.filepath()
,r.filepath(1)
,r.filepath(2)
ORDER BY
filepath;

Creating view on top of OPENROWSET


You can create and use views to wrap OPENROWSET queries so that you can easily reuse the underlying query:

CREATE VIEW TaxiRides AS
SELECT *
FROM OPENROWSET(
    BULK 'taxi/year=*/month=*/*.parquet',
    DATA_SOURCE = 'NYCTaxiDemoDataSource',
    FORMAT = 'parquet'
) AS filerows

It's also convenient to add columns with the file location data to a view using the Filepath function for easier
and more performant filtering. Using views can reduce the number of files and the amount of data the query on
top of the view needs to read and process when filtered by any of those columns:
CREATE VIEW TaxiRides AS
SELECT *
,filerows.filepath(1) AS [year]
,filerows.filepath(2) AS [month]
FROM OPENROWSET(
BULK 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = 'NYCTaxiDemoDataSource',
FORMAT = 'parquet'
) AS filerows

Views also enable reporting and analytic tools like Power BI to consume results of OPENROWSET .

External tables
External tables encapsulate access to files making the querying experience almost identical to querying local
relational data stored in user tables. Creating an external table requires the external data source and external file
format objects to exist:

--Create external file format
CREATE EXTERNAL FILE FORMAT DemoFileFormat
WITH (
    FORMAT_TYPE=PARQUET
)
GO

--Create external table:
CREATE EXTERNAL TABLE tbl_TaxiRides(
vendor_id VARCHAR(100) COLLATE Latin1_General_BIN2,
pickup_datetime DATETIME2,
dropoff_datetime DATETIME2,
passenger_count INT,
trip_distance FLOAT,
fare_amount FLOAT,
extra FLOAT,
mta_tax FLOAT,
tip_amount FLOAT,
tolls_amount FLOAT,
improvement_surcharge FLOAT,
total_amount FLOAT
)
WITH (
LOCATION = 'taxi/year=*/month=*/*.parquet',
DATA_SOURCE = DemoDataSource,
FILE_FORMAT = DemoFileFormat
);
GO

Once the external table is created, you can query it just like any other table:

SELECT TOP 10 *
FROM tbl_TaxiRides

Just like OPENROWSET , external tables allow querying multiple files and folders by using wildcards. Schema
inference and filepath/filename functions aren't supported with external tables.

Performance considerations
There's no hard limit in terms of number of files or amount of data that can be queried, but query performance
depends on the amount of data, data format, and complexity of queries and joins.
Collecting statistics on your external data is one of the most important things you can do for query optimization.
The more the instance knows about your data, the faster it can execute queries. The SQL engine query optimizer
is a cost-based optimizer. It compares the cost of various query plans, and then chooses the plan with the lowest
cost. In most cases, it chooses the plan that will execute the fastest.
Automatic creation of statistics
Managed Instance analyzes incoming user queries for missing statistics. If statistics are missing, the query
optimizer automatically creates statistics on individual columns in the query predicate or join condition to
improve cardinality estimates for the query plan. Automatic creation of statistics is done synchronously so you
may incur slightly degraded query performance if your columns are missing statistics. The time to create
statistics for a single column depends on the size of the files targeted.
OPENROWSET manual statistics
Single-column statistics for the OPENROWSET path can be created using the sp_create_openrowset_statistics
stored procedure, by passing the select query with a single column as a parameter:

EXEC sys.sp_create_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
    BULK ''abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/*.parquet'',
    FORMAT = ''parquet'') AS filerows
'

By default, the instance uses 100% of the data provided in the dataset to create statistics. You can optionally
specify the sample size as a percentage using the TABLESAMPLE options. To create single-column statistics for
multiple columns, execute the stored procedure for each of the columns. You can't create multi-column statistics
for the OPENROWSET path.
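
As a hedged sketch of the sampling option (assuming the TABLESAMPLE clause is accepted inside the query text
passed to the procedure, as a variation on the full-scan example above), statistics could be created from a
5 percent sample like this:

-- Hedged sketch only: create statistics from a 5 percent sample instead of the full data set
-- (assumes TABLESAMPLE is honored inside the supplied query text).
EXEC sys.sp_create_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
    BULK ''abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/*.parquet'',
    FORMAT = ''parquet'') AS filerows
TABLESAMPLE (5 PERCENT)
';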
To update existing statistics, drop them first using the sp_drop_openrowset_statistics stored procedure, and
then recreate them using the sp_create_openrowset_statistics :

EXEC sys.sp_drop_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
    BULK ''abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/*.parquet'',
    FORMAT = ''parquet'') AS filerows
'

External table manual statistics


The syntax for creating statistics on external tables resembles the one used for ordinary user tables. To create
statistics on a column, provide a name for the statistics object and the name of the column:

CREATE STATISTICS sVendor
ON tbl_TaxiRides (vendor_id)
WITH FULLSCAN, NORECOMPUTE

The WITH options are mandatory, and for the sample size the allowed options are FULLSCAN and SAMPLE n PERCENT.
To create single-column statistics for multiple columns, run CREATE STATISTICS for each column. Multi-column
statistics are not supported.

Troubleshooting
Issues with query execution are typically caused by the managed instance not being able to access the file
location. The related error messages may report insufficient access rights, a non-existing location or file path, a
file being used by another process, or that the directory cannot be listed. In most cases this indicates that access
to the files is blocked by network traffic control policies or by a lack of access rights. Check the following:
Wrong or mistyped location path.
SAS key validity: it could be expired (outside of its validity period), contain a typo, or start with a question
mark.
SAS key permissions allowed: Read at minimum, and List if wildcards are used.
Blocked inbound traffic on the storage account. Check Managing virtual network rules for Azure Storage for
more details, and make sure that access from the managed instance VNet is allowed.
Outbound traffic blocked on the managed instance by a storage endpoint policy. Allow outbound traffic to
the storage account.
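
As a small sketch to support those checks, the following catalog views list the database-scoped credentials and
external data sources defined in the database, so you can confirm the names and locations your queries reference:

-- List database-scoped credentials and external data sources to verify what the failing query uses.
SELECT name, credential_identity
FROM sys.database_scoped_credentials;

SELECT name, location
FROM sys.external_data_sources;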

Next steps
To learn more about syntax options available with OPENROWSET, see OPENROWSET T-SQL.
For more information about creating external table in SQL Managed Instance, see CREATE EXTERNAL TABLE.
To learn more about creating an external file format, see CREATE EXTERNAL FILE FORMAT.
Overview of Azure SQL Managed Instance
management operations
7/12/2022 • 9 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance provides management operations that you can use to automatically deploy new
managed instances, update instance properties, and delete instances when no longer needed.

What are management operations?


All management operations can be categorized as follows:
Instance deployment (new instance creation)
Instance update (changing instance properties, such as vCores or reserved storage)
Instance deletion
To support deployments within Azure virtual networks and provide isolation and security for customers, SQL
Managed Instance relies on virtual clusters. The virtual cluster represents a dedicated set of isolated virtual
machines deployed inside the customer's virtual network subnet. Essentially, every managed instance deployed
to an empty subnet results in a new virtual cluster buildout.
Subsequent management operations on managed instances may impact the underlying virtual cluster. Changes
that impact the underlying virtual cluster may affect the duration of management operations, as deploying
additional virtual machines comes with an overhead that you need to consider when you plan new deployments
or updates to existing managed instances.

Duration
Operations on the virtual cluster can vary in duration, but typically take the longest.
The following table lists the long-running steps that can be triggered as part of the create, update, or delete
operation. The table also lists the durations that you can typically expect, based on existing service telemetry data:

STEP                            DESCRIPTION                                                      ESTIMATED DURATION

Virtual cluster creation        Creation is a synchronous step in instance management            90% of operations finish in 4 hours
                                operations.

Virtual cluster resizing        Expansion is a synchronous step, while shrinking is performed    90% of cluster expansions finish in
(expansion or shrinking)        asynchronously (without impact on the duration of instance       less than 2.5 hours
                                management operations).

Virtual cluster deletion        Virtual cluster deletion can be synchronous or asynchronous.     90% of cluster deletions finish in
                                Asynchronous deletion is performed in the background and is      1.5 hours
                                triggered when there are multiple virtual clusters inside the
                                same subnet and the last instance in a non-last cluster in the
                                subnet is deleted. Synchronous deletion of the virtual cluster
                                is triggered as part of the very last instance deletion in the
                                subnet.

Seeding database files 1        A synchronous step, triggered during compute (vCores) or         90% of these operations execute at
                                storage scaling in the Business Critical service tier, as well   220 GB/hour or higher
                                as when changing the service tier from General Purpose to
                                Business Critical (or vice versa). The duration of this
                                operation is proportional to the total database size as well
                                as current database activity (number of active transactions).
                                Database activity when updating an instance can introduce
                                significant variance to the total duration.

1 When scaling compute (vCores) or storage in Business Critical service tier, or switching service tier from
General Purpose to Business Critical, seeding also includes Always On availability group seeding.

IMPORTANT
Scaling storage up or down in the General Purpose service tier consists of updating metadata and propagating a response
for the submitted request. It is a fast operation that completes in up to 5 minutes, without downtime or failover.

Management operations long running segments


The following tables summarize operations and typical overall durations, based on the category of the
operation:
Category: Deployment

OPERATION                                          LONG-RUNNING SEGMENT          ESTIMATED DURATION

First instance in an empty subnet                  Virtual cluster creation      90% of operations finish in 4 hours.

First instance of another hardware or              Virtual cluster creation1     90% of operations finish in 4 hours.
maintenance window in a non-empty subnet
(for example, first Premium-series instance
in a subnet with Standard-series instances)

Subsequent instance creation within the            Virtual cluster resizing      90% of operations finish in 2.5 hours.
non-empty subnet (2nd, 3rd, etc. instance)

1 A separate virtual cluster is created for each hardware configuration and for each maintenance window
configuration.
Category: Update

OPERATION                                          LONG-RUNNING SEGMENT                        ESTIMATED DURATION

Instance property change (admin password,          N/A                                         Up to 1 minute.
Azure AD login, Azure Hybrid Benefit flag)

Instance storage scaling up/down                   No long-running segment                     99% of operations finish in 5 minutes.
(General Purpose)

Instance storage scaling up/down                   - Virtual cluster resizing                  90% of operations finish in 2.5 hours +
(Business Critical)                                - Always On availability group seeding      time to seed all databases (220 GB/hour).

Instance compute (vCores) scaling up and down      - Virtual cluster resizing                  90% of operations finish in 2.5 hours.
(General Purpose)

Instance compute (vCores) scaling up and down      - Virtual cluster resizing                  90% of operations finish in 2.5 hours +
(Business Critical)                                - Always On availability group seeding      time to seed all databases (220 GB/hour).

Instance service tier change (General Purpose      - Virtual cluster resizing                  90% of operations finish in 2.5 hours +
to Business Critical and vice versa)               - Always On availability group seeding      time to seed all databases (220 GB/hour).

Instance hardware or maintenance window            - Virtual cluster creation or resizing1     90% of operations finish in 4 hours
change (General Purpose)                                                                       (creation) or 2.5 hours (resizing).

Instance hardware or maintenance window            - Virtual cluster creation or resizing1     90% of operations finish in 4 hours
change (Business Critical)                         - Always On availability group seeding      (creation) or 2.5 hours (resizing) + time
                                                                                               to seed all databases (220 GB/hour).

1 Managed instance must be placed in a virtual cluster with the corresponding hardware and maintenance
window. If there is no such virtual cluster in the subnet, a new one must be created first to accommodate the
instance.
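
As a rough illustration of how the estimates in the update table combine (the 1 TB total database size below is a
hypothetical example, not a published figure), the expected duration of a Business Critical scaling operation is
roughly the resize estimate plus the total database size divided by the seeding rate:

-- Hedged sketch only: back-of-the-envelope duration estimate for a Business Critical update,
-- combining the published 2.5 hour resize estimate with the 220 GB/hour seeding rate.
DECLARE @total_db_size_gb  decimal(10, 2) = 1024;  -- hypothetical total size of all databases
DECLARE @resize_hours      decimal(10, 2) = 2.5;   -- 90th percentile virtual cluster resize
DECLARE @seeding_gb_per_hr decimal(10, 2) = 220;   -- published seeding throughput

SELECT @resize_hours + (@total_db_size_gb / @seeding_gb_per_hr) AS estimated_hours;  -- about 7.2 hours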
Category: Delete

OPERATION                     LONG-RUNNING SEGMENT                       ESTIMATED DURATION

Non-last instance deletion    Log tail backup for all databases          90% of operations finish in up to 1 minute.1

Last instance deletion        - Log tail backup for all databases        90% of operations finish in up to 1.5 hours.2
                              - Virtual cluster deletion

1 In case of multiple virtual clusters in the subnet, if the last instance in the virtual cluster is deleted, this
operation will immediately trigger asynchronous deletion of the virtual cluster.
2 Deletion of the last instance in the subnet immediately triggers synchronous deletion of the virtual cluster.

IMPORTANT
As soon as the delete operation is triggered, billing for SQL Managed Instance stops. The duration of the delete operation
does not impact billing.
Instance availability
SQL Managed Instance is available during update operations , except for a short downtime caused by the
failover that happens at the end of the update. It typically lasts up to 10 seconds, even in the case of interrupted
long-running transactions, thanks to accelerated database recovery.

NOTE
Scaling General Purpose managed instance storage will not cause a failover at the end of update.

SQL Managed Instance is not available to client applications during deployment and deletion operations.

IMPORTANT
It's not recommended to scale compute or storage of Azure SQL Managed Instance or to change the service tier at the
same time as long-running transactions (data import, data processing jobs, index rebuild, etc.). The failover of the
database at the end of the operation cancels all ongoing transactions.

Management operations steps


Management operations consist of multiple steps. With the Operations API, these steps are exposed for a
subset of operations (deployment and update). The deployment operation consists of three steps, while the update
operation is performed in six steps. For details on operation durations, see the management operations duration
section. Steps are listed in order of execution.
Managed instance deployment steps
STEP NAME                             STEP DESCRIPTION

Request validation                    Submitted parameters are validated. In case of misconfiguration, the operation will fail with an error.

Virtual cluster resizing / creation   Depending on the state of the subnet, the virtual cluster goes into creation or resizing.

New SQL instance startup              The SQL process is started on the deployed virtual cluster.

Managed instance update steps

STEP NAME                                            STEP DESCRIPTION

Request validation                                   Submitted parameters are validated. In case of misconfiguration, the operation will fail with an error.

Virtual cluster resizing / creation                  Depending on the state of the subnet, the virtual cluster goes into creation or resizing.

New SQL instance startup                             The SQL process is started on the deployed virtual cluster.

Seeding database files / attaching database files    Depending on the type of the update operation, either database seeding or attaching database files is performed.

Preparing failover and failover                      After data has been seeded or database files reattached, the system is prepared for the failover. When everything is set, failover is performed with a short downtime.

Old SQL instance cleanup                             Removing the old SQL process from the virtual cluster.

Managed instance delete steps

STEP NAME                  STEP DESCRIPTION

Request validation         Submitted parameters are validated. In case of misconfiguration, the operation will fail with an error.

SQL instance cleanup       Removing the SQL process from the virtual cluster.

Virtual cluster deletion   Depending on whether the instance being deleted is the last in the subnet, the virtual cluster is synchronously deleted as the last step.

NOTE
As a result of scaling instances, the underlying virtual cluster will go through a process of releasing unused capacity and
possible capacity defragmentation, which could impact instances that did not participate in creation or scaling operations.

Management operations cross-impact


Management operations on a managed instance can affect other management operations of the instances
placed inside the same virtual cluster:
Long-running restore operations in a virtual cluster will put other instance creation or scaling
operations in the same subnet on hold.
Example: If there is a long-running restore operation and there is a create or scale request in the same
subnet, this request will take longer to complete as it waits for the restore operation to complete before it
continues.
A subsequent instance creation or scaling operation is put on hold by a previously initiated instance
creation or instance scale that initiated a resize of the virtual cluster.
Example: If there are multiple create and/or scale requests in the same subnet under the same virtual
cluster, and one of them initiates a virtual cluster resize, all requests that were submitted 5+ minutes after
the initial operation request will last longer than expected, as these requests will have to wait for the
resize to complete before resuming.
Create/scale operations submitted in a 5-minute window will be batched and executed in parallel.
Example: Only one virtual cluster resize will be performed for all operations submitted in a 5-minute
window (measuring from the moment of executing the first operation request). If another request is
submitted more than 5 minutes after the first one is submitted, it will wait for the virtual cluster resize to
complete before execution starts.
IMPORTANT
Management operations that are put on hold because of another operation that is in progress will automatically be
resumed once conditions to proceed are met. No user action is necessary to resume the temporarily paused management
operations.

Monitoring management operations


To learn how to monitor management operation progress and status, see Monitoring management operations.

Canceling management operations


To learn how to cancel a management operation, see Canceling management operations.

Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see Common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Monitoring Azure SQL Managed Instance
management operations
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance provides monitoring of management operations that you use to deploy new
managed instances, update instance properties, or delete instances when no longer needed.

Overview
All management operations can be categorized as follows:
Instance deployment (new instance creation).
Instance update (changing instance properties, such as vCores or reserved storage).
Instance deletion.
Most management operations are long running. Therefore, you might need to monitor their status or follow the progress of individual operation steps.
There are several ways to monitor managed instance management operations:
Resource group deployments
Activity log
Managed instance operations API
The following options for monitoring management operations differ in retention, cancel support, and visibility of operation types and steps:

Resource group deployments: infinite retention (see note 1); cancel not supported (see note 2); create and update operations are visible, delete is not visible, cancel is visible; operation steps are not visible.
Activity log: 90-day retention; cancel not supported; create, update, delete, and cancel operations are visible; operation steps are not visible.
Managed instance operations API: 24-hour retention; cancel supported; create, update, delete, and cancel operations are visible; operation steps are visible.
1 The deployment history for a resource group is limited to 800 deployments.


2 Resource group deployments support the cancel operation. However, due to the cancel logic, only an operation scheduled for deployment after the cancel action is performed will be canceled. The ongoing deployment is not canceled when the resource group deployment is canceled. Because a managed instance deployment consists of one long-running step (from the Azure Resource Manager perspective), canceling the resource group deployment does not cancel the managed instance deployment, and the operation completes.

Managed instance operations API


The management operations API is designed specifically for monitoring operations. Monitoring managed instance operations can provide insight into operation parameters and operation steps, and lets you cancel specific operations. Besides providing operation details and a cancel command, this API can be used in automation scripts with multi-resource deployments: based on the progress step, you can kick off a dependent resource deployment.
These are the APIs:

Managed Instance Operations - Get: Gets a management operation on a managed instance.
Managed Instance Operations - Cancel: Cancels the asynchronous operation on the managed instance.
Managed Instance Operations - List By Managed Instance: Gets a list of operations performed on the managed instance.

NOTE
Use API version 2020-02-02 to see the managed instance create operation in the list of operations. This is the default
version used in the Azure portal and the latest PowerShell and Azure CLI packages.
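
For example, an automation script might poll the operations API until an operation completes before starting a dependent deployment. The following is a minimal PowerShell sketch of that pattern, assuming the Az.Sql module is installed; the resource names are placeholders, and property names such as State and PercentComplete reflect the operations API response and should be verified against your module version.

# Minimal sketch: poll the most recent management operation on a managed instance
# and continue once it reaches a terminal state. Resource names are placeholders.
$resourceGroup = "my-resource-group"
$instanceName  = "my-managed-instance"

do {
    # List operations performed on the managed instance (retained for 24 hours)
    $operation = Get-AzSqlInstanceOperation -ResourceGroupName $resourceGroup `
                     -ManagedInstanceName $instanceName |
                 Sort-Object StartTime -Descending |
                 Select-Object -First 1

    Write-Host "$($operation.OperationFriendlyName): $($operation.State) ($($operation.PercentComplete)%)"
    Start-Sleep -Seconds 60
} while ($operation.State -notin @('Succeeded', 'Failed', 'Cancelled'))

# Once the operation has finished, kick off the dependent resource deployment here.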

Monitor operations
Portal
PowerShell
Azure CLI

In the Azure portal, use the managed instance Overview page to monitor managed instance operations.
For example, the Create operation is visible at the start of the creation process on the Overview page:

Select Ongoing operation to open the Ongoing operation page and view Create or Update operations.
You can also cancel operations from this page.
NOTE
Create operations submitted through Azure portal, PowerShell, Azure CLI or other tooling using REST API version 2020-
02-02 can be canceled. REST API versions older than 2020-02-02 used to submit a create operation will start the instance
deployment, but the deployment won't be listed in the Operations API and can't be cancelled.

Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Canceling Azure SQL Managed Instance
management operations
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance provides the ability to cancel some management operations, such as when you
deploy a new managed instance or update instance properties.

Overview
All management operations can be categorized as follows:
Instance deployment (new instance creation).
Instance update (changing instance properties, such as vCores or reserved storage).
Instance deletion.
You can monitor progress and status of management operations and cancel some of them if necessary.
The following table summarizes management operations, whether or not you can cancel them, and their typical
overall duration:

EST IM AT ED C A N C EL
C AT EGO RY O P ERAT IO N C A N C EL A B L E DURAT IO N

Deployment Instance creation Yes 90% of operations finish in


5 minutes.

Update Instance storage scaling No


up/down (General Purpose)

Update Instance storage scaling Yes 90% of operations finish in


up/down (Business Critical) 5 minutes.

Update Instance compute (vCores) Yes 90% of operations finish in


scaling up and down 5 minutes.
(General Purpose)

Update Instance compute (vCores) Yes 90% of operations finish in


scaling up and down 5 minutes.
(Business Critical)

Update Instance service tier change Yes 90% of operations finish in


(General Purpose to 5 minutes.
Business Critical and vice
versa)

Delete Instance deletion No

Delete Virtual cluster deletion (as No


user-initiated operation)
Cancel management operation
Portal
PowerShell
Azure CLI

To cancel management operations using the Azure portal, follow these steps:
1. Go to the Azure portal
2. Go to the Over view blade of your SQL Managed Instance.
3. Select the Notification box next to the ongoing operation to open the Ongoing Operation page.

4. Select Cancel the operation at the bottom of the page.


5. Confirm that you want to cancel the operation.
If the cancel request succeeds, the management operation is canceled and ends in a failed state. You will get a notification that the cancellation succeeded or failed.

If the cancel request fails, or if the Cancel button is not active, the management operation has entered a non-cancelable state and will finish shortly. The management operation continues its execution until it is completed.
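
The same cancellation can be scripted instead of using the portal. Below is a minimal PowerShell sketch, assuming the Az.Sql module; the resource names are placeholders, and Stop-AzSqlInstanceOperation may ask you to confirm the cancellation.

# Minimal sketch: cancel an in-progress management operation from PowerShell.
# Resource names are placeholders; adjust them to your environment.
$resourceGroup = "my-resource-group"
$instanceName  = "my-managed-instance"

# Find an operation that is still in progress
$ongoing = Get-AzSqlInstanceOperation -ResourceGroupName $resourceGroup `
               -ManagedInstanceName $instanceName |
           Where-Object { $_.State -eq 'InProgress' } |
           Select-Object -First 1

if ($ongoing) {
    # Request cancellation; if the cancel succeeds, the operation ends in a failed state
    Stop-AzSqlInstanceOperation -ResourceGroupName $resourceGroup `
        -ManagedInstanceName $instanceName -Name $ongoing.Name
}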

Canceled deployment request


With API version 2020-02-02, as soon as the instance creation request is accepted, the instance starts to exist as
a resource, no matter the progress of the deployment process (managed instance status is Provisioning ). If you
cancel the instance deployment request (new instance creation), the managed instance will go from the
Provisioning state to FailedToCreate .
Instances that have failed to create are still present as a resource and:
Are not charged
Do not count towards resource limits (subnet or vCore quota)
NOTE
To minimize noise in the list of resources or managed instances, delete instances that have failed to deploy or instances with canceled deployments.

Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see Common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Managed API reference for Azure SQL Managed
Instance
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Managed Instance


You can create and configure managed instances of Azure SQL Managed Instance using the Azure portal,
PowerShell, Azure CLI, REST API, and Transact-SQL. In this article, you can find an overview of the functions and
the API that you can use to create and configure managed instances.

Azure portal: Create a managed instance


For a quickstart showing you how to create a managed instance, see Quickstart: Create a managed instance.

PowerShell: Create and configure managed instances


NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Database, but all future development is
for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az module and in the
AzureRM modules are substantially identical.

To create and manage managed instances with Azure PowerShell, use the following PowerShell cmdlets. If you
need to install or upgrade PowerShell, see Install the Azure PowerShell module.

TIP
For PowerShell example scripts, see Quickstart script: Create a managed instance using a PowerShell library.

New-AzSqlInstance: Creates a managed instance.
Get-AzSqlInstance: Returns information about a managed instance.
Set-AzSqlInstance: Sets properties for a managed instance.
Remove-AzSqlInstance: Removes a managed instance.
Get-AzSqlInstanceOperation: Gets a list of management operations performed on the managed instance, or a specific operation.
Stop-AzSqlInstanceOperation: Cancels the specific management operation performed on the managed instance.
New-AzSqlInstanceDatabase: Creates a SQL Managed Instance database.
Get-AzSqlInstanceDatabase: Returns information about a SQL Managed Instance database.
Remove-AzSqlInstanceDatabase: Removes a SQL Managed Instance database.
Restore-AzSqlInstanceDatabase: Restores a SQL Managed Instance database.
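
As a rough illustration of how these cmdlets fit together, the following sketch creates a managed instance and later scales its vCores. All names, the subnet ID, and the sizing values are placeholders, and the parameter set shown is a common minimal one rather than the full list; see the cmdlet reference for all options.

# Minimal sketch: create a managed instance, then scale it. All values are placeholders.
$params = @{
    ResourceGroupName       = "my-resource-group"
    Name                    = "my-managed-instance"
    Location                = "westeurope"
    SubnetId                = "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/mi-subnet"
    AdministratorCredential = (Get-Credential)   # SQL admin login and password
    LicenseType             = "LicenseIncluded"
    VCore                   = 8
    StorageSizeInGB         = 256
    Edition                 = "GeneralPurpose"
    ComputeGeneration       = "Gen5"
}
New-AzSqlInstance @params

# Scaling compute later triggers a long-running update operation (and a failover at the end)
Set-AzSqlInstance -ResourceGroupName "my-resource-group" -Name "my-managed-instance" -VCore 16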

Azure CLI: Create and configure managed instances


To create and configure managed instances with Azure CLI, use the following Azure CLI commands for SQL
Managed Instance. Use Azure Cloud Shell to run Azure CLI in your browser, or install it on macOS, Linux, or
Windows.

TIP
For an Azure CLI quickstart, see Working with SQL Managed Instance using Azure CLI.

az sql mi create: Creates a managed instance.
az sql mi list: Lists available managed instances.
az sql mi show: Gets the details for a managed instance.
az sql mi update: Updates a managed instance.
az sql mi delete: Removes a managed instance.
az sql mi op list: Gets a list of management operations performed on the managed instance.
az sql mi op show: Gets the specific management operation performed on the managed instance.
az sql mi op cancel: Cancels the specific management operation performed on the managed instance.
az sql midb create: Creates a managed database.
az sql midb list: Lists available managed databases.
az sql midb restore: Restores a managed database.
az sql midb delete: Removes a managed database.


Transact-SQL: Create and configure instance databases
To create and configure instance databases after the managed instance is created, use the following T-SQL
commands. You can issue these commands using the Azure portal, SQL Server Management Studio, Azure Data
Studio, Visual Studio Code, or any other program that can connect to a server and pass Transact-SQL
commands.

TIP
For quickstarts showing you how to configure and connect to a managed instance using SQL Server Management Studio
on Microsoft Windows, see Quickstart: Configure Azure VM to connect to Azure SQL Managed Instance and Quickstart:
Configure a point-to-site connection to Azure SQL Managed Instance from on-premises.

IMPORTANT
You cannot create or delete a managed instance using Transact-SQL.

CREATE DATABASE: Creates a new instance database in SQL Managed Instance. You must be connected to the master database to create a new database.
ALTER DATABASE: Modifies an instance database in SQL Managed Instance.
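
For instance, a new instance database can be created from any client that can pass T-SQL. The sketch below uses the Invoke-Sqlcmd cmdlet from the SqlServer PowerShell module; the host name, credentials, and database name are placeholders.

# Minimal sketch: create an instance database over T-SQL with Invoke-Sqlcmd.
# The host name and credentials are placeholders; connect to the master database.
$instanceFqdn = "my-managed-instance.abcdef123456.database.windows.net"
$credential   = Get-Credential   # a SQL login with permission to create databases

Invoke-Sqlcmd -ServerInstance $instanceFqdn `
              -Database "master" `
              -Credential $credential `
              -Query "CREATE DATABASE [MyNewDatabase];"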

REST API: Create and configure managed instances


To create and configure managed instances, use these REST API requests.

Managed Instances - Create Or Update: Creates or updates a managed instance.
Managed Instances - Delete: Deletes a managed instance.
Managed Instances - Get: Gets a managed instance.
Managed Instances - List: Returns a list of managed instances in a subscription.
Managed Instances - List By Resource Group: Returns a list of managed instances in a resource group.
Managed Instances - Update: Updates a managed instance.
Managed Instance Operations - List By Managed Instance: Gets a list of management operations performed on the managed instance.
Managed Instance Operations - Get: Gets the specific management operation performed on the managed instance.
Managed Instance Operations - Cancel: Cancels the specific management operation performed on the managed instance.
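
When scripting against these REST endpoints from PowerShell, Invoke-AzRestMethod (Az.Accounts module) handles authentication for you. The sketch below lists operations on a managed instance; the subscription, resource group, and instance names are placeholders, and the response property names shown are assumptions to verify against the REST API reference.

# Minimal sketch: call the Managed Instance Operations - List By Managed Instance REST API.
# Subscription, resource group, and instance names are placeholders.
$path = "/subscriptions/<subscription-id>/resourceGroups/my-resource-group" +
        "/providers/Microsoft.Sql/managedInstances/my-managed-instance" +
        "/operations?api-version=2020-02-02"

$response = Invoke-AzRestMethod -Method GET -Path $path
($response.Content | ConvertFrom-Json).value |
    ForEach-Object { $_.properties } |
    Format-Table operation, state, percentComplete, startTime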
Next steps
To learn about migrating a SQL Server database to Azure, see Migrate to Azure SQL Database.
For information about supported features, see Features.
Machine Learning Services in Azure SQL Managed
Instance
7/12/2022 • 2 minutes to read

Machine Learning Services is a feature of Azure SQL Managed Instance that provides in-database machine
learning, supporting both Python and R scripts. The feature includes Microsoft Python and R packages for high-
performance predictive analytics and machine learning. The relational data can be used in scripts through stored
procedures, T-SQL script containing Python or R statements, or Python or R code containing T-SQL.

What is Machine Learning Services?


Machine Learning Services in Azure SQL Managed Instance lets you execute Python and R scripts in-database.
You can use it to prepare and clean data, do feature engineering, and train, evaluate, and deploy machine
learning models within a database. The feature runs your scripts where the data resides and eliminates transfer
of the data across the network to another server.
Use Machine Learning Services with R/Python support in Azure SQL Managed Instance to:
Run R and Python scripts to do data preparation and general purpose data processing - You
can now bring your R/Python scripts to Azure SQL Managed Instance where your data lives, instead of
having to move data out to some other server to run R and Python scripts. You can eliminate the need for
data movement and associated problems related to latency, security, and compliance.
Train machine learning models in database - You can train models using any open source
algorithms. You can easily scale your training to the entire dataset rather than relying on sample datasets
pulled out of the database.
Deploy your models and scripts into production in stored procedures - The scripts and trained
models can be operationalized simply by embedding them in T-SQL stored procedures. Apps connecting
to Azure SQL Managed Instance can benefit from predictions and intelligence in these models by just
calling a stored procedure. You can also use the native T-SQL PREDICT function to operationalize models
for fast scoring in highly concurrent real-time scoring scenarios.
Base distributions of Python and R are included in Machine Learning Services. You can install and use open-
source packages and frameworks, such as PyTorch, TensorFlow, and scikit-learn, in addition to the Microsoft
packages revoscalepy and microsoftml for Python, and RevoScaleR, MicrosoftML, olapR, and sqlrutils for R.

How to enable Machine Learning Services


You can enable Machine Learning Services in Azure SQL Managed Instance by enabling extensibility with the
following SQL commands (SQL Managed Instance will restart and be unavailable for a few seconds):

sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;

For details on how this command affects SQL Managed Instance resources, see Resource Governance.
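
Once extensibility is enabled, you can verify that Python scripts execute in-database. The following is a minimal sketch that runs a trivial script through sp_execute_external_script, wrapped in Invoke-Sqlcmd from the SqlServer PowerShell module; the connection details and database name are placeholders.

# Minimal sketch: verify Machine Learning Services by echoing a row through a Python script.
# The connection details are placeholders.
$query = @"
EXECUTE sp_execute_external_script
    @language = N'Python',
    @script = N'OutputDataSet = InputDataSet',
    @input_data_1 = N'SELECT 1 AS ml_services_works'
WITH RESULT SETS ((ml_services_works INT));
"@

Invoke-Sqlcmd -ServerInstance "my-managed-instance.abcdef123456.database.windows.net" `
              -Database "MyDatabase" `
              -Credential (Get-Credential) `
              -Query $query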
Enable Machine Learning Services in a failover group
In a failover group, system databases are not replicated to the secondary instance (see Limitations of failover
groups for more information).
If the SQL Managed Instance you're using is part of a failover group, do the following:
Run the sp_configure and RECONFIGURE commands on each instance of the failover group to enable
Machine Learning Services.
Install the R/Python libraries on a user database rather than the master database.

Next steps
See the key differences from SQL Server Machine Learning Services.
To learn how to use Python in Machine Learning Services, see Run Python scripts.
To learn how to use R in Machine Learning Services, see Run R scripts.
For more information about machine learning on other SQL platforms, see the SQL machine learning
documentation.
Key differences between Machine Learning Services
in Azure SQL Managed Instance and SQL Server
7/12/2022 • 2 minutes to read

This article describes the few key differences in functionality between Machine Learning Services in Azure SQL Managed Instance and SQL Server Machine Learning Services.

Language support
Machine Learning Services in both SQL Managed Instance and SQL Server support the Python and R
extensibility framework. The key differences in SQL Managed Instance are:
Only Python and R are supported. External languages such as Java cannot be added.
The initial versions of Python and R are different:

Azure SQL Managed Instance: Python runtime 3.7.2; R runtime 3.5.2
SQL Server 2019: Python runtime 3.7.1; R runtime 3.5.2
SQL Server 2017: Python runtimes 3.5.2 and 3.7.2 (CU22 and later); R runtimes 3.3.3 and 3.5.2 (CU22 and later)
SQL Server 2016: Python not available; R runtimes 3.2.2 and 3.5.2 (SP2 CU14 and later)

Python and R Packages


There is no support in SQL Managed Instance for packages that depend on external runtimes (like Java) or need
access to OS APIs for installation or usage.
For more information about managing Python and R packages, see:
Get Python package information
Get R package information

Resource governance
In SQL Managed Instance, it's not possible to limit R resources through Resource Governor, and external
resource pools are not supported.
By default, R resources are set to a maximum of 20% of the available SQL Managed Instance resources when
extensibility is enabled. To change this default percentage, create an Azure support ticket at
https://azure.microsoft.com/support/create-ticket/.
Extensibility is enabled with the following SQL commands (SQL Managed Instance will restart and be
unavailable for a few seconds):

sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;
To disable extensibility and restore 100% of memory and CPU resources to SQL Server, use the following
commands:

sp_configure 'external scripts enabled', 0;
RECONFIGURE WITH OVERRIDE;

The total resources available to SQL Managed Instance depend on which service tier you choose. For more
information, see Azure SQL Database purchasing models.
Insufficient memory error
Memory usage depends on how much is used in your R scripts and the number of parallel queries being
executed. If there is insufficient memory available for R, you'll get an error message. Common error messages
are:
Unable to communicate with the runtime for 'R' script for request id: *******. Please check the
requirements of 'R' runtime
'R' script error occurred during execution of 'sp_execute_external_script' with HRESULT 0x80004004. ...an
external script error occurred: "..could not allocate memory (0 Mb) in C function 'R_AllocStringBuffer'"
An external script error occurred: Error: cannot allocate vector of size.

If you receive one of these errors, you can resolve it by scaling your database to a higher service tier.
If you encounter out of memory errors in Azure SQL Managed Instance, review
sys.dm_os_out_of_memory_events.

SQL Managed Instance pools


Machine Learning Services is currently not supported on Azure SQL Managed Instance pools (preview).

Next steps
See the overview, Machine Learning Services in Azure SQL Managed Instance.
To learn how to use Python in Machine Learning Services, see Run Python scripts.
To learn how to use R in Machine Learning Services, see Run R scripts.
Get started with Azure SQL Managed Instance
auditing
7/12/2022 • 8 minutes to read

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance auditing tracks database events and writes them to an audit log in your Azure
storage account. Auditing also:
Helps you maintain regulatory compliance, understand database activity, and gain insight into discrepancies
and anomalies that could indicate business concerns or suspected security violations.
Enables and facilitates adherence to compliance standards, although it doesn't guarantee compliance. For
more information, see the Microsoft Azure Trust Center where you can find the most current list of SQL
Managed Instance compliance certifications.

IMPORTANT
Auditing for Azure SQL Database, Azure Synapse and Azure SQL Managed Instance is optimized for availability and
performance. During very high activity, or high network load, Azure SQL Database, Azure Synapse and Azure SQL
Managed Instance allow operations to proceed and may not record some audited events.

Set up auditing for your server to Azure Storage


The following section describes the configuration of auditing on your managed instance.
1. Go to the Azure portal.
2. Create an Azure Storage container where audit logs are stored.
a. Navigate to the Azure storage account where you would like to store your audit logs.

IMPORTANT
Use a storage account in the same region as the managed instance to avoid cross-region reads/writes.
If your storage account is behind a Virtual Network or a Firewall, please see Grant access from a virtual
network.
If you change the retention period from 0 (unlimited retention) to any other value, note that retention only applies to logs written after the retention value was changed (logs written during the period when retention was set to unlimited are preserved, even after retention is enabled).

b. In the storage account, go to Overview and click Blobs.

c. In the top menu, click + Container to create a new container.


d. Provide a container Name , set Public access level to Private , and then click OK .

IMPORTANT
Customers wishing to configure an immutable log store for their server- or database-level audit events should
follow the instructions provided by Azure Storage. (Please ensure you have selected Allow additional appends
when you configure the immutable blob storage.)

3. After you create the container for the audit logs, there are two ways to configure it as the target for the
audit logs: using T-SQL or using the SQL Server Management Studio (SSMS) UI:
Configure blob storage for audit logs using T-SQL:
a. In the containers list, click the newly created container and then click Container properties.

b. Copy the container URL by clicking the copy icon and save the URL (for example, in
Notepad) for future use. The container URL format should be
https://<StorageName>.blob.core.windows.net/<ContainerName>
c. Generate an Azure Storage SAS token to grant managed instance auditing access rights to
the storage account:
Navigate to the Azure storage account where you created the container in the
previous step.
Click on Shared access signature in the Storage Settings menu.

Configure the SAS as follows:

Allowed services: Blob
Start date: to avoid time zone-related issues, use yesterday's date
End date: choose the date on which this SAS token expires

NOTE
Renew the token upon expiry to avoid audit failures.

Click Generate SAS .


The SAS token appears at the bottom. Copy the token by clicking on the copy icon,
and save it (for example, in Notepad) for future use.

IMPORTANT
Remove the question mark (“?”) character from the beginning of the token.

d. Connect to your managed instance via SQL Server Management Studio or any other
supported tool.
e. Execute the following T-SQL statement to create a new credential using the container
URL and SAS token that you created in the previous steps:

CREATE CREDENTIAL [<container_url>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS KEY>'
GO

f. Execute the following T-SQL statement to create a new server audit (choose your own audit
name, and use the container URL that you created in the previous steps). If not specified, the
RETENTION_DAYS default is 0 (unlimited retention):

CREATE SERVER AUDIT [<your_audit_name>]
TO URL ( PATH = '<container_url>', RETENTION_DAYS = integer )
GO

g. Continue by creating a server audit specification or database audit specification.


Configure blob storage for audit logs using SQL Ser ver Management Studio 18:
a. Connect to the managed instance using the SQL Server Management Studio UI.
b. Expand the root node of Object Explorer.
c. Expand the Security node, right-click on the Audits node, and click on New Audit :

d. Make sure URL is selected in Audit destination and click on Browse :

e. (Optional) Sign in to your Azure account:


f. Select a subscription, storage account, and blob container from the dropdowns, or create
your own container by clicking on Create . Once you have finished, click OK :

g. Click OK in the Create Audit dialog.

NOTE
When using SQL Server Management Studio UI to create audit, a credential to the container with
SAS key will be automatically created.

h. After you configure the blob container as target for the audit logs, create and enable a
server audit specification or database audit specification as you would for SQL Server:
Create server audit specification T-SQL guide
Create database audit specification T-SQL guide
4. Enable the server audit that you created in step 3:

ALTER SERVER AUDIT [<your_audit_name>]
WITH (STATE = ON);
GO

For additional information:


Auditing differences between Azure SQL Managed Instance and a database in SQL Server
CREATE SERVER AUDIT
ALTER SERVER AUDIT

Auditing of Microsoft Support operations


Auditing of Microsoft Support operations for SQL Managed Instance allows you to audit Microsoft support
engineers' operations when they need to access your server during a support request. The use of this capability,
along with your auditing, enables more transparency into your workforce and allows for anomaly detection,
trend visualization, and data loss prevention.
To enable auditing of Microsoft Support operations, navigate to Create Audit under Security > Audit in your SQL Managed Instance, and select Microsoft support operations.

Set up auditing for your server to Event Hubs or Azure Monitor logs
Audit logs from a managed instance can be sent to Azure Event Hubs or Azure Monitor logs. This section
describes how to configure this:
1. Navigate in the Azure portal to the managed instance.
2. Click on Diagnostic settings .
3. Click on Turn on diagnostics . If diagnostics is already enabled, +Add diagnostic setting will show
instead.
4. Select SQLSecurityAuditEvents in the list of logs.
5. Select a destination for the audit events: Event Hubs, Azure Monitor logs, or both. Configure for each
target the required parameters (e.g. Log Analytics workspace).
6. Click Save .

7. Connect to the managed instance using SQL Ser ver Management Studio (SSMS) or any other
supported client.
8. Execute the following T-SQL statement to create a server audit:

CREATE SERVER AUDIT [<your_audit_name>] TO EXTERNAL_MONITOR;
GO

9. Create and enable a server audit specification or database audit specification as you would for SQL
Server:
Create Server audit specification T-SQL guide
Create Database audit specification T-SQL guide
10. Enable the server audit created in step 8:

ALTER SERVER AUDIT [<your_audit_name>]
WITH (STATE = ON);
GO

Consume audit logs


Consume logs stored in Azure Storage
There are several methods you can use to view blob auditing logs.
Use the system function sys.fn_get_audit_file (T-SQL) to return the audit log data in tabular format. For
more information on using this function, see the sys.fn_get_audit_file documentation.
You can explore audit logs by using a tool such as Azure Storage Explorer. In Azure Storage, auditing logs
are saved as a collection of blob files within a container that was defined to store the audit logs. For
further details about the hierarchy of the storage folder, naming conventions, and log format, see the
Blob Audit Log Format Reference.
For a full list of audit log consumption methods, refer to Get started with Azure SQL Database auditing.
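
For example, a quick way to inspect recent audit records directly from the configured container is to run sys.fn_get_audit_file. The sketch below wraps that query in Invoke-Sqlcmd (SqlServer PowerShell module); the container URL and connection details are placeholders and should match the container you configured above.

# Minimal sketch: read recent audit records from the configured blob container.
# The container URL and connection details are placeholders.
$query = @"
SELECT TOP (10) event_time, action_id, succeeded, server_principal_name, statement
FROM sys.fn_get_audit_file('https://<StorageName>.blob.core.windows.net/<ContainerName>/', DEFAULT, DEFAULT)
ORDER BY event_time DESC;
"@

Invoke-Sqlcmd -ServerInstance "my-managed-instance.abcdef123456.database.windows.net" `
              -Database "master" `
              -Credential (Get-Credential) `
              -Query $query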
Consume logs stored in Event Hubs
To consume audit logs data from Event Hubs, you will need to set up a stream to consume events and write
them to a target. For more information, see the Azure Event Hubs documentation.
Consume and analyze logs stored in Azure Monitor logs
If audit logs are written to Azure Monitor logs, they are available in the Log Analytics workspace, where you can
run advanced searches on the audit data. As a starting point, navigate to the Log Analytics workspace. Under the
General section, click Logs and enter a simple query, such as: search "SQLSecurityAuditEvents" to view the
audit logs.
Azure Monitor logs gives you real-time operational insights using integrated search and custom dashboards to
readily analyze millions of records across all your workloads and servers. For additional useful information
about Azure Monitor logs search language and commands, see Azure Monitor logs search reference.

NOTE
This article was recently updated to use the term Azure Monitor logs instead of Log Analytics. Log data is still stored in a
Log Analytics workspace and is still collected and analyzed by the same Log Analytics service. We are updating the
terminology to better reflect the role of logs in Azure Monitor. See Azure Monitor terminology changes for details.

Auditing differences between databases in Azure SQL Managed Instance and databases in SQL Server
The key differences between auditing in databases in Azure SQL Managed Instance and databases in SQL Server
are:
With Azure SQL Managed Instance, auditing works at the server level and stores .xel log files in Azure Blob
storage.
In SQL Server, audit works at the server level, but stores events in the file system and Windows event logs.
XEvent auditing in managed instances supports Azure Blob storage targets. File and Windows logs are not supported.
The key differences in the CREATE AUDIT syntax for auditing to Azure Blob storage are:
A new syntax TO URL is provided and enables you to specify the URL of the Azure Blob storage container where the .xel files are placed.
A new syntax TO EXTERNAL MONITOR is provided to enable Event Hubs and Azure Monitor log targets.
The syntax TO FILE is not supported because Azure SQL Managed Instance cannot access Windows file shares.
The Shutdown option is not supported.
queue_delay of 0 is not supported.

Next steps
For a full list of audit log consumption methods, refer to Get started with Azure SQL Database auditing.
For more information about Azure programs that support standards compliance, see the Azure Trust Center,
where you can find the most current list of compliance certifications.
Use Azure SQL Managed Instance securely with
public endpoints
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance can provide user connectivity over public endpoints. This article explains how to
make this configuration more secure.

Scenarios
Azure SQL Managed Instance provides a private endpoint to allow connectivity from inside its virtual network.
The default option is to provide maximum isolation. However, there are scenarios where you need to provide a
public endpoint connection:
The managed instance must integrate with multi-tenant-only platform-as-a-service (PaaS) offerings.
You need higher throughput of data exchange than is possible when you're using a VPN.
Company policies prohibit PaaS inside corporate networks.

Deploy a managed instance for public endpoint access


Although not mandatory, the common deployment model for a managed instance with public endpoint access
is to create the instance in a dedicated isolated virtual network. In this configuration, the virtual network is used
only for virtual cluster isolation. It doesn't matter if the managed instance's IP address space overlaps with a
corporate network's IP address space.

Secure data in motion


SQL Managed Instance data traffic is always encrypted if the client driver supports encryption. Data sent
between the managed instance and other Azure virtual machines or Azure services never leaves Azure's
backbone. If there's a connection between the managed instance and an on-premises network, we recommend
you use Azure ExpressRoute. ExpressRoute helps you avoid moving data over the public internet. For managed
instance private connectivity, only private peering can be used.

Lock down inbound and outbound connectivity


The following diagram shows the recommended security configurations:
A managed instance has a public endpoint address that is dedicated to a customer. This endpoint shares the IP
with the management endpoint but uses a different port. In the client-side outbound firewall and in the network
security group rules, set this public endpoint IP address to limit outbound connectivity.
To ensure traffic to the managed instance is coming from trusted sources, we recommend connecting from
sources with well-known IP addresses. Use a network security group to limit access to the managed instance
public endpoint on port 3342.
When clients need to initiate a connection from an on-premises network, make sure the originating address is
translated to a well-known set of IP addresses. If you can't do so (for example, a mobile workforce being a
typical scenario), we recommend you use point-to-site VPN connections and a private endpoint.
If connections are started from Azure, we recommend that traffic come from a well-known assigned virtual IP
address (for example, a virtual machine). To make managing virtual IP (VIP) addresses easier, you might want to
use public IP address prefixes.
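
As an illustration of the inbound lockdown, the following PowerShell sketch (Az.Network module) adds a network security group rule that allows TCP 3342 only from a known source range. The NSG name, resource group, rule priority, and source address range are placeholders.

# Minimal sketch: allow the managed instance public endpoint (TCP 3342) only from trusted addresses.
# The NSG name, resource group, priority, and source range are placeholders.
$nsg = Get-AzNetworkSecurityGroup -Name "mi-public-endpoint-nsg" -ResourceGroupName "my-resource-group"

$nsg | Add-AzNetworkSecurityRuleConfig -Name "allow_public_endpoint_inbound" `
        -Description "Allow client traffic to the public endpoint from well-known addresses only" `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority 300 `
        -SourceAddressPrefix "203.0.113.0/24" -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange "3342" |
    Set-AzNetworkSecurityGroup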

Next steps
Learn how to configure a public endpoint for managed instances: Configure public endpoint
Set up trust between instances with server trust
group (Azure SQL Managed Instance)
7/12/2022 • 2 minutes to read

APPLIES TO: Azure SQL Managed Instance


Server trust group (also known as SQL trust group) is a concept used for managing trust between instances in
Azure SQL Managed Instance. By creating a group, a certificate-based trust is established between its members.
This trust can be used for different cross-instance scenarios. Removing servers from the group or deleting the
group removes the trust between the servers. To create or delete a server trust group, the user needs to have
write permissions on the managed instance. Server trust group is an Azure Resource Manager object which has
been labeled as SQL trust group in Azure portal.

Set up group
A server trust group can be set up via Azure PowerShell or Azure CLI.
To create a server trust group by using the Azure portal, follow these steps:
1. Go to the Azure portal.
2. Navigate to Azure SQL Managed Instance that you plan to add to a server trust group.
3. On the Security settings, select the SQL trust groups tab.

4. On the SQL trust groups configuration page, select the New Group icon.
5. On the SQL trust group create blade, set the Group name. It needs to be unique in the group's subscription, resource group, and region. Trust scope defines the type of cross-instance scenario that is enabled with the server trust group. The trust scope is fixed: all available functionalities are preselected, and this cannot be changed. Select the Subscription and Resource group to choose the managed instances that will be members of the group.

6. After all required fields are populated, select Save .
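
If you'd rather script the setup, the following is a minimal PowerShell sketch, assuming the New-AzSqlServerTrustGroup cmdlet from the Az.Sql module. The group members are the resource IDs of the managed instances; all names, the location, and the trust scope value are placeholders to verify against the cmdlet reference.

# Minimal sketch: create a server trust group from PowerShell. Names and values are placeholders;
# verify the parameter set against the Az.Sql cmdlet reference.
$mi1 = Get-AzSqlInstance -ResourceGroupName "my-resource-group" -Name "managed-instance-1"
$mi2 = Get-AzSqlInstance -ResourceGroupName "my-resource-group" -Name "managed-instance-2"

New-AzSqlServerTrustGroup -ResourceGroupName "my-resource-group" `
    -Location "West Europe" `
    -Name "my-trust-group" `
    -GroupMember @($mi1.Id, $mi2.Id) `
    -TrustScope "GlobalTransactions"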

Edit group
To edit a server trust group, follow these steps:
1. Go to Azure portal.
2. Navigate to a managed instance that belongs to the trust group.
3. On the Security settings select the SQL trust groups tab.
4. Select the trust group you want to edit.
5. Click Configure group .

6. Add or remove managed instances from the group.


7. Click Save to confirm choice or Cancel to abandon changes.

Delete group
To delete a server trust group, follow these steps:
1. Go to the Azure portal.
2. Navigate to a managed instance that belongs to the SQL trust group.
3. On the Security settings select the SQL trust groups tab.
4. Select the trust group you want to delete.

5. Select Delete group .


6. Type in the SQL trust group name to confirm deletion and select Delete .

NOTE
Deleting the SQL trust group might not immediately remove the trust between the two managed instances. Trust removal
can be enforced by invoking a failover of managed instances. Check the Known issues for the latest updates on this.

Limitations
The following limitations apply to server trust groups:
A group can contain only instances of Azure SQL Managed Instance.
The trust scope cannot be changed when a group is created or modified.
The name of the server trust group must be unique for its subscription, resource group, and region.

Next steps
For more information about distributed transactions in Azure SQL Managed Instance, see Distributed
transactions.
For release updates and known issues state, see What's new?.
If you have feature requests, add them to the Managed Instance forum.
What is Windows Authentication for Azure Active
Directory principals on Azure SQL Managed
Instance? (Preview)
7/12/2022 • 2 minutes to read

Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL
Server database engine compatibility with the benefits of a fully managed and evergreen platform as a service.
Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure
SQL Managed Instance. Windows Authentication for managed instances empowers customers to move existing
services to the cloud while maintaining a seamless user experience and provides the basis for infrastructure
modernization.

Key capabilities and scenarios


As customers modernize their infrastructure, application, and data tiers, they also modernize their identity
management capabilities by shifting to Azure AD. Azure SQL offers multiple Azure AD Authentication options:
'Azure Active Directory - Password' offers authentication with Azure AD credentials
'Azure Active Directory - Universal with MFA' adds multi-factor authentication
'Azure Active Directory – Integrated' uses federation providers like Active Directory Federation Services
(ADFS) to enable Single Sign-On experiences
However, some legacy apps can't change their authentication to Azure AD: legacy application code may no longer be available, there may be a dependency on legacy drivers, clients may not be able to be changed, and so on.
Windows Authentication for Azure AD principals removes this migration blocker and provides support for a
broader range of customer applications.
Windows Authentication for Azure AD principals on managed instances is available for devices or virtual
machines (VMs) joined to Active Directory (AD), Azure AD, or hybrid Azure AD. An Azure AD hybrid user whose
user identity exists both in Azure AD and AD can access a managed instance in Azure using Azure AD Kerberos.
Enabling Windows Authentication for a managed instance doesn't require customers to deploy new on-
premises infrastructure or manage the overhead of setting up Domain Services.
Windows Authentication for Azure AD principals on Azure SQL Managed Instance enables two key scenarios:
migrating on-premises SQL Servers to Azure with minimal changes and modernizing security infrastructure.
Lift and shift on-premises SQL Servers to Azure with minimal changes
By enabling Windows Authentication for Azure Active Directory principals, customers can migrate to Azure SQL
Managed Instance without implementing changes to application authentication stacks or deploying Azure AD
Domain Services. Customers can also use Windows Authentication to access a managed instance from their AD
or Azure AD joined devices.
Windows Authentication for Azure Active Directory principals also enables the following patterns on managed
instances. These patterns are frequently used in traditional on-premises SQL Servers:
"Double hop" authentication : Web applications use IIS identity impersonation to run queries against an
instance in the security context of the end user.
Traces using extended events and SQL Ser ver Profiler can be launched using Windows authentication,
providing ease of use for database administrators and developers accustomed to this workflow. Learn how to
run a trace against Azure SQL Managed Instance using Windows Authentication for Azure Active Directory
principals.
Modernize security infrastructure
Enabling Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance equips
customers to modernize their security practices.
For example, a customer can enable a mobile analyst, using proven tools that rely on Windows Authentication,
to authenticate to a managed instance using biometric credentials. This can be accomplished even if the mobile
analyst works from a laptop that is joined to Azure AD.

Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
How Windows Authentication for Azure SQL
Managed Instance is implemented with Azure
Active Directory and Kerberos (Preview)
7/12/2022 • 2 minutes to read

Windows Authentication for Azure AD principals on Azure SQL Managed Instance enables customers to move
existing services to the cloud while maintaining a seamless user experience and provides the basis for security
infrastructure modernization. To enable Windows Authentication for Azure Active Directory (Azure AD)
principals, you will turn your Azure AD tenant into an independent Kerberos realm and create an incoming trust
in the customer domain.
This configuration allows users in the customer domain to access resources in your Azure AD tenant. It will not
allow users in the Azure AD tenant to access resources in the customer domain.
The following diagram gives an overview of how Windows Authentication is implemented for a managed
instance using Azure AD and Kerberos:

How Azure AD provides Kerberos authentication


To create an independent Kerberos realm for an Azure AD tenant, customers install the Azure AD Hybrid
Authentication Management PowerShell module on any Windows server and run a cmdlet to create an Azure
AD Kerberos object in their cloud and Active Directory. Trust created in this way enables existing Windows
clients to access Azure AD with Kerberos.
Windows 10 21H1 clients and above have been enlightened for interactive mode and do not need configuration
for interactive login flows to work. Clients running previous versions of Windows can be configured to use
Kerberos Key Distribution Center (KDC) proxy servers to use Kerberos authentication.
Kerberos authentication in Azure AD enables:
Traditional on-premises applications to move to the cloud without changing their fundamental authentication scheme.
Applications running on enlightened clients to authenticate using Azure AD directly.

How Azure SQL Managed Instance works with Azure AD and Kerberos
Customers use the Azure portal to enable a system assigned service principal on each managed instance. The
service principal allows managed instance users to authenticate using the Kerberos protocol.

Next steps
Learn more about enabling Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)
How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
How to set up Windows Authentication for Azure
SQL Managed Instance using Azure Active
Directory and Kerberos (Preview)
7/12/2022 • 5 minutes to read

This article gives an overview of how to set up infrastructure and managed instances to implement Windows
Authentication for Azure AD principals on Azure SQL Managed Instance.
There are two phases to set up Windows Authentication for Azure SQL Managed Instance using Azure Active
Directory (Azure AD) and Kerberos.
One-time infrastructure setup.
Synchronize Active Directory (AD) and Azure AD, if this hasn't already been done.
Enable the modern interactive authentication flow, when available. The modern interactive flow is
recommended for organizations with Azure AD joined or Hybrid AD joined clients running Windows
10 20H1 / Windows Server 2022 and higher where clients are joined to Azure AD or Hybrid AD.
Set up the incoming trust-based authentication flow. This is recommended for customers who can’t
use the modern interactive flow, but who have AD joined clients running Windows 10 / Windows
Server 2012 and higher.
Configuration of Azure SQL Managed Instance.
Create a system assigned service principal for each managed instance.

One-time infrastructure setup


The first step in infrastructure setup is to synchronize AD with Azure AD, if this hasn't already been completed.
Following this, a system administrator configures authentication flows. Two authentication flows are available to
implement Windows Authentication for Azure AD principals on Azure SQL Managed Instance: the incoming
trust-based flow supports AD joined clients running Windows server 2012 or higher, and the modern interactive
flow supports Azure AD joined clients running Windows 10 21H1 or higher.
Synchronize AD with Azure AD
Customers should first implement Azure AD Connect to integrate on-premises directories with Azure AD.
Select which authentication flow(s) you will implement
The following diagram shows eligibility and the core functionality of the modern interactive flow and the
incoming trust-based flow:
"A decision tree showing that the modern interactive flow is suitable for clients running Windows 10 20H1 or
Windows Server 2022 or higher, where clients are Azure AD joined or Hybrid AD joined. The incoming trust-
based flow is suitable for clients running Windows 10 or Windows Server 2012 or higher where clients are AD
joined."
The modern interactive flow works with enlightened clients running Windows 10 21H1 and higher that are
Azure AD or Hybrid Azure AD joined. In the modern interactive flow, users can access Azure SQL Managed
Instance without requiring a line of sight to Domain Controllers (DCs). There is no need for a trust object to be
created in the customer's AD. To enable the modern interactive flow, an administrator will set group policy for
Kerberos authentication tickets (TGT) to be used during login.
The incoming trust-based flow works for clients running Windows 10 or Windows Server 2012 and higher. This
flow requires that clients be joined to AD and have a line of sight to AD from on-premises. In the incoming trust-
based flow, a trust object is created in the customer's AD and is registered in Azure AD. To enable the incoming
trust-based flow, an administrator will set up an incoming trust with Azure AD and set up Kerberos Proxy via
group policy.
Modern interactive authentication flow
The following prerequisites are required to implement the modern interactive authentication flow:

Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows.
Clients must be joined to Azure AD or Hybrid Azure AD. You can determine whether this prerequisite is met by running the dsregcmd command: dsregcmd.exe /status
The application must connect to the managed instance via an interactive session. This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service.
Azure AD tenant.
Azure subscription under the same Azure AD tenant you plan to use for authentication.
Azure AD Connect installed. Applies to hybrid environments where identities exist both in Azure AD and AD.

See How to set up Windows Authentication for Azure Active Directory with the modern interactive flow
(Preview) for steps to enable this authentication flow.
Incoming trust-based authentication flow
The following prerequisites are required to implement the incoming trust-based authentication flow:

Client must run Windows 10, Windows Server 2012, or a higher version of Windows.
Clients must be joined to AD, and the domain must have a functional level of Windows Server 2012 or higher. You can determine whether the client is joined to AD by running the dsregcmd command: dsregcmd.exe /status
Azure AD Hybrid Authentication Management Module. This PowerShell module provides management features for on-premises setup.
Azure tenant.
Azure subscription under the same Azure AD tenant you plan to use for authentication.
Azure AD Connect installed. Applies to hybrid environments where identities exist both in Azure AD and AD.

See How to set up Windows Authentication for Azure Active Directory with the incoming trust based flow
(Preview) for instructions on enabling this authentication flow.

Configure Azure SQL Managed Instance


The steps to set up Azure SQL Managed Instance are the same for both the incoming trust-based authentication
flow and the modern interactive authentication flow.
Prerequisites to configure a managed instance
The following prerequisites are required to configure a managed instance for Windows Authentication for Azure
AD principals:

Az.Sql PowerShell module: This PowerShell module provides management cmdlets for Azure SQL resources. Install this module by running the following PowerShell command: Install-Module -Name Az.Sql
Azure Active Directory PowerShell module: This module provides management cmdlets for Azure AD administrative tasks such as user and service principal management. Install this module by running the following PowerShell command: Install-Module -Name AzureAD
A managed instance: You may create a new managed instance or use an existing managed instance.

Configure each managed instance


See Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory for steps to
configure each managed instance.

Limitations
The following limitations apply to Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
Not available for Linux clients
Windows Authentication for Azure AD principals is currently supported only for client machines running
Windows.
Azure AD cached logon
Windows limits how often it connects to Azure AD, so there is a potential for user accounts to not have a refreshed Kerberos Ticket Granting Ticket (TGT) within 4 hours of an upgrade or fresh deployment of a client machine. User accounts that do not have a refreshed TGT will have failed ticket requests from Azure AD.
As an administrator, you can trigger an online logon immediately to handle upgrade scenarios by running the
following command on the client machine, then locking and unlocking the user session to get a refreshed TGT:

dsregcmd.exe /RefreshPrt

Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)
How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
How to set up Windows Authentication for Azure
Active Directory with the modern interactive flow
(Preview)
7/12/2022 • 2 minutes to read

This article describes how to implement the modern interactive authentication flow to allow enlightened clients
running Windows 10 20H1, Windows Server 2022, or a higher version of Windows to authenticate to Azure
SQL Managed Instance using Windows Authentication. Clients must be joined to Azure Active Directory (Azure
AD) or Hybrid Azure AD.
Enabling the modern interactive authentication flow is one step in setting up Windows Authentication for Azure
SQL Managed Instance using Azure Active Directory and Kerberos (Preview). The incoming trust-based flow
(Preview) is available for AD joined clients running Windows 10 / Windows Server 2012 and higher.
With this preview, Azure AD is now its own independent Kerberos realm. Windows 10 21H1 clients are already
enlightened and will redirect clients to access Azure AD Kerberos to request a Kerberos ticket. The capability for
clients to access Azure AD Kerberos is switched off by default and can be enabled by modifying group policy.
Group policy can be used to deploy this feature in a staged manner by choosing specific clients you want to pilot
on and then expanding it to all the clients across your environment.

Prerequisites
There is no AD to Azure AD set up required for enabling software running on Azure AD Joined VMs to access
Azure SQL Managed Instance using Windows Authentication. The following prerequisites are required to
implement the modern interactive authentication flow:

Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows.
Clients must be joined to Azure AD or Hybrid Azure AD. You can determine whether this prerequisite is met by running the dsregcmd command: dsregcmd.exe /status
The application must connect to the managed instance via an interactive session. This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service.
Azure AD tenant.
Azure AD Connect installed. Applies to hybrid environments where identities exist both in Azure AD and AD.

Configure group policy


Enable the following group policy setting: Administrative Templates\System\Kerberos\Allow retrieving the cloud Kerberos ticket during the logon.
1. Open the group policy editor.
2. Navigate to Administrative Templates\System\Kerberos\ .
3. Select the Allow retrieving the cloud kerberos ticket during the logon setting.

4. In the setting dialog, select Enabled .


5. Select OK .

Refresh PRT (optional)


Users with existing logon sessions may need to refresh their Azure AD Primary Refresh Token (PRT) if they
attempt to use this feature immediately after it has been enabled. It can take up to a few hours for the PRT to
refresh on its own.
To refresh PRT manually, run this command from a command prompt:

dsregcmd.exe /RefreshPrt

Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
How to set up Windows Authentication for Azure
AD with the incoming trust-based flow (Preview)
7/12/2022 • 6 minutes to read • Edit Online

This article describes how to implement the incoming trust-based authentication flow to allow Active Directory
(AD) joined clients running Windows 10, Windows Server 2012, or higher versions of Windows to authenticate
to an Azure SQL Managed Instance using Windows Authentication. This article also shares steps to rotate a
Kerberos Key for your Azure Active Directory (Azure AD) service account and Trusted Domain Object, and steps
to remove a Trusted Domain Object and all Kerberos settings, if desired.
Enabling the incoming trust-based authentication flow is one step in setting up Windows Authentication for
Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview). The modern interactive flow
(Preview) is available for enlightened clients running Windows 10 20H1, Windows Server 2022, or a higher
version of Windows.

Permissions
To complete the steps outlined in this article, you will need:
An on-premises Active Directory administrator username and password.
Azure AD global administrator account username and password.

Prerequisites
To implement the incoming trust-based authentication flow, first ensure that the following prerequisites have
been met:

Prerequisite: Client must run Windows 10, Windows Server 2012, or a higher version of Windows.

Prerequisite: Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher.
Description: You can determine if the client is joined to AD by running the dsregcmd command: dsregcmd.exe /status

Prerequisite: Azure AD Hybrid Authentication Management Module.
Description: This PowerShell module provides management features for on-premises setup.

Prerequisite: Azure tenant.

Prerequisite: Azure subscription under the same Azure AD tenant you plan to use for authentication.

Prerequisite: Azure AD Connect installed.
Description: Hybrid environments where identities exist both in Azure AD and AD.
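
For example, you can verify the domain functional level and the client join state before continuing. This sketch assumes the ActiveDirectory PowerShell module (RSAT) is available on a domain-joined machine:

# Must report Windows2012Domain or higher.
Import-Module ActiveDirectory
(Get-ADDomain).DomainMode

# Confirm the client is joined to AD.
dsregcmd.exe /status | Select-String -Pattern "DomainJoined"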

Create and configure the Azure AD Kerberos Trusted Domain Object


To create and configure the Azure AD Kerberos Trusted Domain Object, you will install the Azure AD Hybrid
Authentication Management PowerShell module.
You will then use the Azure AD Hybrid Authentication Management PowerShell module to set up a Trusted
Domain Object in the on-premises AD domain and register trust information with Azure AD. This creates an
inbound trust relationship into the on-premises AD, which enables on-premises AD to trust Azure AD.
Set up the Trusted Domain Object
To set up the Trusted Domain Object, first install the Azure AD Hybrid Authentication Management PowerShell
module.
Install the Azure AD Hybrid Authentication Management PowerShell module
1. Start a Windows PowerShell session with the Run as administrator option.
2. Install the Azure AD Hybrid Authentication Management PowerShell module using the following script.
The script:
Enables TLS 1.2 for communication.
Installs the NuGet package provider.
Registers the PSGallery repository.
Installs the PowerShellGet module.
Installs the Azure AD Hybrid Authentication Management PowerShell module.
The Azure AD Hybrid Authentication Management PowerShell module uses the AzureADPreview
module, which provides advanced Azure AD management features.
To protect against unnecessary installation conflicts with the AzureAD PowerShell module, this
command includes the -AllowClobber option.

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

Install-PackageProvider -Name NuGet -Force

if (@(Get-PSRepository | ? {$_.Name -eq "PSGallery"}).Count -eq 0){


    Register-PSRepository -Default
}

Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted

Install-Module -Name PowerShellGet -Force

Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber

Create the Trusted Domain Object


1. Start a Windows PowerShell session with the Run as administrator option.
2. Set the common parameters. Customize the script below prior to running it.
Set the $domain parameter to your on-premises Active Directory domain name.
When prompted by Get-Credential , enter an on-premises Active Directory administrator username
and password.
Set the $cloudUserName parameter to the username of a Global Administrator privileged account for
Azure AD cloud access.

NOTE
If you wish to use your current Windows login account for your on-premises Active Directory access, you can skip
the step where credentials are assigned to the $domainCred parameter. If you take this approach, do not include
the -DomainCredential parameter in the PowerShell commands following this step.
$domain = "your on-premesis domain name, for example contoso.com"

$domainCred = Get-Credential

$cloudUserName = "Azure AD user principal name, for example admin@contoso.onmicrosoft.com"

3. Check the current Kerberos Domain Settings.


Run the following command to check your domain's current Kerberos settings:

Get-AzureAdKerberosServer -Domain $domain `


-DomainCredential $domainCred `
-UserPrincipalName $cloudUserName

If this is the first time calling any Azure AD Kerberos command, you will be prompted for Azure AD cloud
access.
Enter the password for your Azure AD global administrator account.
If your organization uses other modern authentication methods such as Azure AD Multi-Factor
Authentication or smart cards, follow the sign-in instructions as requested.
If this is the first time you're configuring Azure AD Kerberos settings, the Get-AzureAdKerberosServer
cmdlet will display empty information, as in the following sample output:

ID :
UserAccount :
ComputerAccount :
DisplayName :
DomainDnsName :
KeyVersion :
KeyUpdatedOn :
KeyUpdatedFrom :
CloudDisplayName :
CloudDomainDnsName :
CloudId :
CloudKeyVersion :
CloudKeyUpdatedOn :
CloudTrustDisplay :

If your domain already supports FIDO authentication, the Get-AzureAdKerberosServer cmdlet will display
Azure AD Service account information, as in the following sample output. Note that the
CloudTrustDisplay field returns an empty value.

ID : 25614
UserAccount : CN=krbtgt-AzureAD, CN=Users, DC=aadsqlmi, DC=net
ComputerAccount : CN=AzureADKerberos, OU=Domain Controllers, DC=aadsqlmi, DC=net
DisplayName : krbtgt_25614
DomainDnsName : aadsqlmi.net
KeyVersion : 53325
KeyUpdatedOn : 2/24/2022 9:03:15 AM
KeyUpdatedFrom : ds-aad-auth-dem.aadsqlmi.net
CloudDisplayName : krbtgt_25614
CloudDomainDnsName : aadsqlmi.net
CloudId : 25614
CloudKeyVersion : 53325
CloudKeyUpdatedOn : 2/24/2022 9:03:15 AM
CloudTrustDisplay :

4. Add the Trusted Domain Object.


Run the Set-AzureAdKerberosServer PowerShell cmdlet to add the Trusted Domain Object. Be sure to
include the -SetupCloudTrust parameter. If there is no Azure AD service account, this command will create a
new Azure AD service account. If there is already an Azure AD service account, this command will only
create the requested Trusted Domain Object.

Set-AzureAdKerberosServer -Domain $domain `


-DomainCredential $domainCred `
-UserPrincipalName $cloudUserName `
-SetupCloudTrust

After creating the Trusted Domain Object, you can check the updated Kerberos Settings using the
Get-AzureAdKerberosServer PowerShell cmdlet, as shown in the previous step. If the
Set-AzureAdKerberosServer cmdlet has been run successfully with the -SetupCloudTrust parameter, the
CloudTrustDisplay field should now return Microsoft.AzureAD.Kdc.Service.TrustDisplay , as in the
following sample output:

ID : 25614
UserAccount : CN=krbtgt-AzureAD, CN=Users, DC=aadsqlmi, DC=net
ComputerAccount : CN=AzureADKerberos, OU=Domain Controllers, DC=aadsqlmi, DC=net
DisplayName : krbtgt_25614
DomainDnsName : aadsqlmi.net
KeyVersion : 53325
KeyUpdatedOn : 2/24/2022 9:03:15 AM
KeyUpdatedFrom : ds-aad-auth-dem.aadsqlmi.net
CloudDisplayName : krbtgt_25614
CloudDomainDnsName : aadsqlmi.net
CloudId : 25614
CloudKeyVersion : 53325
CloudKeyUpdatedOn : 2/24/2022 9:03:15 AM
CloudTrustDisplay : Microsoft.AzureAD.Kdc.Service.TrustDisplay

Configure the Group Policy Object (GPO)


1. Identify your Azure AD tenant ID.
2. Deploy the following Group Policy setting to client machines using the incoming trust-based flow:
a. Edit the Administrative Templates\System\Kerberos\Specify KDC proxy ser vers for
Kerberos clients policy setting.
b. Select Enabled .
c. Under Options , select Show.... This opens the Show Contents dialog box.
d. Define the KDC proxy servers settings using mappings as follows. Substitute your Azure AD tenant
ID for the your_Azure_AD_tenant_id placeholder. Note the space following https and the space
prior to the closing / in the value mapping.

Value name: KERBEROS.MICROSOFTONLINE.COM
Value: <https login.microsoftonline.com:443:your_Azure_AD_tenant_id/kerberos />

e. Select OK to close the 'Show Contents' dialog box.


f. Select Apply on the 'Specify KDC proxy servers for Kerberos clients' dialog box.

Rotate the Kerberos Key


You may periodically rotate the Kerberos Key for the created Azure AD Service account and Trusted Domain
Object for management purposes.
Set-AzureAdKerberosServer -Domain $domain `
-DomainCredential $domainCred `
-UserPrincipalName $cloudUserName -SetupCloudTrust `
-RotateServerKey

Once the key is rotated, it takes several hours to propagate the changed key between the Kerberos KDC servers.
Due to this key distribution timing, you are limited to rotating the key once within 24 hours. If you need to rotate
the key again within 24 hours for any reason, for example just after creating the Trusted Domain Object, you can
add the -Force parameter:

Set-AzureAdKerberosServer -Domain $domain `


-DomainCredential $domainCred `
-UserPrincipalName $cloudUserName -SetupCloudTrust `
-RotateServerKey -Force

Remove the Trusted Domain Object


You can remove the added Trusted Domain Object using the following command:

Remove-AzureADKerberosTrustedDomainObject -Domain $domain `


-DomainCredential $domainCred `
-UserPrincipalName $cloudUserName

This command will only remove the Trusted Domain Object. If your domain supports FIDO authentication, you
can remove the Trusted Domain Object while maintaining the Azure AD Service account required for the FIDO
authentication service.

Remove all Kerberos Settings


You can remove both the Azure AD Service account and the Trusted Domain Object using the following
command:

Remove-AzureAdKerberosServer -Domain $domain `


-DomainCredential $domainCred `
-UserPrincipalName $cloudUserName

Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
Configure Azure SQL Managed Instance for
Windows Authentication for Azure Active Directory
(Preview)
7/12/2022 • 2 minutes to read • Edit Online

This article describes how to configure a managed instance to support Windows Authentication for Azure AD
principals. The steps to set up Azure SQL Managed Instance are the same for both the incoming trust-based
authentication flow and the modern interactive authentication flow.

Prerequisites
The following prerequisites are required to configure a managed instance for Windows Authentication for Azure
AD principals:

Prerequisite: Az.Sql PowerShell module
Description: This PowerShell module provides management cmdlets for Azure SQL resources. Install this module by running the following PowerShell command: Install-Module -Name Az.Sql

Prerequisite: Azure Active Directory PowerShell module
Description: This module provides management cmdlets for Azure AD administrative tasks such as user and service principal management. Install this module by running the following PowerShell command: Install-Module -Name AzureAD

Prerequisite: A managed instance
Description: You may create a new managed instance or use an existing managed instance. You must enable Azure AD authentication on the managed instance.

Configure Azure AD Authentication for Azure SQL Managed Instance


To enable Windows Authentication for Azure AD Principals, you need to enable a system assigned service
principal on each managed instance. The system assigned service principal allows managed instance users to
authenticate using the Kerberos protocol. You also need to grant admin consent to each service principal.
Enable a system assigned service principal
To enable a system assigned service principal for a managed instance:
1. Sign in to the Azure portal.
2. Navigate to your managed instance.
3. Select Identity.
4. Set System assigned service principal to On.
5. Select Save.
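
Alternatively, the system assigned service principal can be enabled with the Az.Sql PowerShell module listed in the prerequisites. This is a sketch using placeholder resource names; the parameter name may differ across module versions:

# Placeholder names; enable the system assigned service principal on an existing managed instance.
Set-AzSqlInstance -ResourceGroupName "myResourceGroup" `
    -Name "mymanagedinstance" `
    -ServicePrincipalType SystemAssigned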
Grant admin consent to a system assigned service principal
1. Sign in to the Azure portal.
2. Open Azure Active Directory.
3. Select App registrations .
4. Select All applications .

5. Select the application with the display name matching your managed instance. The name will be in the
format: <managedinstancename> principal .
6. Select API permissions .
7. Select Grant admin consent .

8. Select Yes on the prompt to Grant admin consent confirmation .


Connect to the managed instance with Windows Authentication
If you have already implemented either the incoming trust-based authentication flow or the modern interactive
authentication flow, depending on the version of your client, you can now test connecting to your managed
instance with Windows Authentication.
To test the connection with SQL Server Management Studio (SSMS), follow the steps in Quickstart: Use SSMS to
connect to and query Azure SQL Database or Azure SQL Managed Instance. Select Windows Authentication
as your authentication type.
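
You can also verify the connection outside of SSMS, for example from a PowerShell session running under the Windows identity you want to test. The host name below is a placeholder; Integrated Security instructs the driver to use the current Windows (Kerberos) credentials:

# Placeholder host name; connects with the Windows identity of the current session.
$connectionString = "Server=mymanagedinstance.12a34b5c67ce.database.windows.net;" +
                    "Database=master;Integrated Security=True;Encrypt=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()

# Returns the authenticated login name, confirming that Windows Authentication succeeded.
$command = $connection.CreateCommand()
$command.CommandText = "SELECT SUSER_SNAME();"
$command.ExecuteScalar()

$connection.Close()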

Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
Run a trace against Azure SQL Managed Instance
using Windows Authentication for Azure Active
Directory principals (preview)
7/12/2022 • 3 minutes to read • Edit Online

This article shows how to connect and run a trace against Azure SQL Managed Instance using Windows
Authentication for Azure Active Directory (Azure AD) principals. Windows authentication provides a convenient
way for customers to connect to a managed instance, especially for database administrators and developers
who are accustomed to launching SQL Server Management Studio (SSMS) with their Windows credentials.
This article shares two options to run a trace against a managed instance: you can trace with extended events or
with SQL Server Profiler. While SQL Server Profiler may still be used, the trace functionality used by SQL Server
Profiler is deprecated and will be removed in a future version of Microsoft SQL Server.

Prerequisites
To use Windows Authentication to connect to and run a trace against a managed instance, you must first meet
the following prerequisites:
Set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos
(Preview).
Install SQL Server Management Studio (SSMS) on the client that is connecting to the managed instance. The
SSMS installation includes SQL Server Profiler and built-in components to create and run extended events
traces.
Enable tooling on your client machine to connect to the managed instance. This may be done by any of the
following:
Configure an Azure VM to connect to Azure SQL Managed Instance.
Configure a point-to-site connection to Azure SQL Managed Instance from on-premises.
Configure a public endpoint in Azure SQL Managed Instance.
To create or modify extended events sessions, ensure that your account has the server permission of ALTER
ANY EVENT SESSION on the managed instance.
To create or modify traces in SQL Server Profiler, ensure that your account has the server permission of
ALTER TRACE on the managed instance.
If you have not yet enabled Windows authentication for Azure AD principals against your managed instance,
you may run a trace against a managed instance using an Azure AD Authentication option, including:
'Azure Active Directory - Password'
'Azure Active Directory - Universal with MFA'
'Azure Active Directory – Integrated'

Run a trace with extended events


To run a trace with extended events against a managed instance using Windows Authentication, you will first
connect Object Explorer to your managed instance using Windows Authentication.
1. Launch SQL Server Management Studio from a client machine where you have logged in using Windows
Authentication.
2. The 'Connect to Server' dialog box should automatically appear. If it does not, ensure that Object
Explorer is open and select Connect .
3. Enter the name of your managed instance as the Server name. The name of your managed instance
should be in a format similar to managedinstancename.12a34b5c67ce.database.windows.net.
4. For Authentication, select Windows Authentication.

5. Select Connect .
Now that Object Explorer is connected, you can create and run an extended events trace. Follow the steps in
Quick Start: Extended events in SQL Server to learn how to create, test, and display the results of an extended
events session.
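
As a minimal sketch, the following creates and starts a simple extended events session from the same Windows-authenticated context by using Invoke-Sqlcmd (SqlServer PowerShell module); the instance and session names are placeholders:

# Placeholder instance name; Invoke-Sqlcmd uses Windows Authentication when no credential is supplied.
Invoke-Sqlcmd -ServerInstance "mymanagedinstance.12a34b5c67ce.database.windows.net" -Query @"
CREATE EVENT SESSION [demo_batch_trace] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed
    ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION [demo_batch_trace] ON SERVER STATE = START;
"@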

Run a trace with Profiler


To run a trace with SQL Server Profiler against a managed instance using Windows Authentication, launch the
Profiler application. Profiler may be run from the Windows Start menu or from SQL Server Management Studio.
1. On the File menu, select New Trace .
2. Enter the name of your managed instance as the Server name. The name of your managed instance
should be in a format similar to managedinstancename.12a34b5c67ce.database.windows.net.
3. For Authentication, select Windows Authentication.

4. Select Connect .
5. Follow the steps in Create a Trace (SQL Server Profiler) to configure the trace.
6. Select Run after configuring the trace.
Next steps
Learn more about Windows Authentication for Azure AD principals with Azure SQL Managed Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
Extended Events
Troubleshoot Windows Authentication for Azure AD
principals on Azure SQL Managed Instance
7/12/2022 • 2 minutes to read • Edit Online

This article contains troubleshooting steps for use when implementing Windows Authentication for Azure AD
principals.

Verify tickets are getting cached


Use the klist command to display a list of currently cached Kerberos tickets.
The klist get krbtgt command should return a ticket from the on-premises Active Directory realm.

klist get krbtgt/kerberos.microsoftonline.com

The klist get MSSQLSvc command should return a ticket from the kerberos.microsoftonline.com realm with a
Service Principal Name (SPN) of MSSQLSvc/<miname>.<dnszone>.database.windows.net:1433.

klist get MSSQLSvc/<miname>.<dnszone>.database.windows.net:1433

The following are some well-known error codes:


0x6fb: SQL SPN not found - Check that you’ve entered a valid SPN. If you've implemented the incoming
trust-based authentication flow, revisit steps to create and configure the Azure AD Kerberos Trusted Domain
Object to validate that you’ve performed all the configuration steps.
0x51f - This error is likely related to a conflict with the Fiddler tool. Deactivate Fiddler to mitigate the issue.

Investigate message flow failures


Use Wireshark, or the network traffic analyzer of your choice, to monitor traffic between the client and the
on-premises Kerberos Key Distribution Center (KDC).
When using Wireshark the following is expected:
AS-REQ: Client => on-prem KDC => returns on-prem TGT.
TGS-REQ: Client => on-prem KDC => returns referral to kerberos.microsoftonline.com .

Next steps
Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed
Instance:
What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance?
(Preview)
How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and
Kerberos (Preview)
How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory
and Kerberos (Preview)
How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)
How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
Azure SQL Managed Instance content reference
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


In this article you can find a content reference to various guides, scripts, and explanations that help you manage
and configure Azure SQL Managed Instance.

Load data
SQL Server to Azure SQL Managed Instance Guide: Learn about the recommended migration process and
tools for migration to Azure SQL Managed Instance.
Migrate TDE cert to Azure SQL Managed Instance: If your SQL Server database is protected with transparent
data encryption (TDE), you need to migrate the certificate so that SQL Managed Instance can use it to
decrypt the backup that you want to restore in Azure.
Import a DB from a BACPAC
Export a DB to BACPAC
Load data with BCP
Load data with Azure Data Factory

Network configuration
Determine subnet size: Since the subnet cannot be resized after SQL Managed Instance is deployed, you need
to calculate the IP address range required for the number and types of managed instances you plan
to deploy to the subnet.
Create a new VNet and subnet: Configure the virtual network and subnet according to the network
requirements.
Configure an existing VNet and subnet: Verify network requirements and configure your existing virtual
network and subnet to deploy SQL Managed Instance.
Configure service endpoint policies for Azure Storage (Preview): Secure your subnet against erroneous or
malicious data exfiltration into unauthorized Azure Storage accounts.
Configure custom DNS: Configure custom DNS to grant external resource access to custom domains from
SQL Managed Instance via linked servers or Database Mail profiles.
Find the management endpoint IP address: Determine the public endpoint that SQL Managed Instance is
using for management purposes.
Verify built-in firewall protection: Verify that SQL Managed Instance allows traffic only on necessary ports,
and other built-in firewall rules.
Connect applications: Learn about different patterns for connecting the applications to SQL Managed
Instance.

Feature configuration
Configure Azure AD auth
Configure conditional access
Multi-factor Azure AD auth
Configure multi-factor auth
Configure auto-failover group to automatically failover all databases on an instance to a secondary instance
in another region in the event of a disaster.
Configure a temporal retention policy
Configure In-Memory OLTP
Configure Azure Automation
Transactional replication enables you to replicate your data between managed instances, or from SQL Server
on-premises to SQL Managed Instance, and vice versa.
Configure threat detection – threat detection is a built-in Azure SQL Managed Instance feature that detects
various potential attacks such as SQL injection or access from suspicious locations.
Creating alerts enables you to set up alerts on monitored metrics such as CPU utilization, storage space
consumption, IOPS and others for SQL Managed Instance.
Transparent Data Encryption
Configure TDE with BYOK
Rotate TDE BYOK keys
Remove a TDE protector
Managed Instance link feature
Prepare environment for link feature
Replicate database with link feature in SSMS
Replicate database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
Failover database with link feature in SSMS - Azure SQL Managed Instance
Failover (migrate) database with Azure SQL Managed Instance link feature with T-SQL and PowerShell scripts
Best practices with link feature for Azure SQL Managed Instance

Monitoring and tuning


Manual tuning
Use DMVs to monitor performance
Use Query Store to monitor performance
Troubleshoot performance with Intelligent Insights
Use the Intelligent Insights diagnostics log
Monitor In-Memory OLTP space
Extended events
Extended events
Store extended events into an event file
Store extended events into a ring buffer
Alerting
Create alerts on managed instance

Operations
User-initiated manual failover on SQL Managed Instance

Develop applications
Connectivity
Use Spark Connector
Authenticate an app
Use batching for better performance
Connectivity guidance
DNS aliases
Set up a DNS alias by using PowerShell
Ports - ADO.NET
C and C++
Excel

Design applications
Design for disaster recovery
Design for elastic pools
Design for app upgrades
Design Multi-tenant SaaS applications
SaaS design patterns
SaaS video indexer
SaaS app security

Next steps
Get started by deploying SQL Managed Instance.
Connect your application to Azure SQL Managed
Instance
7/12/2022 • 8 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Today you have multiple choices when deciding how and where you host your application.
You may choose to host application in the cloud by using Azure App Service or some of Azure's virtual network
integrated options like Azure App Service Environment, Azure Virtual Machines, and virtual machine scale sets.
You could also take hybrid cloud approach and keep your applications on-premises.
Whatever choice you make, you can connect it to Azure SQL Managed Instance.
This article describes how to connect an application to Azure SQL Managed Instance in a number of different
application scenarios from inside the virtual network.

IMPORTANT
You can also enable data access to your managed instance from outside a virtual network. You are able to access your
managed instance from multi-tenant Azure services like Power BI or Azure App Service, or from an on-premises network that is
not connected to a VPN, by using the public endpoint on a managed instance. You will need to enable the public endpoint on
the managed instance and allow public endpoint traffic on the network security group associated with the managed
instance subnet. See more important details on Configure public endpoint in Azure SQL Managed Instance.

Connect inside the same VNet


Connecting an application inside the same virtual network as SQL Managed Instance is the simplest scenario.
Virtual machines inside the virtual network can connect to each other directly even if they are inside different
subnets. That means that all you need to connect an application inside App Service Environment or a virtual
machine is to set the connection string appropriately.
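
For example, a minimal connection test from a virtual machine in the same virtual network might look like the following; the host name, database, and credentials are placeholders:

# Placeholder values; connect over the private endpoint using SQL authentication.
$connectionString = "Server=mymanagedinstance.12a34b5c67ce.database.windows.net;" +
                    "Database=mydatabase;User Id=myadmin;Password=<password>;Encrypt=True"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()
$connection.State   # Expected: Open
$connection.Close()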
Connect inside a different VNet
Connecting an application when it resides within a different virtual network from SQL Managed Instance is a bit
more complex because SQL Managed Instance has private IP addresses in its own virtual network. To connect,
an application needs access to the virtual network where SQL Managed Instance is deployed. So you need to
make a connection between the application and the SQL Managed Instance virtual network. The virtual
networks don't have to be in the same subscription in order for this scenario to work.
There are two options for connecting virtual networks:
Azure VNet peering
VNet-to-VNet VPN gateway (Azure portal, PowerShell, Azure CLI)
Peering is preferable because it uses the Microsoft backbone network, so from the connectivity perspective,
there is no noticeable difference in latency between virtual machines in a peered virtual network and in the
same virtual network. Virtual network peering is supported between networks in the same region. Global
virtual network peering is also supported with the limitation described in the note below.

IMPORTANT
On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced. It means that
global virtual network peering is supported for SQL managed instances created in empty subnets after the
announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL
managed instances peering support is limited to the networks in the same region due to the constraints of global virtual
network peering. See also the relevant section of the Azure Virtual Networks frequently asked questions article for more
details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before
the announcement date, consider configuring maintenance window on the instances, as it will move the instances into
new virtual clusters that support global virtual network peering.
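
As an illustrative sketch, two virtual networks can be peered with the Az PowerShell module as shown below; the network and resource group names are placeholders, and the peering must be created in both directions:

# Placeholder names; create the peering from each side.
$appVnet = Get-AzVirtualNetwork -Name "app-vnet" -ResourceGroupName "app-rg"
$miVnet  = Get-AzVirtualNetwork -Name "sqlmi-vnet" -ResourceGroupName "sqlmi-rg"

Add-AzVirtualNetworkPeering -Name "app-to-sqlmi" -VirtualNetwork $appVnet -RemoteVirtualNetworkId $miVnet.Id
Add-AzVirtualNetworkPeering -Name "sqlmi-to-app" -VirtualNetwork $miVnet -RemoteVirtualNetworkId $appVnet.Id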

Connect from on-premises


You can also connect your on-premises application to SQL Managed Instance via virtual network (private IP
address). In order to access it from on-premises, you need to make a site-to-site connection between the
application and the SQL Managed Instance virtual network. For data access to your managed instance from
outside a virtual network see Configure public endpoint in Azure SQL Managed Instance.
There are two options for how to connect on-premises to an Azure virtual network:
Site-to-site VPN connection (Azure portal, PowerShell, Azure CLI)
Azure ExpressRoute connection
If you've established an on-premises to Azure connection successfully and you can't establish a connection to
SQL Managed Instance, check if your firewall has an open outbound connection on SQL port 1433 as well as the
11000-11999 range of ports for redirection.
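
You can check basic port reachability from the on-premises client with Test-NetConnection; the host name is a placeholder:

# Placeholder host name; TcpTestSucceeded should be True for port 1433.
# Redirection additionally uses ports in the 11000-11999 range.
Test-NetConnection -ComputerName "mymanagedinstance.12a34b5c67ce.database.windows.net" -Port 1433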

Connect the developer box


It is also possible to connect your developer box to SQL Managed Instance. In order to access it from your
developer box via virtual network, you first need to make a connection between your developer box and the
SQL Managed Instance virtual network. To do so, configure a point-to-site connection to a virtual network using
native Azure certificate authentication. For more information, see Configure a point-to-site connection to
connect to Azure SQL Managed Instance from an on-premises computer.
For data access to your managed instance from outside a virtual network see Configure public endpoint in
Azure SQL Managed Instance.
Connect with VNet peering
Another scenario implemented by customers is where a VPN gateway is installed in a separate virtual network
and subscription from the one hosting SQL Managed Instance. The two virtual networks are then peered. The
following sample architecture diagram shows how this can be implemented.

Once you have the basic infrastructure set up, you need to modify some settings so that the VPN gateway can
see the IP addresses in the virtual network that hosts SQL Managed Instance. To do so, make the following very
specific changes under the Peering settings .
1. In the virtual network that hosts the VPN gateway, go to Peerings , go to the peered virtual network
connection for SQL Managed Instance, and then click Allow Gateway Transit .
2. In the virtual network that hosts SQL Managed Instance, go to Peerings , go to the peered virtual network
connection for the VPN gateway, and then click Use remote gateways .

Connect Azure App Service


You can also connect an application that's hosted by Azure App Service. In order to access it from Azure App
Service via virtual network, you first need to make a connection between the application and the SQL Managed
Instance virtual network. See Integrate your app with an Azure virtual network. For data access to your
managed instance from outside a virtual network see Configure public endpoint in Azure SQL Managed
Instance.
For troubleshooting Azure App Service access via virtual network, see Troubleshooting virtual networks and
applications.
A special case of connecting Azure App Service to SQL Managed Instance is when you integrate Azure App
Service to a network peered to a SQL Managed Instance virtual network. That case requires the following
configuration to be set up:
SQL Managed Instance virtual network must NOT have a gateway
SQL Managed Instance virtual network must have the Use remote gateways option set
Peered virtual network must have the Allow gateway transit option set

This scenario is illustrated in the following diagram:


NOTE
The virtual network integration feature does not integrate an app with a virtual network that has an ExpressRoute
gateway. Even if the ExpressRoute gateway is configured in coexistence mode, virtual network integration does not work.
If you need to access resources through an ExpressRoute connection, then you can use App Service Environment, which
runs in your virtual network.

Troubleshooting connectivity issues


For troubleshooting connectivity issues, review the following:
If you are unable to connect to SQL Managed Instance from an Azure virtual machine within the same
virtual network but a different subnet, check if you have a Network Security Group set on VM subnet that
might be blocking access. Additionally, open outbound connection on SQL port 1433 as well as ports in
the range 11000-11999, since those are needed for connecting via redirection inside the Azure boundary.
Ensure that BGP Propagation is set to Enabled for the route table associated with the virtual network.
If using P2S VPN, check the configuration in the Azure portal to see if you see Ingress/Egress numbers.
Non-zero numbers indicate that Azure is routing traffic to/from on-premises.
Check that the client machine (that is running the VPN client) has route entries for all the virtual networks
that you need to access. The routes are stored in
%AppData%\Roaming\Microsoft\Network\Connections\Cm\<GUID>\routes.txt .

In this file, there are two entries for each virtual network involved and a third entry for the
VPN endpoint that is configured in the portal.
Another way to check the routes is via the following command. The output shows the routes to the
various subnets:
C:\>route print -4
===========================================================================
Interface List
14...54 ee 75 67 6b 39 ......Intel(R) Ethernet Connection (3) I218-LM
57...........................rndatavnet
18...94 65 9c 7d e5 ce ......Intel(R) Dual Band Wireless-AC 7265
 1...........................Software Loopback Interface 1
===========================================================================

IPv4 Route Table


===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 10.83.72.1 10.83.74.112 35
10.0.0.0 255.255.255.0 On-link 172.26.34.2 43
10.4.0.0 255.255.255.0 On-link 172.26.34.2 43
===========================================================================
Persistent Routes:
None

If you're using virtual network peering, ensure that you have followed the instructions for setting Allow
Gateway Transit and Use Remote Gateways.
If you're using virtual network peering to connect an Azure App Service hosted application, and the SQL
Managed Instance virtual network has a public IP address range, make sure that your hosted application
settings allow your outbound traffic to be routed to public IP networks. Follow the instructions in
Regional virtual network integration.

Required versions of drivers and tools


The following minimal versions of the tools and drivers are recommended if you want to connect to SQL
Managed Instance:

DRIVER/TOOL    VERSION

.NET Framework 4.6.1 (or .NET Core)

ODBC driver v17

PHP driver 5.2.0

JDBC driver 6.4.0

Node.js driver 2.1.1

OLEDB driver 18.0.2.0

SSMS 18.0 or higher

SMO 150 or higher

Next steps
For information about SQL Managed Instance, see What is SQL Managed Instance?.
For a tutorial showing you how to create a new managed instance, see Create a managed instance.
Automate management tasks using SQL Agent jobs
in Azure SQL Managed Instance
7/12/2022 • 8 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Using SQL Server Agent in SQL Server and SQL Managed Instance, you can create and schedule jobs that can
be periodically executed against one or many databases to run Transact-SQL (T-SQL) queries and perform
maintenance tasks. This article covers the use of SQL Agent for SQL Managed Instance.

NOTE
SQL Agent is not available in Azure SQL Database or Azure Synapse Analytics. Instead, we recommend Job automation
with Elastic Jobs.

SQL Agent job limitations in SQL Managed Instance


It is worth noting the differences between SQL Agent available in SQL Server and as part of SQL Managed
Instance. For more on the supported feature differences between SQL Server and SQL Managed Instance, see
Azure SQL Managed Instance T-SQL differences from SQL Server.
Some of the SQL Agent features that are available in SQL Server are not supported in SQL Managed Instance:
SQL Agent settings are read only.
The system stored procedure sp_set_agent_properties is not supported.
Enabling/disabling SQL Agent is currently not supported. SQL Agent is always running.
Notifications are partially supported:
Pager is not supported.
NetSend is not supported.
Alerts are not supported.
Proxies are not supported.
Eventlog is not supported.
Job schedule trigger based on an idle CPU is not supported.

When to use SQL Agent jobs


There are several scenarios when you could use SQL Agent jobs:
Automate management tasks and schedule them to run every weekday, after hours, etc.
Deploy schema changes, credentials management, performance data collection or tenant (customer)
telemetry collection.
Update reference data (information common across all databases), load data from Azure Blob storage.
Microsoft recommends using SHARED ACCESS SIGNATURE authentication to authenticate to Azure
Blob storage.
Common maintenance tasks including DBCC CHECKDB to ensure data integrity or index maintenance to
improve query performance. Configure jobs to execute across a collection of databases on a recurring
basis, such as during off-peak hours.
Collect query results from a set of databases into a central table on an on-going basis. Performance
queries can be continually executed and configured to trigger additional tasks to be executed.
Collect data for reporting
Aggregate data from a collection of databases into a single destination table.
Execute longer running data processing queries across a large set of databases, for example the
collection of customer telemetry. Results are collected into a single destination table for further
analysis.
Data movements
Create jobs that replicate changes made in your databases to other databases or collect updates made
in remote databases and apply changes in the database.
Create jobs that load data from or to your databases using SQL Server Integration Services (SSIS).

SQL Agent jobs in SQL Managed Instance


SQL Agent Jobs are executed by the SQL Agent service that continues to be used for task automation in SQL
Server and SQL Managed Instance.
SQL Agent Jobs are a specified series of T-SQL scripts against your database. Use jobs to define an
administrative task that can be run one or more times and monitored for success or failure.
A job can run on one local server or on multiple remote servers. SQL Agent Jobs are an internal Database
Engine component that is executed within the SQL Managed Instance service.
There are several key concepts in SQL Agent Jobs:
Job steps are a set of one or more steps that should be executed within the job. For every job step you can define
a retry strategy and the action that should happen if the job step succeeds or fails.
Schedules define when the job should be executed.
Notifications enable you to define rules that will be used to notify operators via email once the job
completes.
SQL Agent job steps
SQL Agent job steps are sequences of actions that SQL Agent should execute. Every step defines the next step
that should be executed if the step succeeds or fails, and the number of retries in case of failure.
SQL Agent enables you to create different types of job steps, such as Transact-SQL job steps that execute a single
Transact-SQL batch against the database, or OS command/PowerShell steps that can execute custom OS script,
SSIS job steps that enable you to load data using SSIS runtime, or replication steps that can publish changes
from your database to other databases.

NOTE
For more information on leveraging the Azure SSIS Integration Runtime with SSISDB hosted by SQL Managed Instance,
see Use Azure SQL Managed Instance with SQL Server Integration Services (SSIS) in Azure Data Factory.

Transactional replication can replicate the changes from your tables into other databases in SQL Managed
Instance, Azure SQL Database, or SQL Server. For information, see Configure replication in Azure SQL Managed
Instance.
Other types of job steps are not currently supported in SQL Managed Instance, including:
Merge replication job step is not supported.
Queue Reader is not supported.
Analysis Services are not supported
SQL Agent job schedules
A schedule specifies when a job runs. More than one job can run on the same schedule, and more than one
schedule can apply to the same job.
A schedule can define the following conditions for the time when a job runs:
Whenever SQL Server Agent starts. Job is activated after every failover.
One time, at a specific date and time, which is useful for delayed execution of some job.
On a recurring schedule.
For more information on scheduling a SQL Agent job, see Schedule a Job.
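
As an illustrative sketch, the following creates a job with a single T-SQL step and a daily 2 AM schedule by calling the msdb stored procedures through Invoke-Sqlcmd; the instance, database, and job names are placeholders, and you should authenticate as appropriate for your environment:

# Placeholder instance name; creates a daily integrity-check job in msdb.
Invoke-Sqlcmd -ServerInstance "mymanagedinstance.12a34b5c67ce.database.windows.net" -Database "msdb" -Query @"
EXEC dbo.sp_add_job @job_name = N'DailyMaintenance';
EXEC dbo.sp_add_jobstep @job_name = N'DailyMaintenance', @step_name = N'CheckDb',
    @subsystem = N'TSQL', @command = N'DBCC CHECKDB;', @database_name = N'mydatabase';
EXEC dbo.sp_add_jobschedule @job_name = N'DailyMaintenance', @name = N'Daily2AM',
    @freq_type = 4, @freq_interval = 1, @active_start_time = 020000;
EXEC dbo.sp_add_jobserver @job_name = N'DailyMaintenance', @server_name = N'(local)';
"@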

NOTE
Azure SQL Managed Instance currently does not enable you to start a job when the CPU is idle.

SQL Agent job notifications


SQL Agent jobs enable you to get notifications when the job finishes successfully or fails. You can receive
notifications via email.
If it isn't already enabled, first you would need to configure the Database Mail feature on SQL Managed Instance:

GO
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'Database Mail XPs', 1;
GO
RECONFIGURE

As an example exercise, set up the email account that will be used to send the email notifications, and assign it to
the email profile called AzureManagedInstance_dbmail_profile. To send email using SQL Agent jobs in
SQL Managed Instance, the profile must be named AzureManagedInstance_dbmail_profile.
Otherwise, SQL Managed Instance will be unable to send emails via SQL Agent.

NOTE
For the mail server, we recommend you use authenticated SMTP relay services to send email. These relay services typically
connect through TCP ports 25 or 587 for connections over TLS, or port 465 for SSL connections, however Database Mail
can be configured to use any port. These ports require a new outbound rule in your managed instance's network security
group. These services are used to maintain IP and domain reputation to minimize the possibility that external domains
reject your messages or put them to the SPAM folder. Consider an authenticated SMTP relay service already in your on-
premises servers. In Azure, SendGrid is one such SMTP relay service, but there are others.
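
As an illustrative sketch, an outbound rule for SMTP submission over port 587 could be added to the network security group of the managed instance subnet with the Az PowerShell module; the NSG and resource group names are placeholders:

# Placeholder names; allow outbound TCP 587 from the managed instance subnet.
$nsg = Get-AzNetworkSecurityGroup -Name "sqlmi-nsg" -ResourceGroupName "sqlmi-rg"
$nsg | Add-AzNetworkSecurityRuleConfig -Name "allow-smtp-587-outbound" `
    -Description "Allow Database Mail to reach the SMTP relay" `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 200 `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "Internet" -DestinationPortRange 587 |
    Set-AzNetworkSecurityGroup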

Use the following sample script to create a Database Mail account and profile, then associate them together:
-- Create a Database Mail account
EXECUTE msdb.dbo.sysmail_add_account_sp
@account_name = 'SQL Agent Account',
@description = 'Mail account for Azure SQL Managed Instance SQL Agent system.',
@email_address = '$(loginEmail)',
@display_name = 'SQL Agent Account',
@mailserver_name = '$(mailserver)' ,
@username = '$(loginEmail)' ,
@password = '$(password)';

-- Create a Database Mail profile


EXECUTE msdb.dbo.sysmail_add_profile_sp
@profile_name = 'AzureManagedInstance_dbmail_profile',
@description = 'E-mail profile used for messages sent by Managed Instance SQL Agent.';

-- Add the account to the profile


EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
@profile_name = 'AzureManagedInstance_dbmail_profile',
@account_name = 'SQL Agent Account',
@sequence_number = 1;

Test the Database Mail configuration via T-SQL using the sp_send_dbmail system stored procedure:

DECLARE @body VARCHAR(4000) = 'The email is sent from ' + @@SERVERNAME;


EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'AzureManagedInstance_dbmail_profile',
@recipients = 'ADD YOUR EMAIL HERE',
@body = @body,
@subject = 'Azure SQL Instance - test email';

You can notify the operator that something happened with your SQL Agent jobs. An operator defines contact
information for an individual responsible for the maintenance of one or more instances in SQL Managed
Instance. Sometimes, operator responsibilities are assigned to one individual.
In systems with multiple instances in SQL Managed Instance or SQL Server, many individuals can share operator
responsibilities. An operator does not contain security information, and does not define a security principal.
Ideally, an operator is not an individual whose responsibilities may change, but an email distribution group.
You can create operators using SQL Server Management Studio (SSMS) or the Transact-SQL script shown in the
following example:

EXEC msdb.dbo.sp_add_operator
@name=N'AzureSQLTeam',
@enabled=1,
@email_address=N'AzureSQLTeam@contoso.com';

Confirm the email's success or failure via the Database Mail Log in SSMS.
You can then modify any SQL Agent job and assign operators that will be notified via email if the job completes,
fails, or succeeds using SSMS or the following T-SQL script:

EXEC msdb.dbo.sp_update_job @job_name=N'Load data using SSIS',


@notify_level_email=3, -- Options are: 1 on succeed, 2 on failure, 3 on complete
@notify_email_operator_name=N'AzureSQLTeam';

SQL Agent job history


SQL Managed Instance currently doesn't allow you to change any SQL Agent properties because they are stored
in the underlying registry values. This means options for adjusting the Agent retention policy for job history
records are fixed at the default of 1000 total records and max 100 history records per job.
For more information, see View SQL Agent job history.
SQL Agent fixed database role membership
If users linked to non-sysadmin logins are added to any of the three SQL Agent fixed database roles in the msdb
system database, there exists an issue in which explicit EXECUTE permissions need to be granted to three system
stored procedures in the master database. If this issue is encountered, the error message "The EXECUTE
permission was denied on the object <object_name> (Microsoft SQL Server, Error: 229)" will be shown.
Once you add users to a SQL Agent fixed database role (SQLAgentUserRole, SQLAgentReaderRole, or
SQLAgentOperatorRole) in msdb , for each of the user's logins added to these roles, execute the below T-SQL
script to explicitly grant EXECUTE permissions to the system stored procedures listed. This example assumes that
the user name and login name are the same:

USE [master]
GO
CREATE USER [login_name] FOR LOGIN [login_name];
GO
GRANT EXECUTE ON master.dbo.xp_sqlagent_enum_jobs TO [login_name];
GRANT EXECUTE ON master.dbo.xp_sqlagent_is_starting TO [login_name];
GRANT EXECUTE ON master.dbo.xp_sqlagent_notify TO [login_name];

Learn more
What is Azure SQL Managed Instance?
What's new in Azure SQL Managed Instance?
Azure SQL Managed Instance T-SQL differences from SQL Server
Features comparison: Azure SQL Database and Azure SQL Managed Instance

Next steps
Configure Database Mail
Troubleshoot outbound SMTP connectivity problems in Azure
Time zones in Azure SQL Managed Instance
7/12/2022 • 7 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Coordinated Universal Time (UTC) is the recommended time zone for the data tier of cloud solutions. Azure SQL
Managed Instance also offers a choice of time zones to meet the needs of existing applications that store date
and time values and call date and time functions with an implicit context of a specific time zone.
T-SQL functions like GETDATE() or CLR code observe the time zone set on the instance level. SQL Server Agent
jobs also follow schedules according to the time zone of the instance.

NOTE
Azure SQL Database does not support time zone settings; it always follows UTC. Use AT TIME ZONE in SQL Database if
you need to interpret date and time information in a non-UTC time zone.

Supported time zones


A set of supported time zones is inherited from the underlying operating system of the managed instance. It's
regularly updated to get new time zone definitions and reflect changes to the existing ones.
The daylight saving time/time zone changes policy guarantees historical accuracy from 2010 onward.
A list with names of the supported time zones is exposed through the sys.time_zone_info system view.

Set a time zone


A time zone of a managed instance can be set during instance creation only. The default time zone is UTC.

NOTE
The time zone of an existing managed instance can't be changed.

Set the time zone through the Azure portal


When you enter parameters for a new instance, select a time zone from the list of supported time zones.
Azure Resource Manager template
Specify the timezoneId property in your Resource Manager template to set the time zone during instance
creation.

"properties": {
"administratorLogin": "[parameters('user')]",
"administratorLoginPassword": "[parameters('pwd')]",
"subnetId": "[parameters('subnetId')]",
"storageSizeInGB": 256,
"vCores": 8,
"licenseType": "LicenseIncluded",
"hardwareFamily": "Gen5",
"collation": "Serbian_Cyrillic_100_CS_AS",
"timezoneId": "Central European Standard Time"
},

A list of supported values for the timezoneId property is at the end of this article.
If not specified, the time zone is set to UTC.
Check the time zone of an instance
The CURRENT_TIMEZONE function returns a display name of the time zone of the instance.
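
For example, the current setting can be queried with Invoke-Sqlcmd; the instance name is a placeholder, and you should authenticate as appropriate for your environment:

# Placeholder instance name; returns the display name of the instance time zone.
Invoke-Sqlcmd -ServerInstance "mymanagedinstance.12a34b5c67ce.database.windows.net" `
    -Database "master" `
    -Query "SELECT CURRENT_TIMEZONE() AS CurrentTimeZone;"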

Cross-feature considerations
Restore and import
You can restore a backup file or import data to a managed instance from an instance or a server with different
time zone settings. Make sure to do so with caution. Analyze the application behavior and the results of the
queries and reports, just like when you transfer data between two SQL Server instances with different time zone
settings.
Point-in-time restore
When you perform a point-in-time restore, the time to restore to is interpreted as UTC time. This way any
ambiguities due to daylight saving time and its potential changes are avoided.
Auto -failover groups
Using the same time zone across a primary and secondary instance in a failover group isn't enforced, but we
strongly recommend it.

WARNING
We strongly recommend that you use the same time zone for the primary and secondary instance in a failover group.
Because of certain rare use cases, keeping the same time zone across primary and secondary instances isn't enforced. It's
important to understand that in the case of manual or automatic failover, the secondary instance will retain its original
time zone.

Limitations
The time zone of the existing managed instance can't be changed. As a workaround, create a new managed
instance with the proper time zone and then either perform a manual backup and restore, or what we
recommend, perform a cross-instance point-in-time restore.
External processes launched from the SQL Server Agent jobs don't observe the time zone of the instance.

List of supported time zones


TIMEZONE ID    TIMEZONE DISPLAY NAME

Dateline Standard Time (UTC-12:00) International Date Line West

UTC-11 (UTC-11:00) Coordinated Universal Time-11

Aleutian Standard Time (UTC-10:00) Aleutian Islands

Hawaiian Standard Time (UTC-10:00) Hawaii

Marquesas Standard Time (UTC-09:30) Marquesas Islands

Alaskan Standard Time (UTC-09:00) Alaska

UTC-09 (UTC-09:00) Coordinated Universal Time-09



Pacific Standard Time (Mexico) (UTC-08:00) Baja California

UTC-08 (UTC-08:00) Coordinated Universal Time-08

Pacific Standard Time (UTC-08:00) Pacific Time (US & Canada)

US Mountain Standard Time (UTC-07:00) Arizona

Mountain Standard Time (Mexico) (UTC-07:00) Chihuahua, La Paz, Mazatlan

Mountain Standard Time (UTC-07:00) Mountain Time (US & Canada)

Central America Standard Time (UTC-06:00) Central America

Central Standard Time (UTC-06:00) Central Time (US & Canada)

Easter Island Standard Time (UTC-06:00) Easter Island

Central Standard Time (Mexico) (UTC-06:00) Guadalajara, Mexico City, Monterrey

Canada Central Standard Time (UTC-06:00) Saskatchewan

SA Pacific Standard Time (UTC-05:00) Bogota, Lima, Quito, Rio Branco

Eastern Standard Time (Mexico) (UTC-05:00) Chetumal

Eastern Standard Time (UTC-05:00) Eastern Time (US & Canada)

Haiti Standard Time (UTC-05:00) Haiti

Cuba Standard Time (UTC-05:00) Havana

US Eastern Standard Time (UTC-05:00) Indiana (East)

Turks And Caicos Standard Time (UTC-05:00) Turks and Caicos

Paraguay Standard Time (UTC-04:00) Asuncion

Atlantic Standard Time (UTC-04:00) Atlantic Time (Canada)

Venezuela Standard Time (UTC-04:00) Caracas

Central Brazilian Standard Time (UTC-04:00) Cuiaba

SA Western Standard Time (UTC-04:00) Georgetown, La Paz, Manaus, San Juan

Pacific SA Standard Time (UTC-04:00) Santiago

Newfoundland Standard Time (UTC-03:30) Newfoundland



Tocantins Standard Time (UTC-03:00) Araguaina

E. South America Standard Time (UTC-03:00) Brasilia

SA Eastern Standard Time (UTC-03:00) Cayenne, Fortaleza

Argentina Standard Time (UTC-03:00) City of Buenos Aires

Greenland Standard Time (UTC-03:00) Greenland

Montevideo Standard Time (UTC-03:00) Montevideo

Magallanes Standard Time (UTC-03:00) Punta Arenas

Saint Pierre Standard Time (UTC-03:00) Saint Pierre and Miquelon

Bahia Standard Time (UTC-03:00) Salvador

UTC-02 (UTC-02:00) Coordinated Universal Time-02

Mid-Atlantic Standard Time (UTC-02:00) Mid-Atlantic - Old

Azores Standard Time (UTC-01:00) Azores

Cape Verde Standard Time (UTC-01:00) Cabo Verde Is.

UTC (UTC) Coordinated Universal Time

GMT Standard Time (UTC+00:00) Dublin, Edinburgh, Lisbon, London

Greenwich Standard Time (UTC+00:00) Monrovia, Reykjavik

W. Europe Standard Time (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

Central Europe Standard Time (UTC+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague

Romance Standard Time (UTC+01:00) Brussels, Copenhagen, Madrid, Paris

Morocco Standard Time (UTC+01:00) Casablanca

Sao Tome Standard Time (UTC+01:00) Sao Tome

Central European Standard Time (UTC+01:00) Sarajevo, Skopje, Warsaw, Zagreb

W. Central Africa Standard Time (UTC+01:00) West Central Africa

Jordan Standard Time (UTC+02:00) Amman



GTB Standard Time (UTC+02:00) Athens, Bucharest

Middle East Standard Time (UTC+02:00) Beirut

Egypt Standard Time (UTC+02:00) Cairo

E. Europe Standard Time (UTC+02:00) Chisinau

Syria Standard Time (UTC+02:00) Damascus

West Bank Standard Time (UTC+02:00) Gaza, Hebron

South Africa Standard Time (UTC+02:00) Harare, Pretoria

FLE Standard Time (UTC+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius

Israel Standard Time (UTC+02:00) Jerusalem

Kaliningrad Standard Time (UTC+02:00) Kaliningrad

Sudan Standard Time (UTC+02:00) Khartoum

Libya Standard Time (UTC+02:00) Tripoli

Namibia Standard Time (UTC+02:00) Windhoek

Arabic Standard Time (UTC+03:00) Baghdad

Turkey Standard Time (UTC+03:00) Istanbul

Arab Standard Time (UTC+03:00) Kuwait, Riyadh

Belarus Standard Time (UTC+03:00) Minsk

Russian Standard Time (UTC+03:00) Moscow, St. Petersburg

E. Africa Standard Time (UTC+03:00) Nairobi

Iran Standard Time (UTC+03:30) Tehran

Arabian Standard Time (UTC+04:00) Abu Dhabi, Muscat

Astrakhan Standard Time (UTC+04:00) Astrakhan, Ulyanovsk

Azerbaijan Standard Time (UTC+04:00) Baku

Russia Time Zone 3 (UTC+04:00) Izhevsk, Samara

Mauritius Standard Time (UTC+04:00) Port Louis



Saratov Standard Time (UTC+04:00) Saratov

Georgian Standard Time (UTC+04:00) Tbilisi

Volgograd Standard Time (UTC+04:00) Volgograd

Caucasus Standard Time (UTC+04:00) Yerevan

Afghanistan Standard Time (UTC+04:30) Kabul

West Asia Standard Time (UTC+05:00) Ashgabat, Tashkent

Ekaterinburg Standard Time (UTC+05:00) Ekaterinburg

Pakistan Standard Time (UTC+05:00) Islamabad, Karachi

India Standard Time (UTC+05:30) Chennai, Kolkata, Mumbai, New Delhi

Sri Lanka Standard Time (UTC+05:30) Sri Jayawardenepura

Nepal Standard Time (UTC+05:45) Kathmandu

Central Asia Standard Time (UTC+06:00) Astana

Bangladesh Standard Time (UTC+06:00) Dhaka

Omsk Standard Time (UTC+06:00) Omsk

Myanmar Standard Time (UTC+06:30) Yangon (Rangoon)

SE Asia Standard Time (UTC+07:00) Bangkok, Hanoi, Jakarta

Altai Standard Time (UTC+07:00) Barnaul, Gorno-Altaysk

W. Mongolia Standard Time (UTC+07:00) Hovd

North Asia Standard Time (UTC+07:00) Krasnoyarsk

N. Central Asia Standard Time (UTC+07:00) Novosibirsk

Tomsk Standard Time (UTC+07:00) Tomsk

China Standard Time (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi

North Asia East Standard Time (UTC+08:00) Irkutsk

Singapore Standard Time (UTC+08:00) Kuala Lumpur, Singapore

W. Australia Standard Time (UTC+08:00) Perth



Taipei Standard Time (UTC+08:00) Taipei

Ulaanbaatar Standard Time (UTC+08:00) Ulaanbaatar

Aus Central W. Standard Time (UTC+08:45) Eucla

Transbaikal Standard Time (UTC+09:00) Chita

Tokyo Standard Time (UTC+09:00) Osaka, Sapporo, Tokyo

North Korea Standard Time (UTC+09:00) Pyongyang

Korea Standard Time (UTC+09:00) Seoul

Yakutsk Standard Time (UTC+09:00) Yakutsk

Cen. Australia Standard Time (UTC+09:30) Adelaide

AUS Central Standard Time (UTC+09:30) Darwin

E. Australia Standard Time (UTC+10:00) Brisbane

AUS Eastern Standard Time (UTC+10:00) Canberra, Melbourne, Sydney

West Pacific Standard Time (UTC+10:00) Guam, Port Moresby

Tasmania Standard Time (UTC+10:00) Hobart

Vladivostok Standard Time (UTC+10:00) Vladivostok

Lord Howe Standard Time (UTC+10:30) Lord Howe Island

Bougainville Standard Time (UTC+11:00) Bougainville Island

Russia Time Zone 10 (UTC+11:00) Chokurdakh

Magadan Standard Time (UTC+11:00) Magadan

Norfolk Standard Time (UTC+11:00) Norfolk Island

Sakhalin Standard Time (UTC+11:00) Sakhalin

Central Pacific Standard Time (UTC+11:00) Solomon Is., New Caledonia

Russia Time Zone 11 (UTC+12:00) Anadyr, Petropavlovsk-Kamchatsky

New Zealand Standard Time (UTC+12:00) Auckland, Wellington

UTC+12 (UTC+12:00) Coordinated Universal Time+12



Fiji Standard Time (UTC+12:00) Fiji

Kamchatka Standard Time (UTC+12:00) Petropavlovsk-Kamchatsky - Old

Chatham Islands Standard Time (UTC+12:45) Chatham Islands

UTC+13 (UTC+13:00) Coordinated Universal Time+13

Tonga Standard Time (UTC+13:00) Nuku'alofa

Samoa Standard Time (UTC+13:00) Samoa

Line Islands Standard Time (UTC+14:00) Kiritimati Island

See also
CURRENT_TIMEZONE (Transact-SQL)
CURRENT_TIMEZONE_ID (Transact-SQL)
AT TIME ZONE (Transact-SQL)
sys.time_zone_info (Transact-SQL)
Azure SQL Managed Instance connection types
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article explains how clients connect to Azure SQL Managed Instance depending on the connection type.
Script samples to change connection types are provided below, along with considerations related to changing
the default connectivity settings.

Connection types
Azure SQL Managed Instance supports the following two connection types:
Redirect (recommended): Clients establish connections directly to the node hosting the database. To
enable connectivity using redirect, you must open firewalls and Network Security Groups (NSGs) to allow
access on port 1433 and ports 11000-11999. Because packets go directly to the database, redirect offers lower
latency and higher throughput than proxy. The impact of planned maintenance events on the gateway
component is also minimized with the redirect connection type because connections, once established, have
no dependency on the gateway.
Proxy (default): In this mode, all connections flow through a proxy gateway component. To enable connectivity,
only port 1433 for private networks and port 3342 for public connections need to be opened. Choosing this
mode can result in higher latency and lower throughput, depending on the nature of the workload. Also, planned
maintenance events of the gateway component break all live connections in proxy mode. We highly recommend
the redirect connection policy over the proxy connection policy for the lowest latency, highest throughput,
and minimized impact of planned maintenance.

Redirect connection type


In the redirect connection type, after the TCP session is established to the SQL engine, the client session obtains
the destination virtual IP of the virtual cluster node from the load balancer. Subsequent packets flow directly to
the virtual cluster node, bypassing the gateway. The following diagram illustrates this traffic flow.
IMPORTANT
The redirect connection type currently works only for a private endpoint. Regardless of the connection type setting,
connections coming through the public endpoint would be through a proxy.

Proxy connection type


In the proxy connection type, the TCP session is established using the gateway and all subsequent packets flow
through it. The following diagram illustrates this traffic flow.

Changing Connection Type


Using the Portal: To change the connection type using the Azure portal, open the Virtual Network page,
use the Connection type setting to select the new connection type, and save the changes.
Script to change connection type settings using PowerShell:

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

The following PowerShell script shows how to change the connection type for a managed instance to Redirect .

Install-Module -Name Az
Import-Module Az.Accounts
Import-Module Az.Sql

Connect-AzAccount
# Get your SubscriptionId from the Get-AzSubscription command
Get-AzSubscription
# Use your SubscriptionId in place of {subscription-id} below
Select-AzSubscription -SubscriptionId {subscription-id}
# Replace {rg-name} with the resource group for your managed instance,
# and replace {mi-name} with the name of your managed instance
$mi = Get-AzSqlInstance -ResourceGroupName {rg-name} -Name {mi-name}
$mi = $mi | Set-AzSqlInstance -ProxyOverride "Redirect" -Force
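
To confirm that the change took effect, a minimal check such as the following can be used. It assumes that the instance object returned by Get-AzSqlInstance exposes a ProxyOverride property, which should read Redirect after the update completes.

# Verify the current connection type (expected value: Redirect)
(Get-AzSqlInstance -ResourceGroupName {rg-name} -Name {mi-name}).ProxyOverride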

Next steps
Restore a database to SQL Managed Instance
Learn how to configure a public endpoint on SQL Managed Instance
Learn about SQL Managed Instance connectivity architecture
Create alerts for Azure SQL Managed Instance
using the Azure portal
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article shows you how to set up alerts for databases in Azure SQL Managed Instance using the Azure
portal. When a metric, such as instance storage size or CPU usage, reaches a predefined threshold, an alert can
send you an email, call a webhook, execute an Azure Function or runbook, call an external ITSM-compatible
ticketing system, call you on the phone, or send a text message. This article also provides best practices for
setting alert periods.

Overview
You can receive an alert based on monitoring metrics for, or events on, your Azure services.
Metric values - The alert triggers when the value of a specified metric crosses a threshold you assign in
either direction. That is, it triggers both when the condition is first met and then afterwards when that
condition is no longer being met.
You can configure an alert to do the following when it triggers:
Send email notifications to the service administrator and co-administrators
Send email to additional email addresses that you specify
Call a phone number with a voice prompt
Send a text message to a phone number
Call a webhook
Call an Azure Function
Call an Azure runbook
Call an external ITSM-compatible ticketing system
You can configure and get information about alert rules using the Azure portal, PowerShell, the Azure CLI, or the
Azure Monitor REST API.

Alerting metrics available for managed instance


IMPORTANT
Alerting metrics are available for the managed instance only. Alerting metrics for individual databases in a managed
instance are not available. Database diagnostics telemetry is, on the other hand, available in the form of diagnostics logs.
Alerts on diagnostics logs can be set up from within the SQL Analytics product using log alert scripts for managed instance.

The following managed instance metrics are available for alerting configuration:

METRIC | DESCRIPTION | UNIT OF MEASURE / POSSIBLE VALUES

Average CPU percentage | Average percentage of CPU utilization in the selected time period. | 0-100 (percent)
IO bytes read | IO bytes read in the selected time period. | Bytes
IO bytes written | IO bytes written in the selected time period. | Bytes
IO requests count | Count of IO requests in the selected time period. | Numerical
Storage space reserved | Current max. storage space reserved for the managed instance. Changes with resource scaling operation. | MB (Megabytes)
Storage space used | Storage space used in the selected period. Changes with storage consumption by databases and the instance. | MB (Megabytes)
Virtual core count | vCores provisioned for the managed instance. Changes with resource scaling operation. | 4-80 (vCores)

Create an alert rule on a metric with the Azure portal


1. In Azure portal, locate the managed instance you are interested in monitoring, and select it.
2. Select Metrics menu item in the Monitoring section.

3. On the drop-down menu, select one of the metrics you wish to set up your alert on (Storage space used
is shown in the example).
4. Select aggregation period - average, minimum, or maximum reached in the given time period (Avg, Min,
or Max).
5. Select New alert rule.
6. In the Create alert rule pane, click on the condition name (Storage space used is shown in the example).
7. On the Configure signal logic pane, define Operator, Aggregation type, and Threshold value
Operator type options are greater than, equal and less than (the threshold value)
Aggregation type options are min, max or average (in the aggregation granularity period)
Threshold value is the alert value which will be evaluated based on the operator and aggregation
criteria
In the example shown in the screenshot, a value of 1840876 MB is used, representing a threshold of
about 1.8 TB. Because the operator in the example is set to greater than, the alert is triggered if storage space
consumption on the managed instance goes over 1.8 TB. Note that the threshold value for storage space
metrics must be expressed in MB.
8. Set the evaluation period - the aggregation granularity in minutes and the frequency of evaluation. The
frequency of evaluation determines how often the alerting system checks whether the threshold
condition has been met.
9. Select an action group. The Action group pane appears, where you can select an existing action group or
create a new one. The action defines what happens when an alert is triggered (for example, sending an
email, calling you on the phone, or executing a webhook, Azure Function, or runbook).
To create a new action group, select +Create action group.

Define how you want to be alerted: enter the action group name, short name, and action name, and
select the Action Type. The Action Type defines whether you are notified via email, text message, or voice
call, or whether a webhook, Azure Function, or runbook is executed, or an ITSM ticket is created in
your compatible system.

10. Fill in the alert rule details for your records and select the severity type.
Complete creating the alert rule by clicking the Create alert rule button.
The new alert rule becomes active within a few minutes and is triggered based on your settings. The same rule
can also be created from PowerShell, as sketched below.
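
The following is a minimal PowerShell sketch of the same rule, assuming the Az.Sql and Az.Monitor modules and an existing action group. The metric name storage_space_used_mb, the resource names, and the five-minute window and frequency are illustrative assumptions rather than required values; verify the metric name against the metric definitions for your instance.

# Illustrative resource names; replace with your own.
$mi = Get-AzSqlInstance -ResourceGroupName 'mi-rg' -Name 'my-managed-instance'
$ag = Get-AzActionGroup -ResourceGroupName 'mi-rg' -Name 'my-action-group'

# Metric name for "Storage space used" is assumed to be storage_space_used_mb; the threshold is in MB.
$condition = New-AzMetricAlertRuleV2Criteria -MetricName 'storage_space_used_mb' `
    -TimeAggregation Average -Operator GreaterThan -Threshold 1840876

Add-AzMetricAlertRuleV2 -Name 'mi-storage-space-alert' -ResourceGroupName 'mi-rg' `
    -TargetResourceId $mi.Id -Condition $condition -ActionGroupId $ag.Id `
    -WindowSize '00:05:00' -Frequency '00:05:00' -Severity 3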

Verifying alerts
NOTE
To suppress noisy alerts, see Suppression of alerts using action rules.

Upon setting up an alerting rule, verify that you are satisfied with the alerting trigger and its frequency. For the
example shown on this page for setting up an alert on storage space used, if your alerting option was email, you
might receive an email such as the one shown below.

The email shows the alert name, details of the threshold, and why the alert was triggered, helping you to verify
and troubleshoot your alert. You can use the See in Azure portal button to view the alert received via email in
the Azure portal.

View, suspend, activate, modify and delete existing alert rules


NOTE
Existing alerts need to be managed from the Alerts menu in the Azure portal dashboard. Existing alerts cannot be
modified from the Managed Instance resource blade.

To view, suspend, activate, modify and delete existing alerts:


1. Search for Alerts using Azure portal search. Click on Alerts.

Alternatively, you could also click on Alerts on the Azure navigation bar, if you have it configured.
2. On the Alerts pane, select Manage alert rules.

The list of existing alerts appears. Select an individual alert rule to manage it. Existing active
rules can be modified and tuned to your preference. Active rules can also be suspended without being
deleted. Existing rules can also be reviewed from PowerShell, as sketched below.
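
As a quick check outside the portal, the metric alert rules in a resource group can be listed with PowerShell. This is a minimal sketch assuming the Az.Monitor module; the selected property names may differ slightly between module versions.

# List metric alert rules in the resource group and check their state.
Get-AzMetricAlertRuleV2 -ResourceGroupName 'mi-rg' |
    Select-Object Name, Severity, Enabled, EvaluationFrequency, WindowSize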

Next steps
Learn about Azure Monitor alerting system, see Overview of alerts in Microsoft Azure
Learn more about metric alerts, see Understand how metric alerts work in Azure Monitor
Learn about configuring a webhook in alerts, see Call a webhook with a classic metric alert
Learn about configuring and managing alerts using PowerShell, see Action rules
Learn about configuring and managing alerts using API, see Azure Monitor REST API reference
Configure Advanced Threat Protection in Azure
SQL Managed Instance
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Advanced Threat Protection for an Azure SQL Managed Instance detects anomalous activities indicating unusual
and potentially harmful attempts to access or exploit databases. Advanced Threat Protection can identify
Potential SQL injection, Access from unusual location or data center, Access from unfamiliar
principal or potentially harmful application, and Brute force SQL credentials - see more details in
Advanced Threat Protection alerts.
You can receive notifications about the detected threats via email or the Azure portal.
Advanced Threat Protection is part of the Microsoft Defender for SQL offering, which is a unified package for
advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central
Microsoft Defender for SQL portal.

Azure portal
1. Sign in to the Azure portal.
2. Navigate to the configuration page of the instance of SQL Managed Instance you want to protect. Under
Security , select Defender for SQL .
3. In the Microsoft Defender for SQL configuration page:
Turn ON Microsoft Defender for SQL.
Configure the Send alerts to email address to receive security alerts upon detection of anomalous
database activities.
Select the Azure storage account where anomalous threat audit records are saved.
Select the Advanced Threat Protection types that you would like configured. Learn more about
Advanced Threat Protection alerts.
4. Click Save to save the new or updated Microsoft Defender for SQL policy.
Next steps
Learn more about Advanced Threat Protection.
Learn about managed instances, see What is an Azure SQL Managed Instance.
Learn more about Advanced Threat Protection for Azure SQL Database.
Learn more about SQL Managed Instance auditing.
Learn more about Microsoft Defender for Cloud.
Determine required subnet size and range for Azure
SQL Managed Instance
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance must be deployed within an Azure virtual network. The number of managed
instances that can be deployed in the subnet of a virtual network depends on the size of the subnet (subnet
range).
When you create a managed instance, Azure allocates a number of virtual machines that depend on the tier you
selected during provisioning. Because these virtual machines are associated with your subnet, they require IP
addresses. To ensure high availability during regular operations and service maintenance, Azure might allocate
more virtual machines. The number of required IP addresses in a subnet then becomes larger than the number
of managed instances in that subnet.
By design, a managed instance needs a minimum of 32 IP addresses in a subnet. As a result, you can use a
minimum subnet mask of /27 when defining your subnet IP ranges. We recommend careful planning of subnet
size for your managed instance deployments. Consider the following inputs during planning:
Number of managed instances, including the following instance parameters:
Service tier
Number of vCores
Hardware configuration
Maintenance window
Plans to scale up/down or change the service tier, hardware configuration, or maintenance window

IMPORTANT
A subnet size of 16 IP addresses (subnet mask /28) allows the deployment of a single managed instance inside it. It
should be used only for evaluation or for dev/test scenarios where scaling operations won't be performed.

Determine subnet size


Size your subnet according to your future needs for instance deployment and scaling. The following parameters
can help you in forming a calculation:
Azure uses five IP addresses in the subnet for its own needs.
Each virtual cluster allocates an additional number of addresses.
Each managed instance uses a number of addresses that depend on pricing tier and hardware configuration.
Each scaling request temporarily allocates an additional number of addresses.

IMPORTANT
It's not possible to change the subnet address range if any resource exists in the subnet. Consider using bigger subnets
rather than smaller ones to prevent issues in the future.

GP = General Purpose; BC = Business Critical; VC = virtual cluster


PRICING TIER | AZURE USAGE | VC USAGE | INSTANCE USAGE | TOTAL

GP | 5 | 6 | 3 | 14
BC | 5 | 6 | 5 | 16
In the preceding table:


The Total column displays the total number of addresses that are used by a single-deployed instance to the
subnet.
When you add more instances to the subnet, the number of addresses used by the instance increases. The
total number of addresses then also increases.
Addresses represented in the Azure usage column are shared across multiple virtual clusters.
Addresses represented in the VC usage column are shared across instances placed in that virtual cluster.
Also consider the maintenance window feature when you're determining the subnet size, especially when
multiple instances will be deployed inside the same subnet. Specifying a maintenance window for a managed
instance during its creation or afterward means that it must be placed in a virtual cluster with the corresponding
maintenance window. If there is no such virtual cluster in the subnet, a new one must be created first to
accommodate the instance.
The same scenario as for the maintenance window applies for changing the hardware configuration as a virtual
cluster always uses the same hardware. In case of new instance creation or changing the hardware of the
existing instance, if there is no such virtual cluster in the subnet, a new one must be created first to
accommodate the instance.
An update operation typically requires resizing the virtual cluster. When a new create or update request comes,
the SQL Managed Instance service communicates with the compute platform with a request for new nodes that
need to be added. Based on the compute response, the deployment system either expands the existing virtual
cluster or creates a new one. Even if in most cases the operation will be completed within same virtual cluster, a
new one might be created on the compute side.

Update scenarios
During a scaling operation, instances temporarily require additional IP capacity that depends on pricing tier:

PRICING TIER | SCENARIO | ADDITIONAL ADDRESSES

GP | Scaling vCores | 3
GP | Scaling storage | 0
GP | Switching to BC | 5
BC | Scaling vCores | 5
BC | Scaling storage | 5
BC | Switching to GP | 3

Calculate the number of IP addresses


We recommend the following formula for calculating the total number of IP addresses. This formula takes into
account the potential creation of a new virtual cluster during a later create request or instance update. It also
takes into account the maintenance window and hardware requirements of virtual clusters.
Formula: 5 + (a * 12) + (b * 16) + (c * 16)
a = number of GP instances
b = number of BC instances
c = number of different maintenance window configurations and hardware configurations
Explanation:
5 = number of IP addresses reserved by Azure
12 addresses per GP instance = 6 for virtual cluster, 3 for managed instance, 3 more for scaling operation
16 addresses per BC instance = 6 for virtual cluster, 5 for managed instance, 5 more for scaling operation
16 addresses as a backup = scenario where new virtual cluster is created
Example:
You plan to have three general-purpose and two business-critical managed instances deployed in the
same subnet. All instances will have the same maintenance window configured. That means you need 5 + (3
* 12) + (2 * 16) + (1 * 16) = 89 IP addresses.
Because IP ranges are defined in powers of 2, your subnet requires a minimum IP range of 128 (2^7) for
this deployment. You need to reserve the subnet with a subnet mask of /25.
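
The sizing formula lends itself to a small helper. The following PowerShell sketch simply encodes the formula above and rounds the result up to the next power of two; the function name and defaults are illustrative, not part of any Azure module.

# Encodes: 5 + (GP instances * 12) + (BC instances * 16) + (configurations * 16)
function Get-MiSubnetSize {
    param(
        [int]$GpInstanceCount,
        [int]$BcInstanceCount,
        [int]$ConfigurationCount = 1   # distinct maintenance window / hardware combinations
    )

    $requiredAddresses = 5 + ($GpInstanceCount * 12) + ($BcInstanceCount * 16) + ($ConfigurationCount * 16)

    # Subnet sizes are powers of two; /27 (32 addresses) is the documented minimum.
    $size = 32
    $mask = 27
    while ($size -lt $requiredAddresses) { $size *= 2; $mask-- }

    [pscustomobject]@{
        RequiredAddresses = $requiredAddresses
        SubnetSize        = $size
        SubnetMask        = "/$mask"
    }
}

# Example from this article: 3 GP + 2 BC instances, one configuration -> 89 addresses, /25 subnet
Get-MiSubnetSize -GpInstanceCount 3 -BcInstanceCount 2 -ConfigurationCount 1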

NOTE
Though it's possible to deploy managed instances to a subnet with a number of IP addresses that's less than the output
of the subnet formula, always consider using bigger subnets instead. Using a bigger subnet can help avoid future issues
stemming from a lack of IP addresses, such as the inability to create additional instances within the subnet or scale
existing instances.

Next steps
For an overview, see What is Azure SQL Managed Instance?.
Learn more about connectivity architecture for SQL Managed Instance.
See how to create a virtual network where you'll deploy SQL Managed Instance.
For DNS issues, see Configure a custom DNS.
Create a virtual network for Azure SQL Managed
Instance
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article explains how to create a valid virtual network and subnet where you can deploy Azure SQL Managed
Instance.
Azure SQL Managed Instance must be deployed within an Azure virtual network. This deployment enables the
following scenarios:
Secure private IP address
Connecting to SQL Managed Instance directly from an on-premises network
Connecting SQL Managed Instance to a linked server or another on-premises data store
Connecting SQL Managed Instance to Azure resources

NOTE
You should determine the size of the subnet for SQL Managed Instance before you deploy the first instance. You can't
resize the subnet after you put the resources inside.
If you plan to use an existing virtual network, you need to modify that network configuration to accommodate SQL
Managed Instance. For more information, see Modify an existing virtual network for SQL Managed Instance.
After a managed instance is created, moving the managed instance or virtual network to another resource group or
subscription is not supported.

IMPORTANT
You can move the instance to another subnet inside the VNet.

Create a virtual network


The easiest way to create and configure a virtual network is to use an Azure Resource Manager deployment
template.
1. Sign in to the Azure portal.
2. Select the Deploy to Azure button:

This button opens a form that you can use to configure the network environment where you can deploy
SQL Managed Instance.
NOTE
This Azure Resource Manager template will deploy a virtual network with two subnets. One subnet, called
ManagedInstances , is reserved for SQL Managed Instance and has a preconfigured route table. The other
subnet, called Default , is used for other resources that should access SQL Managed Instance (for example, Azure
Virtual Machines).

3. Configure the network environment. On the following form, you can configure parameters of your
network environment:

You might change the names of the virtual network and subnets, and adjust the IP ranges associated with
your networking resources. After you select the Purchase button, this form will create and configure
your environment. If you don't need two subnets, you can delete the default one.
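
If you prefer scripting over the template, the following PowerShell sketch creates a virtual network with a subnet delegated to SQL Managed Instance. The names and address ranges are illustrative, the Az.Network module is assumed, and the remaining requirements (route table and network security group) still need to be applied, for example with the preparation script shown in the next article.

# Illustrative names and address ranges; adjust to your environment.
$rg       = 'my-resource-group'
$location = 'westeurope'

# Subnet dedicated to managed instances, delegated to the SQL Managed Instance resource provider.
$miSubnet = New-AzVirtualNetworkSubnetConfig -Name 'ManagedInstances' -AddressPrefix '10.0.0.0/25'
$miSubnet = Add-AzDelegation -Name 'miDelegation' -ServiceName 'Microsoft.Sql/managedInstances' -Subnet $miSubnet

# Second subnet for other resources that need to reach the managed instance.
$default = New-AzVirtualNetworkSubnetConfig -Name 'Default' -AddressPrefix '10.0.0.128/25'

New-AzVirtualNetwork -Name 'MyNewVNet' -ResourceGroupName $rg -Location $location `
    -AddressPrefix '10.0.0.0/24' -Subnet $miSubnet, $default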
Next steps
For an overview, see What is SQL Managed Instance?.
Learn about connectivity architecture in SQL Managed Instance.
Learn how to modify an existing virtual network for SQL Managed Instance.
For a tutorial that shows how to create a virtual network, create a managed instance, and restore a database
from a database backup, see Create a managed instance.
For DNS issues, see Configure a custom DNS.
Configure an existing virtual network for Azure SQL
Managed Instance
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance must be deployed within an Azure virtual network, in a subnet dedicated to
managed instances only. You can use an existing virtual network and subnet if they're configured according to
the SQL Managed Instance virtual network requirements.
If one of the following cases applies to you, you can validate and modify your network by using the script
explained in this article:
You have a new subnet that's still not configured.
You're not sure that the subnet is aligned with the requirements.
You want to check that the subnet still complies with the network requirements after you made changes.

NOTE
You can create a managed instance only in virtual networks created through the Azure Resource Manager deployment
model. Azure virtual networks created through the classic deployment model are not supported. Calculate subnet size by
following the guidelines in the Determine the size of subnet for SQL Managed Instance article. You can't resize the subnet
after you deploy the resources inside.
After the managed instance is created, you can move the instance to another subnet inside the VNet, but moving the
instance or VNet to another resource group or subscription is not supported.

Validate and modify an existing virtual network


If you want to create a managed instance inside an existing subnet, we recommend the following PowerShell
script to prepare the subnet:

$scriptUrlBase = 'https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/manage/azure-sql-db-managed-instance/delegate-subnet'

$parameters = @{
    subscriptionId = '<subscriptionId>'
    resourceGroupName = '<resourceGroupName>'
    virtualNetworkName = '<virtualNetworkName>'
    subnetName = '<subnetName>'
}

Invoke-Command -ScriptBlock ([Scriptblock]::Create((iwr ($scriptUrlBase+'/delegateSubnet.ps1?t='+ [DateTime]::Now.Ticks)).Content)) -ArgumentList $parameters

The script prepares the subnet in three steps:


1. Validate: It validates the selected virtual network and subnet for SQL Managed Instance networking
requirements.
2. Confirm: It shows the user a set of changes that need to be made to prepare the subnet for SQL Managed
Instance deployment. It also asks for consent.
3. Prepare: It properly configures the virtual network and subnet.
Next steps
For an overview, see What is SQL Managed Instance?.
For a tutorial that shows how to create a virtual network, create a managed instance, and restore a database
from a database backup, see Create a managed instance.
For DNS issues, see Configuring a custom DNS.
Configure service endpoint policies (Preview) for
Azure SQL Managed Instance
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Virtual Network (VNet) Azure Storage service endpoint policies allow you to filter egress virtual network traffic
to Azure Storage, restricting data transfers to specific storage accounts.
The ability to configure your endpoint policies and associate them with your SQL Managed Instance is currently
in preview.

Key benefits
Configuring Virtual network Azure Storage service endpoint policies for your Azure SQL Managed Instance
provides the following benefits:
Improved security for your Azure SQL Managed Instance traffic to Azure Storage : Endpoint
policies establish a security control that prevents erroneous or malicious exfiltration of business-critical
data. Traffic can be limited to only those storage accounts that are compliant with your data governance
requirements.
Granular control over which storage accounts can be accessed : Service endpoint policies can
permit traffic to storage accounts at a subscription, resource group, and individual storage account level.
Administrators can use service endpoint policies to enforce adherence to the organization's data security
architecture in Azure.
System traffic remains unaffected : Service endpoint policies never obstruct access to storage that is
required for Azure SQL Managed Instance to function. This includes the storage of backups, data files,
transaction log files, and other assets.

IMPORTANT
Service endpoint policies only control traffic that originates from the SQL Managed Instance subnet and terminates in
Azure storage. The policies do not affect, for example, exporting the database to an on-prem BACPAC file, Azure Data
Factory integration, the collection of diagnostic information via Azure Diagnostic Settings, or other mechanisms of data
extraction that do not directly target Azure Storage.

Limitations
Enabling service endpoint policies for your Azure SQL Managed Instance has the following limitations:
While in preview, this feature is available in all Azure regions where SQL Managed Instance is supported
except for China East 2, China North 2, Central US EUAP, East US 2 EUAP, US Gov Arizona, US Gov
Texas, US Gov Virginia, and West Central US.
The feature is available only to virtual networks deployed through the Azure Resource Manager deployment
model.
The feature is available only in subnets that have service endpoints for Azure Storage enabled.
Enabling service endpoints for Azure Storage also extends to include paired regions where you deploy the
virtual network to support Read-Access Geo-Redundant storage (RA-GRS) and Geo-Redundant storage
(GRS) traffic.
Assigning a service endpoint policy to a service endpoint upgrades the endpoint from regional to global
scope. In other words, all traffic to Azure Storage will go through the service endpoint regardless of the
region in which the storage account resides.

Prepare storage inventory


Before you begin configuring service endpoint policies on a subnet, compose a list of storage accounts that the
managed instance should have access to in that subnet.
The following is a list of workflows that may contact Azure Storage:
Auditing to Azure Storage.
Performing a copy-only backup to Azure Storage.
Restoring a database from Azure Storage.
Importing data with BULK INSERT or OPENROWSET(BULK ...).
Logging extended events to an Event File target on Azure Storage.
Azure DMS offline migration to Azure SQL Managed Instance.
Log Replay Service migration to Azure SQL Managed Instance.
Synchronizing tables using transactional replication.
Note the account name, resource group, and subscription for any storage account that participates in these, or
any other, workflows that access storage.

Configure policies
You'll first need to create your service endpoint policy, and then associate the policy with the SQL Managed
Instance subnet. Modify the workflow in this section to suit your business needs.

NOTE
SQL Managed Instance subnets require policies to contain the /Services/Azure/ManagedInstance service alias (See
step 5).
Managed instances deployed to a subnet that already contains service endpoint policies will have those policies
automatically upgraded with the /Services/Azure/ManagedInstance service alias.

Create a service endpoint policy


To create a service endpoint policy, follow these steps:
1. Sign into the Azure portal.
2. Select + Create a resource .
3. In the search pane, enter service endpoint policy, select Service endpoint policy, and then select
Create.
4. Fill in the following values on the Basics page:
Subscription: Select the subscription for your policy from the drop-down.
Resource group: Select the resource group where your managed instance is located, or select Create
new and fill in the name for a new resource group.
Name: Provide a name for your policy, such as mySEP .
Location: Select the region of the virtual network hosting the managed instance.
5. In Policy definitions , select Add an alias and enter the following information on the Add an alias
pane:
Service Alias: Select /Services/Azure/ManagedInstance.
Select Add to finish adding the service alias.

6. In Policy definitions, select + Add under Resources and enter or select the following information in the
Add a resource pane:
Service: Select Microsoft.Storage .
Scope: Select All accounts in subscription .
Subscription: Select a subscription containing the storage account(s) to permit. Refer to your inventory
of Azure storage accounts created earlier.
Select Add to finish adding the resource.
Repeat this step to add any additional subscriptions.
7. Optional: you may configure tags on the service endpoint policy under Tags .
8. Select Review + Create . Validate the information and select Create . To make further edits, select
Previous .

TIP
First, configure policies to allow access to entire subscriptions. Validate the configuration by ensuring that all workflows
operate normally. Then, optionally, reconfigure policies to allow individual storage accounts, or accounts in a resource
group. To do so, select Single account or All accounts in resource group in the Scope: field instead and fill in the
other fields accordingly.

Associate policy with subnet


After your service endpoint policy is created, associate the policy with your SQL Managed Instance subnet.
To associate your policy, follow these steps:
1. In the All services box in the Azure portal, search for virtual networks. Select Vir tual networks .
2. Locate and select the virtual network hosting your managed instance.
3. Select Subnets and choose the subnet dedicated to your managed instance. Enter the following
information in the subnet pane:
Services: Select Microsoft.Storage . If this field is empty, you need to configure the service endpoint
for Azure Storage on this subnet.
Service endpoint policies: Select any service endpoint policies you want to apply to the SQL Managed
Instance subnet.
4. Select Save to finish configuring the virtual network.

WARNING
If the policies on this subnet do not have the /Services/Azure/ManagedInstance alias, you may see the following error:
Failed to save subnet 'subnet'. Error: 'Found conflicts with NetworkIntentPolicy. Details: Service endpoint policies on subnet are missing definitions'
To resolve this, update all the policies on the subnet to include the /Services/Azure/ManagedInstance alias.

Next steps
Learn more on securing your Azure Storage accounts.
Read about SQL Managed Instance's security capabilities.
Explore the connectivity architecture of SQL Managed Instance.
Move Azure SQL Managed Instance across subnets
7/12/2022 • 9 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance must be deployed inside a dedicated subnet within an Azure virtual network. The
number of managed instances that can be deployed within the subnet depends on the size of the subnet (subnet
range).
This article teaches you how to move your managed instance from one subnet to another, an operation similar
to scaling vCores or changing the instance service tier. SQL Managed Instance is available during the move,
except during a short downtime caused by a failover at the end of the update - typically lasting up to 10
seconds - although the failover can interrupt long-running transactions.
Moving the instance to another subnet triggers the following virtual cluster operations:
The destination subnet builds out or resizes the virtual cluster.
The virtual cluster is removed or defragmented in the source subnet.
Before moving your instance to another subnet, consider familiarizing yourself with the following concepts:
Determine required subnet size and range for Azure SQL Managed Instance.
Choose between moving the instance to a new subnet or using an existing subnet.
Use management operations to automatically deploy new managed instances, update instance properties, or
delete instances. It's possible to monitor these management operations.

Requirements and limitations


To deploy a managed instance, or move it to another subnet, the destination subnet must have certain network
requirements.
Subnet readiness
Before you move your managed instance, confirm the subnet is marked as Ready for Managed Instance .
In the Virtual network UI of the Azure portal, virtual networks that meet the prerequisites for a managed
instance are categorized as Ready for Managed Instance . Virtual networks that have subnets with managed
instances already deployed to them display an icon before the virtual network name. Empty subnets that are
ready for a managed instance do not have an icon.
Subnets that are marked as Other are empty and can be used for a managed instance, but first you need to
fulfill the network requirements. This includes:
delegating to the Microsoft.Sql/managedInstances resource provider
attaching a route table
attaching a network security group
After all requirements are satisfied, the subnet moves from the Other to the Ready for Managed Instance
category and can be used for a managed instance.
Subnets marked as Invalid cannot be used for new or existing managed instances, either because they're
already in use (instances used for instance deployments cannot contain other resources), or the subnet has a
different DNS zone (a cross-subnet instance move limitation).
Depending on the subnet state and designation, the following adjustments may be made to the destination
subnet:
Ready for Managed Instance (contains existing SQL Managed Instance) : No adjustments are made.
These subnets already contain managed instances, and making any change to the subnet could impact
existing instances.
Ready for Managed Instance (empty) : The workflow validates all the required rules in the network
security group and route table, and adds any rules that are necessary but missing. 1

NOTE
1 Custom rules added to the source subnet configuration are not copied to the destination subnet. Any customization of

the source subnet configuration must be replicated manually to the destination subnet. One way to achieve this is by
using the same route table and network security group for the source and destination subnet.

Destination subnet limitations


Consider the following limitations when choosing a destination subnet for an existing instance:
SQL Managed Instance can be moved to a subnet that is either:
1. Empty
2. A specially prepared subnet that retains the DNS zone of the SQL Managed Instance being moved. This can
be done by populating an empty subnet with new SQL Managed Instances created with the dnsZonePartner
parameter set. This parameter accepts the ID of an existing SQL Managed Instance (see the documentation);
in this case, use the ID of the instance that will later be moved to the new subnet. (Note that apart from this
approach there is no other way to dictate the DNS zone of a SQL Managed Instance, since it is randomly
generated. There is also, as of now, no way to update the DNS zone of an existing SQL Managed Instance.)
The DNS zone of the destination subnet must match the DNS zone of the source subnet as changing the DNS
zone of a managed instance is not currently supported.
If you want to migrate a SQL Managed Instance with an auto-failover group, the following prerequisites apply:
The target subnet needs to have the same security rules needed for failover group replication as the source
subnet: Open both inbound and outbound ports 5022 and the range 11000~11999 in the Network Security
Group (NSG) for connections from the other managed instance subnet (the one that holds the failover group
replica) to allow replication traffic between the two instances.
The target subnet can't have an overlapping address range with the subnet that holds the secondary instance
replica of the failover group. For example, if MI1 is in subnet S1, the secondary instance in the failover group
is MI2 in subnet S2, and we want to move MI1 to subnet S3, subnet S3 can't have an overlapping address
range with subnet S2.
To learn more about configuring the network for auto-failover groups, review Enable geo-replication between
managed instances.
Migration from Gen4 hardware
Instances running on Gen4 hardware must be upgraded to newer hardware since Gen4 is being retired.
Upgrading hardware and moving to another subnet can be performed in one operation.

IMPORTANT
Gen4 hardware is being retired and is not available for new deployments, as announced on December 18, 2019.
Customers using Gen4 for Azure SQL Databases, elastic pools, or SQL managed instances should migrate to currently
available hardware, such as standard-series (Gen5), before January 31, 2023.
For more information on Gen4 hardware retirement and migration to current hardware, see our Blog post on Gen4
retirement. Existing Gen4 databases, elastic pools, and SQL managed instances will be migrated automatically to
equivalent standard-series (Gen5) hardware.
Downtime caused by automatic migration will be minimal and similar to downtime during scaling operations within
selected service tier. To avoid unplanned interruptions to workloads, migrate proactively at the time of your choice before
January 31, 2023.

Operation steps
The following table details the operation steps that occur during the instance move operation:

STEP NAME | STEP DESCRIPTION

Request validation | Validates the submitted parameters. If a misconfiguration is detected, the operation fails with an error.
Virtual cluster resizing / creation | Depending on the state of the destination subnet, the virtual cluster is either created or resized.
New instance startup | The SQL process starts on the deployed virtual cluster in the destination subnet.
Seeding database files / attaching database files | Depending on the service tier, either the database is seeded or the database files are attached.
Preparing failover and failover | After data has been seeded or database files reattached, the system prepares for failover. When everything is ready, the system performs a failover with a short downtime, usually less than 10 seconds.
Old SQL instance cleanup | Removes the old SQL process from the source virtual cluster.
Virtual cluster deletion | If it's the last instance within the source subnet, the final step deletes the virtual cluster synchronously. Otherwise, the virtual cluster is asynchronously defragmented.
A detailed explanation of the operation steps can be found in the overview of Azure SQL Managed Instance
management operations.

Move the instance


A cross-subnet instance move is part of the instance update operation. Existing instance update API, Azure
PowerShell, and Azure CLI commands have been enhanced with a subnet ID property.
In the Azure portal, use the subnet field on the Networking blade to move the instance to the destination
subnet. When using Azure PowerShell or the Azure CLI, provide a different subnet ID in the update command to
move the instance from an existing subnet to the destination subnet.
For a full reference of instance management commands, see Management API reference for Azure SQL
Managed Instance.

Portal
PowerShell
Azure CLI

The option to choose the instance subnet is located on the Networking blade of the Azure portal. The instance
move operation starts when you select a subnet and save your changes.
The first step of the move operation is to prepare the destination subnet for deployment, which may take several
minutes. Once the subnet is ready, the instance move management operation starts and becomes visible in the
Azure portal.

Monitor instance move operations from the Overview blade of the Azure portal. Select the notification to open
an additional blade containing information about the current step, the total steps, and a button to cancel the
operation.
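
Since the PowerShell and Azure CLI tab contents are not reproduced here, the following is a minimal PowerShell sketch of the same move. It assumes a recent Az.Sql version in which Set-AzSqlInstance accepts a SubnetId parameter, and that the Get-AzSqlInstanceOperation output exposes State and PercentComplete properties; resource names are illustrative.

# The destination subnet must be empty or already prepared for managed instances with a matching DNS zone.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName 'target-vnet-rg' -Name 'target-vnet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'ManagedInstances'

# Starts the cross-subnet move as a regular instance update operation.
Set-AzSqlInstance -ResourceGroupName 'mi-rg' -Name 'my-managed-instance' -SubnetId $subnet.Id -Force

# Check the most recent management operation on the instance.
Get-AzSqlInstanceOperation -ResourceGroupName 'mi-rg' -ManagedInstanceName 'my-managed-instance' |
    Sort-Object StartTime -Descending |
    Select-Object -First 1 Name, State, PercentComplete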
Next steps
To learn how to create your first managed instance, see Quickstart guide.
For a features and comparison list, see common SQL features.
For more information about VNet configuration, see SQL Managed Instance VNet configuration.
For a quickstart that creates a managed instance and restores a database from a backup file, see Create a
managed instance.
For a tutorial about using Azure Database Migration Service for migration, see SQL Managed Instance
migration using Database Migration Service.
Delete a subnet after deleting an Azure SQL
Managed Instance
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article provides guidelines on how to manually delete a subnet after deleting the last Azure SQL Managed
Instance residing in it. You can delete a virtual network subnet only if there are no resources in the subnet.
SQL Managed Instances are deployed into virtual clusters. Each virtual cluster is associated with a subnet and is
automatically deployed together with the creation of the first instance. In the same way, a virtual cluster is
automatically removed together with the deletion of the last instance, leaving the subnet empty and ready for removal.

IMPORTANT
There is no need for any manual action on the virtual cluster in order to release the subnet. Once the last virtual cluster is
deleted, you can delete the subnet.

There are rare circumstances in which a create operation can fail and leave behind an empty virtual cluster.
Additionally, because instance creation can be canceled, it is possible for a virtual cluster to be deployed with
instances inside it in a failed-to-deploy state. In these situations, virtual cluster removal is initiated automatically
and completed in the background.

IMPORTANT
There are no charges for keeping an empty virtual cluster or instances that have failed to create.
Deletion of a virtual cluster is a long-running operation lasting for about 1.5 hours (see SQL Managed Instance
management operations for up-to-date virtual cluster delete time). The virtual cluster will still be visible in the portal
until this process is completed.
Only one delete operation can be run on the virtual cluster. All subsequent customer-initiated delete requests will
result in an error because a delete operation is already in progress.
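
Before removing the subnet, you can confirm from PowerShell that no virtual cluster still references it. This is a sketch assuming the Az.Sql and Az.Network modules, with placeholder variable values.

# Placeholder values; set these to your own resource names.
$rg = '<resource-group>'; $vnetName = '<vnet-name>'; $subnetName = '<subnet-name>'

$vnet   = Get-AzVirtualNetwork -ResourceGroupName $rg -Name $vnetName
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name $subnetName

# Any virtual cluster still referencing the subnet blocks its deletion.
$clusters = Get-AzSqlVirtualCluster | Where-Object { $_.SubnetId -eq $subnet.Id }

if (-not $clusters) {
    # Subnet is released and can be removed.
    Remove-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name $subnetName | Set-AzVirtualNetwork
}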

Delete a virtual cluster from the Azure portal [DEPRECATED]


IMPORTANT
Starting September 1, 2021, all virtual clusters are automatically removed when the last instance in the cluster has been
deleted. Manual removal of the virtual cluster is no longer required.

To delete a virtual cluster by using the Azure portal, search for the virtual cluster resources.

After you locate the virtual cluster you want to delete, select this resource, and select Delete . You're prompted to
confirm the virtual cluster deletion.

Azure portal notifications will show you a confirmation that the request to delete the virtual cluster has been
successfully submitted. The deletion operation itself will last for about 1.5 hours, during which the virtual cluster
will still be visible in portal. Once the process is completed, the virtual cluster will no longer be visible and the
subnet associated with it will be released for reuse.

TIP
If there are no SQL Managed Instances shown in the virtual cluster, and you are unable to delete the virtual cluster,
ensure that you do not have an ongoing instance deployment in progress. This includes started and canceled
deployments that are still in progress. This is because these operations will still use the virtual cluster, locking it from
deletion. Review the Deployments tab of the resource group where the instance was deployed to see any deployments
in progress. In this case, wait for the deployment to complete, then delete the SQL Managed Instance. The virtual cluster
will be synchronously deleted as part of the instance removal.

Delete a virtual cluster by using the API [DEPRECATED]


IMPORTANT
Starting September 1, 2021, all virtual clusters are automatically removed when the last instance in the cluster has been
deleted. Manual removal of the virtual cluster is no longer required.

To delete a virtual cluster through the API, use the URI parameters specified in the virtual clusters delete method.

Next steps
For an overview, see What is Azure SQL Managed Instance?.
Learn about connectivity architecture in SQL Managed Instance.
Learn how to modify an existing virtual network for SQL Managed Instance.
For a tutorial that shows how to create a virtual network, create an Azure SQL Managed Instance, and restore
a database from a database backup, see Create an Azure SQL Managed Instance (portal).
For DNS issues, see Configure a custom DNS.
Configure a custom DNS for Azure SQL Managed
Instance
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Azure SQL Managed Instance must be deployed within an Azure virtual network (VNet). There are a few
scenarios (for example, db mail, linked servers to other SQL Server instances in your cloud or hybrid
environment) that require private host names to be resolved from SQL Managed Instance. In this case, you need
to configure a custom DNS inside Azure.
Because SQL Managed Instance uses the same DNS for its inner workings, configure the custom DNS server so
that it can resolve public domain names.

IMPORTANT
Always use a fully qualified domain name (FQDN) for the mail server, for the SQL Server instance, and for other services,
even if they're within your private DNS zone. For example, use smtp.contoso.com for your mail server because smtp
won't resolve correctly. Creating a linked server or replication that references SQL Server VMs inside the same virtual
network also requires an FQDN and a default DNS suffix. For example, SQLVM.internal.cloudapp.net . For more
information, see Name resolution that uses your own DNS server.

IMPORTANT
Updating virtual network DNS servers won't affect SQL Managed Instance immediately. See how to synchronize virtual
network DNS servers setting on SQL Managed Instance virtual cluster for more details.

Next steps
For an overview, see What is Azure SQL Managed Instance?.
For a tutorial showing you how to create a new managed instance, see Create a managed instance.
For information about configuring a VNet for a managed instance, see VNet configuration for managed
instances.
Synchronize virtual network DNS servers setting on
SQL Managed Instance virtual cluster
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article explains when and how to synchronize virtual network DNS servers setting on SQL Managed
Instance virtual cluster.

When to synchronize the DNS setting


There are a few scenarios (for example, db mail, linked servers to other SQL Server instances in your cloud or
hybrid environment) that require private host names to be resolved from SQL Managed Instance. In this case,
you need to configure a custom DNS inside Azure. See Configure a custom DNS for Azure SQL Managed
Instance for details.
If this change is implemented after the virtual cluster hosting the managed instance is created, you'll need to
synchronize the DNS servers setting on the virtual cluster with the virtual network configuration.

IMPORTANT
Synchronizing the DNS servers setting will affect all of the managed instances hosted in the virtual cluster.

How to synchronize the DNS setting


Azure RBAC permissions required
The user synchronizing the DNS server configuration will need to have one of the following Azure roles:
Subscription contributor role, or
Custom role with the following permission:
Microsoft.Sql/virtualClusters/updateManagedInstanceDnsServers/action

Use Azure PowerShell


Get the virtual network where the DNS servers setting has been updated.

$ResourceGroup = 'enter resource group of virtual network'
$VirtualNetworkName = 'enter virtual network name'
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $ResourceGroup -Name $VirtualNetworkName

Use PowerShell command Invoke-AzResourceAction to synchronize DNS servers configuration for all the virtual
clusters in the subnet.

Get-AzSqlVirtualCluster `
| where SubnetId -match $virtualNetwork.Id `
| select Id `
| Invoke-AzResourceAction -Action updateManagedInstanceDnsServers -Force

Use the Azure CLI


Get virtual network where DNS servers setting has been updated.
resourceGroup="auto-failover-group"
virtualNetworkName="vnet-fog-eastus"
virtualNetwork=$(az network vnet show -g $resourceGroup -n $virtualNetworkName --query "id" -otsv)

Use Azure CLI command az resource invoke-action to synchronize DNS servers configuration for all the virtual
clusters in the subnet.

az sql virtual-cluster list --query "[? contains(subnetId,'$virtualNetwork')].id" -o tsv \
| az resource invoke-action --action updateManagedInstanceDnsServers --ids @-

Next steps
Learn more about configuring a custom DNS Configure a custom DNS for Azure SQL Managed Instance.
For an overview, see What is Azure SQL Managed Instance?.
Determine the management endpoint IP address -
Azure SQL Managed Instance
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


The Azure SQL Managed Instance virtual cluster contains a management endpoint that Azure uses for
management operations. The management endpoint is protected with a built-in firewall on the network level
and mutual certificate verification on the application level. You can determine the IP address of the management
endpoint, but you can't access this endpoint.
To determine the management IP address, do a DNS lookup on your SQL Managed Instance FQDN:
mi-name.zone_id.database.windows.net . This will return a DNS entry that's like
trx.region-a.worker.vnet.database.windows.net . You can then do a DNS lookup on this FQDN with ".vnet"
removed. This will return the management IP address.
This PowerShell code will do it all for you if you replace <MI FQDN> with the DNS entry of SQL Managed
Instance: mi-name.zone_id.database.windows.net :

$MIFQDN = "<MI FQDN>"
Resolve-DnsName $MIFQDN | Select-Object -First 1 | ForEach-Object { Resolve-DnsName $_.NameHost.Replace(".vnet","") }

For more information about SQL Managed Instance and connectivity, see Azure SQL Managed Instance
connectivity architecture.
Verify the Azure SQL Managed Instance built-in
firewall
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


The Azure SQL Managed Instance mandatory inbound security rules require management ports 9000, 9003,
1438, 1440, and 1452 to be open from Any source on the Network Security Group (NSG) that protects SQL
Managed Instance. Although these ports are open at the NSG level, they are protected at the network level by
the built-in firewall.

Verify firewall
To verify these ports, use any security scanner tool to probe them. The following screenshot shows how to
use one of these tools; a scripted alternative is sketched after it.
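
As an alternative to a dedicated scanner, a quick connectivity probe can be scripted with Test-NetConnection. The management endpoint IP placeholder below is illustrative (see the previous article for how to determine it), and the TCP tests are expected to fail because the built-in firewall drops the traffic.

# Management ports that are open at the NSG level but protected by the built-in firewall.
$managementPorts = 9000, 9003, 1438, 1440, 1452

foreach ($port in $managementPorts) {
    Test-NetConnection -ComputerName '<management-endpoint-IP>' -Port $port -WarningAction SilentlyContinue |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}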

Next steps
For more information about SQL Managed Instance and connectivity, see Azure SQL Managed Instance
connectivity architecture.
Migrate databases from SQL Server to SQL
Managed Instance by using Log Replay Service
(Preview)
7/12/2022 • 23 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article explains how to manually configure database migration from SQL Server 2008-2019 to Azure SQL
Managed Instance by using Log Replay Service (LRS), currently in public preview. LRS is a free-of-charge cloud
service for Azure SQL Managed Instance that's based on SQL Server log-shipping technology.
Azure Database Migration Service and LRS use the same underlying migration technology and APIs. LRS further
enables complex custom migrations and hybrid architectures between on-premises SQL Server and SQL
Managed Instance.

When to use Log Replay Service


When you can't use Azure Database Migration Service for migration, you can use LRS directly with PowerShell,
Azure CLI cmdlets, or APIs to manually build and orchestrate database migrations to SQL Managed Instance.
Consider using LRS in the following cases:
You need more control for your database migration project.
There's little tolerance for downtime during migration cutover.
The Database Migration Service executable file can't be installed to your environment.
The Database Migration Service executable file doesn't have file access to your database backups.
No access to the host OS is available, or there are no administrator privileges.
You can't open network ports from your environment to Azure.
Network throttling, or proxy blocking issues exist in your environment.
Backups are stored directly to Azure Blob Storage through the TO URL option.
You need to use differential backups.

NOTE
We recommend automating the migration of databases from SQL Server to SQL Managed Instance by using Database
Migration Service. Consider using LRS to orchestrate migrations when Database Migration Service doesn't fully
support your scenarios.
LRS is the only method to restore differential backups on managed instance. It isn't possible to manually restore
differential backups on managed instance, nor to manually set the NORECOVERY mode using T-SQL.

How it works
Building a custom solution to migrate databases to the cloud with LRS requires several orchestration steps, as
shown in the diagram and a table later in this section.
Migration consists of making database backups on SQL Server with CHECKSUM enabled, and copying backup
files to Azure Blob Storage. Full, log, and differential backups are supported. LRS cloud service is used to restore
backup files from Azure Blob Storage to SQL Managed Instance. Blob Storage serves as an intermediary storage
between SQL Server and SQL Managed Instance.
LRS monitors Blob Storage for any new differential or log backups added after the full backup has been
restored. LRS then automatically restores these new files. You can use the service to monitor the progress of
backup files being restored to SQL Managed Instance, and stop the process if necessary.
LRS doesn't require a specific naming convention for backup files. It scans all files placed on Azure Blob Storage
and constructs the backup chain from reading the file headers only. Databases are in a restoring state during
the migration process. Databases are restored in NORECOVERY mode, so they can't be used for read or write
workloads until the migration process completes.
Autocomplete versus Continuous mode migration
You can start LRS in either autocomplete or continuous mode.
Use autocomplete mode when you have the entire backup chain generated in advance and don't plan to add any
more files once the migration has started. This migration mode is recommended for passive workloads that don't
require data catch-up. Upload all backup files to Azure Blob Storage, and start the autocomplete mode migration.
The migration completes automatically when the last of the specified backup files has been restored. The
migrated database then becomes available for read and write access on SQL Managed Instance.
If you plan to keep adding new backup files while the migration is in progress, use continuous mode. This
mode is recommended for active workloads requiring data catch-up. Upload the currently available backup
chain to Azure Blob Storage, start the migration in continuous mode, and keep adding new backup files from
your workload as needed. The system periodically scans the Azure Blob Storage folder and restores any new log
or differential backup files it finds. When you're ready to cut over, stop the workload on your SQL Server, then
generate and upload the last backup file. Ensure that the last backup file has been restored by verifying that the
final log-tail backup is shown as restored on SQL Managed Instance. Then, initiate the manual cutover. The final
cutover step brings the database online and makes it available for read and write access on SQL Managed Instance.
After LRS is stopped, either automatically through autocomplete, or manually through cutover, you can't resume
the restore process for a database that was brought online on SQL Managed Instance. For example, once
migration completes, you're no longer able to restore more differential backups for an online database. To
restore more backup files after migration completes, you need to delete the database from the managed
instance and restart the migration from the beginning.
Migration workflow
A typical migration workflow is outlined in the table below.
Use autocomplete mode only when all backup chain files are available in advance. This mode is
recommended for passive workloads for which no data catch-up is required.
Use continuous mode when you don't have the entire backup chain in advance and plan to add new backup
files once the migration is in progress. This mode is recommended for active workloads for which data catch-up
is required.
1. Copy database backups from SQL Server to Blob Storage. Copy full, differential, and log backups from SQL
Server to a Blob Storage container by using AzCopy or Azure Storage Explorer. Use any file names; LRS doesn't
require a specific file-naming convention. Use a separate folder for each database when migrating several
databases.

2. Start LRS in the cloud. You can start the service with PowerShell (Start-AzSqlInstanceDatabaseLogReplay) or
the Azure CLI (az sql midb log-replay start). Choose between autocomplete and continuous migration modes.
Start LRS separately for each database, pointing to that database's backup folder on Blob Storage. After the
service starts, it takes backups from the Blob Storage container and starts restoring them to SQL Managed
Instance. When started in autocomplete mode, LRS restores all backups until the specified last backup file. All
backup files must be uploaded in advance, and it isn't possible to add any new backup files while the migration
is in progress. This mode is recommended for passive workloads for which no data catch-up is required. When
started in continuous mode, LRS restores all the backups initially uploaded and then watches for any new files
uploaded to the folder. The service will continuously apply logs based on the log sequence number (LSN) chain
until it's stopped manually. This mode is recommended for active workloads for which data catch-up is required.

2.1. Monitor the operation's progress. You can monitor progress of the restore operation with PowerShell
(Get-AzSqlInstanceDatabaseLogReplay) or the Azure CLI (az sql midb log-replay show).

2.2. Stop the operation if required (optional). If you need to stop the migration process, use PowerShell
(Stop-AzSqlInstanceDatabaseLogReplay) or the Azure CLI (az sql midb log-replay stop). Stopping the operation
deletes the database that you're restoring to SQL Managed Instance. After you stop an operation, you can't
resume LRS for a database; you need to restart the migration process from the beginning.

3. Cut over to the cloud when you're ready. If LRS was started in autocomplete mode, the migration completes
automatically once the specified last backup file has been restored. If LRS was started in continuous mode, stop
the application and workload. Take the last log-tail backup and upload it to Azure Blob Storage. Ensure that the
last log-tail backup has been restored on the managed instance. Complete the cutover by initiating an LRS
complete operation with PowerShell (Complete-AzSqlInstanceDatabaseLogReplay) or the Azure CLI
(az sql midb log-replay complete). This operation stops LRS and brings the database online for read and write
workloads on SQL Managed Instance. Finally, repoint the application connection string from SQL Server to SQL
Managed Instance. You'll need to orchestrate this step yourself, either through a manual connection string
change in your application or automatically (for example, if your application can read the connection string from
a property or a database).

Migrating multiple databases


LRS supports migration of multiple databases simultaneously. Backup files for each database must be stored on
Blob Storage in a separate folder, with a flat-file structure. If you're migrating several databases, you need to:
Place backup files for each database in a separate folder on Azure Blob Storage in a flat-file structure. For
example, use separate database folders: blobcontainer/database1/files , blobcontainer/database2/files , etc.
Don't use nested folders inside database folders, because this structure isn't supported. For example, don't use
subfolders such as blobcontainer/database1/subfolder/files .
Start LRS separately for each database.
Specify different URI paths to separate database folders on Azure Blob Storage.

Getting started
Consider the requirements in this section to get started with using LRS to migrate.
SQL Server
Make sure you have the following requirements for SQL Server:
SQL Server versions from 2008 to 2022
Full backup of databases (one or multiple files)
Differential backup (one or multiple files)
Log backup (not split for a transaction log file)
CHECKSUM enabled for backups (mandatory)

Azure
Make sure you have the following requirements for Azure:
PowerShell Az.SQL module version 2.16.0 or later (installed or accessed through Azure Cloud Shell)
Azure CLI version 2.19.0 or later (installed)
Azure Blob Storage container provisioned
Shared access signature (SAS) security token with read and list permissions generated for the Blob Storage
container
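
To confirm that your tooling meets the versions above, a quick check like the following can be run locally. It assumes the Az.Sql module was installed through PowerShellGet and that the az CLI is on your PATH.

# Check the installed Az.Sql PowerShell module version (requires PowerShellGet).
Get-InstalledModule -Name Az.Sql | Select-Object Name, Version

# Check the installed Azure CLI version.
az version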
Azure RBAC permissions
Running LRS through the provided clients requires one of the following Azure roles:
Subscription Owner role
SQL Managed Instance Contributor role
Custom role with the following permission: Microsoft.Sql/managedInstances/databases/*
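
If you go with the custom-role option, the following Az PowerShell sketch shows one way to create such a role. The role name, description, and subscription scope are placeholders, not values required by LRS.

# Sketch: create a custom role that carries the permission LRS needs.
# Use an existing role definition as a template.
$role = Get-AzRoleDefinition -Name "Reader"
$role.Id = $null
$role.Name = "LRS Migration Operator"                     # placeholder name
$role.Description = "Can run Log Replay Service migrations."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Sql/managedInstances/databases/*")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")   # placeholder scope
New-AzRoleDefinition -Role $role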

Requirements
Ensure the following requirements are met:
Use the full recovery model on SQL Server (mandatory).
Use CHECKSUM for backups on SQL Server (mandatory).
Place backup files for an individual database inside a separate folder in a flat-file structure (mandatory).
Nested folders inside database folders aren't supported.
Plan to complete the migration within 36 hours after you start LRS (mandatory). This time window is a grace
period during which system-managed software patches are postponed.

Best practices
We recommend the following best practices:
Run Data Migration Assistant to validate that your databases are ready to be migrated to SQL Managed
Instance.
Split full and differential backups into multiple files, instead of using a single file.
Enable backup compression to help the network transfer speeds.
Use Cloud Shell to run PowerShell or CLI scripts, because it will always be updated to the latest cmdlets
released.

IMPORTANT
You can't use databases being restored through LRS until the migration process completes.
LRS doesn't support read-only access to databases during the migration.
After the migration completes, the migration process is finalized and can't be resumed with additional differential
backups.

Steps to migrate
To migrate using LRS, follow the steps in this section.
Make database backups on SQL Server
You can make database backups on SQL Server by using either of the following options:
Back up to the local disk storage, and then upload files to Azure Blob Storage, if your environment restricts
direct backups to Blob Storage.
Back up directly to Blob Storage with the TO URL option in Transact-SQL (T-SQL), if your environment and
security procedures allow it.
Set databases that you want to migrate to the full recovery model to allow log backups.

-- To permit log backups, before the full database backup, modify the database to use the full recovery model
USE master
ALTER DATABASE SampleDB
SET RECOVERY FULL
GO

To manually make full, differential, and log backups of your database to local storage, use the following sample
T-SQL scripts. Ensure the CHECKSUM option is enabled, as it's mandatory for LRS.
The following example takes a full database backup to the local disk:

-- Take full database backup to local disk


BACKUP DATABASE [SampleDB]
TO DISK='C:\BACKUP\SampleDB_full.bak'
WITH INIT, COMPRESSION, CHECKSUM
GO

The following example takes a differential backup to the local disk:

-- Take differential database backup to local disk


BACKUP DATABASE [SampleDB]
TO DISK='C:\BACKUP\SampleDB_diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM
GO

The following example takes a transaction log backup to the local disk:

-- Take transactional log backup to local disk


BACKUP LOG [SampleDB]
TO DISK='C:\BACKUP\SampleDB_log.trn'
WITH COMPRESSION, CHECKSUM
GO

Create a storage account


Azure Blob Storage is used as intermediary storage for backup files between SQL Server and SQL Managed
Instance. To create a new storage account and a blob container inside the storage account, follow these steps:
1. Create a storage account.
2. Create a blob container inside the storage account.
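
If you prefer to script these two steps instead of using the portal, the following Az PowerShell sketch creates both resources. The resource group, account name, region, and container name are placeholders.

# Sketch: create a storage account and a blob container for the backup files.
$resourceGroup = "<resource-group>"          # placeholder
$storageAccount = "<storageaccountname>"     # placeholder; must be globally unique
$location = "<azure-region>"                 # placeholder, for example westeurope

New-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount `
    -Location $location -SkuName Standard_LRS -Kind StorageV2

# Create the blob container inside the new storage account.
$context = (Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount).Context
New-AzStorageContainer -Name "<containername>" -Context $context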
Copy backups from SQL Server to Blob Storage
When migrating databases to a managed instance by using LRS, you can use the following approaches to
upload backups to Blob Storage:
SQL Server native BACKUP TO URL
AzCopy or Azure Storage Explorer to upload backups to a blob container (see the AzCopy sketch after the note below)
Storage Explorer in the Azure portal
NOTE
To migrate multiple databases using the same Azure Blob Storage container, place all backup files of an individual database
into a separate folder inside the container. Use flat-file structure for each database folder, as nested folders aren't
supported.
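
As an example of the AzCopy option, the following sketch (run from PowerShell) uploads the backup files for one database into its own folder in the container. The local path, storage account, container, folder, and SAS token are placeholders, and azcopy is assumed to be on the PATH.

# Sketch: upload all backup files for one database into its own folder with AzCopy.
$localBackupPath = "C:\BACKUP\SampleDB\*"     # placeholder local folder
$destination = "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>?<SAS-token>"
azcopy copy $localBackupPath $destination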

Make backups from SQL Server directly to Blob Storage


If your corporate and network policies allow it, take backups from SQL Server directly to Blob Storage by using
the SQL Server native BACKUP TO URL option. If you can use this option, you don't need to take backups to local
storage and upload them to Blob Storage.
As the first step, this operation requires you to generate an SAS authentication token for Blob Storage and then
import the token to SQL Server. The second step is to make backups with the TO URL option in T-SQL. Ensure
that all backups are made with the CHECKSUM option enabled.
For reference, the following sample code makes backups to Blob Storage. This example doesn't include
instructions on how to import the SAS token. You can find detailed instructions, including how to generate and
import the SAS token to SQL Server, in the tutorial Use Azure Blob Storage with SQL Server.
The following example takes a full database backup to a URL:

-- Take a full database backup to a URL


BACKUP DATABASE [SampleDB]
TO URL =
'https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>/SampleDB_full.bak'
WITH INIT, COMPRESSION, CHECKSUM
GO

The following example takes a differential database backup to a URL:

-- Take a differential database backup to a URL


BACKUP DATABASE [SampleDB]
TO URL =
'https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>/SampleDB_diff.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM
GO

The following example takes a transaction log backup to a URL:

-- Take a transactional log backup to a URL


BACKUP LOG [SampleDB]
TO URL =
'https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>/SampleDB_log.trn'
WITH COMPRESSION, CHECKSUM

Migration of multiple databases


If migrating multiple databases using the same Azure Blob Storage container, you must place backup files for
different databases in separate folders inside the container. All backup files for a single database must be placed
in a flat-file structure inside a database folder, and the folders can't be nested, as it's not supported.
Below is an example of the folder structure inside an Azure Blob Storage container that's required to migrate
multiple databases using LRS.
-- Place all backup files for database 1 in a separate "database1" folder in a flat-file structure.
-- Don't use nested folders inside the database1 folder.
https://<mystorageaccountname>.blob.core.windows.net/<containername>/<database1>/<all-database1-backup-files>

-- Place all backup files for database 2 in a separate "database2" folder in a flat-file structure.
-- Don't use nested folders inside the database2 folder.
https://<mystorageaccountname>.blob.core.windows.net/<containername>/<database2>/<all-database2-backup-files>

-- Place all backup files for database 3 in a separate "database3" folder in a flat-file structure.
-- Don't use nested folders inside the database3 folder.
https://<mystorageaccountname>.blob.core.windows.net/<containername>/<database3>/<all-database3-backup-files>

Generate a Blob Storage SAS authentication token for LRS


Azure Blob Storage is used as intermediary storage for backup files between SQL Server and SQL Managed
Instance. Generate an SAS authentication token for LRS with only list and read permissions. The token enables
LRS to access Blob Storage and use the backup files to restore them to SQL Managed Instance.
Follow these steps to generate the token:
1. Open Storage Explorer from the Azure portal.
2. Expand Blob Containers .
3. Right-click the blob container and select Get Shared Access Signature .

4. Select the time frame for token expiration. Ensure the token is valid during your migration.
5. Select the time zone for the token: UTC or your local time.

IMPORTANT
The time zone of the token and your managed instance might mismatch. Ensure that the SAS token has the
appropriate time validity, taking time zones into consideration. To account for time zone differences, set the
validity time frame FROM well before your migration window starts, and the TO time frame well after you expect
your migration to complete.

6. Select Read and List permissions only.

IMPORTANT
Don't select any other permissions. If you do, LRS won't start. This security requirement is by-design.

7. Select Create .
The SAS authentication token is generated with the time validity that you specified. You need the URI version of
the token.

NOTE
Using SAS tokens created with permissions set through defining a stored access policy isn't supported at this time. Follow
the instructions in this article to manually specify Read and List permissions for the SAS token.
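
If you'd rather script the token than use Storage Explorer, a sketch like the following generates a read-and-list-only container SAS with Az PowerShell. The resource group, storage account, container name, and expiry window are placeholders; verify that the resulting token satisfies the permission requirements described above.

# Sketch: generate a read + list SAS token for the backup container.
$context = (Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storageaccountname>").Context

# Permission "rl" = read + list only, as required by LRS.
$sasToken = New-AzStorageContainerSASToken -Name "<containername>" -Permission rl `
    -StartTime (Get-Date).AddHours(-1) -ExpiryTime (Get-Date).AddDays(14) -Context $context
$sasToken

Depending on the Az.Storage version, the returned token may or may not include a leading question mark; strip it before using the token as the StorageContainerSasToken parameter.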

Copy parameters from the SAS token


Before you use the SAS token to start LRS, you need to understand its structure. The URI of the generated SAS
token consists of two parts separated by a question mark ( ? ).
The first part, starting with https:// until the question mark ( ? ), is used for the StorageContainerURI
parameter that's fed as the input to LRS. It gives LRS information about the folder where the database backup
files are stored.
The second part, starting after the question mark ( ? ) and going all the way until the end of the string, is the
StorageContainerSasToken parameter. This part is the actual signed authentication token, which is valid during
the specified time. This part doesn't necessarily need to start with sp= as shown in the example. Your case may
differ.
Copy the parameters as follows:
1. Copy the first part of the token, starting from https:// all the way until the question mark ( ? ). Use it as
the StorageContainerUri parameter in PowerShell or the Azure CLI when starting LRS.
2. Copy the second part of the token, starting after the question mark ( ? ) all the way until the end of the
string. Use it as the StorageContainerSasToken parameter in PowerShell or the Azure CLI when starting
LRS.

NOTE
Don't include the question mark ( ? ) when you copy either part of the token.
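
A small sketch like the following splits a full SAS URI into the two values. The sample URI is a placeholder; neither resulting part keeps the question mark.

# Sketch: split a SAS URI into the StorageContainerUri and StorageContainerSasToken values.
$sasUri = "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>?<SAS-token>"

# Split on the first question mark only.
$storageContainerUri, $storageContainerSasToken = $sasUri -split '\?', 2
$storageContainerUri
$storageContainerSasToken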

Log in to Azure and select a subscription


Use the following PowerShell cmdlet to log in to Azure:

Login-AzAccount

Select the appropriate subscription where your managed instance resides by using the following PowerShell
cmdlet:

Select-AzSubscription -SubscriptionId <subscription ID>

Start the migration


You start the migration by starting LRS. You can start the service in either autocomplete or continuous mode.
When you use autocomplete mode, the migration completes automatically when the last of the specified backup
files has been restored. This option requires the entire backup chain to be available in advance and uploaded
to Azure Blob Storage, and it doesn't allow adding new backup files while the migration is in progress. It also
requires the start command to specify the file name of the last backup file. This mode is recommended for
passive workloads for which data catch-up isn't required.
When you use continuous mode, the service continuously scans the Azure Blob Storage folder and restores any
new backup files that are added while the migration is in progress. The migration completes only after the
manual cutover has been requested. Use continuous mode when you don't have the entire backup chain in
advance and plan to add new backup files once the migration is in progress. This mode is recommended for
active workloads for which data catch-up is required.

NOTE
When migrating multiple databases, LRS must be started separately for each database pointing to the full URI path of
Azure Blob storage container and the individual database folder.

IMPORTANT
After you start LRS, any system-managed software patches are halted for 36 hours. After this window, the next
automated software patch will automatically stop LRS. If that happens, you can't resume migration and need to restart it
from the beginning.

Start LRS in autocomplete mode


Ensure that the entire backup chain has been uploaded to Azure Blob Storage. This option doesn't allow new
backup files to be added once the migration is in progress.
To start LRS in autocomplete mode, use PowerShell or Azure CLI commands. Specify the last backup file name
by using the -LastBackupName parameter. After restore of the last specified backup file has completed, the
service automatically initiates a cutover.
The following PowerShell example starts LRS in autocomplete mode:

Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
    -InstanceName "ManagedInstance01" `
    -Name "ManagedDatabaseName" `
    -Collation "SQL_Latin1_General_CP1_CI_AS" `
    -StorageContainerUri "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>" `
    -StorageContainerSasToken "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D" `
    -AutoCompleteRestore `
    -LastBackupName "last_backup.bak"

The following Azure CLI example starts LRS in autocomplete mode:

az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb -a --last-bn "backup.bak" \
    --storage-uri "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>" \
    --storage-sas "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"

IMPORTANT
Ensure that the entire backup chain has been uploaded to Azure Blob Storage prior to starting the migration in
autocomplete mode. This mode doesn't allow new backup files to be added once the migration is in progress.

Start LRS in continuous mode


Ensure that you've uploaded your initial backup chain to Azure Blob Storage.
The following PowerShell example starts LRS in continuous mode:
Start-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
    -InstanceName "ManagedInstance01" `
    -Name "ManagedDatabaseName" `
    -Collation "SQL_Latin1_General_CP1_CI_AS" `
    -StorageContainerUri "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>" `
    -StorageContainerSasToken "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"

The following Azure CLI example starts LRS in continuous mode:

az sql midb log-replay start -g mygroup --mi myinstance -n mymanageddb \
    --storage-uri "https://<mystorageaccountname>.blob.core.windows.net/<containername>/<databasefolder>" \
    --storage-sas "sv=2019-02-02&ss=b&srt=sco&sp=rl&se=2023-12-02T00:09:14Z&st=2019-11-25T16:09:14Z&spr=https&sig=92kAe4QYmXaht%2Fgjocqwerqwer41s%3D"

IMPORTANT
Once LRS has been started in continuous mode, you can add new log and differential backups to Azure Blob
Storage until the manual cutover. Once the manual cutover has been initiated, no additional differential files can
be added or restored.

Scripting the migration job


PowerShell and CLI clients that start LRS in continuous mode are synchronous. In this mode, PowerShell and the
CLI wait for the API response to report success or failure in starting the job.
During this wait, the command won't return control to the command prompt. If you're scripting the migration
experience and need the LRS start command to return control immediately so that you can continue with the
rest of the script, you can run PowerShell as a background job with the -AsJob switch. For example:

$lrsjob = Start-AzSqlInstanceDatabaseLogReplay <required parameters> -AsJob

When you start a background job, a job object returns immediately, even if the job takes an extended time to
complete. You can continue to work in the session without interruption while the job runs. For details on running
PowerShell as a background job, see the PowerShell Start-Job documentation.
Similarly, to start an Azure CLI command on Linux as a background process, use the ampersand ( & ) at the end
of the LRS start command:

az sql midb log-replay start <required parameters> &

Monitor migration progress


To monitor migration progress through PowerShell, use the following command:

Get-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
    -InstanceName "ManagedInstance01" `
    -Name "ManagedDatabaseName"

To monitor migration progress through the Azure CLI, use the following command:

az sql midb log-replay show -g mygroup --mi myinstance -n mymanageddb


Stop the migration (optional)
If you need to stop the migration, use PowerShell or the Azure CLI. Stopping the migration deletes the restoring
database on SQL Managed Instance, so resuming the migration won't be possible.
To stop the migration process through PowerShell, use the following command:

Stop-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
    -InstanceName "ManagedInstance01" `
    -Name "ManagedDatabaseName"

To stop the migration process through the Azure CLI, use the following command:

az sql midb log-replay stop -g mygroup --mi myinstance -n mymanageddb

Complete the migration (continuous mode)


If you started LRS in continuous mode, ensure that your application and SQL Server workload have been
stopped to prevent any new backup files from being generated. Ensure that the last backup from SQL Server
has been uploaded to Azure Blob Storage. Monitor the restore progress on the managed instance, ensuring that
the last log-tail backup has been restored.
Once the last log-tail backup has been restored on the managed instance, initiate the manual cutover to complete
the migration. After the cutover has completed, the database will become available for read and write access on
the managed instance.
To complete the migration process in LRS continuous mode through PowerShell, use the following command:

Complete-AzSqlInstanceDatabaseLogReplay -ResourceGroupName "ResourceGroup01" `
    -InstanceName "ManagedInstance01" `
    -Name "ManagedDatabaseName" `
    -LastBackupName "last_backup.bak"

To complete the migration process in LRS continuous mode through the Azure CLI, use the following command:

az sql midb log-replay complete -g mygroup --mi myinstance -n mymanageddb --last-backup-name "backup.bak"

Limitations
Consider the following limitations of LRS:
During the migration process, databases being migrated can't be used for read-only access on SQL Managed
Instance.
System-managed software patches are blocked for 36 hours once the LRS has been started. After this time
window expires, the next software maintenance update stops LRS. You'll need to restart the LRS migration
from the beginning.
LRS requires databases on SQL Server to be backed up with the CHECKSUM option enabled.
The SAS token that LRS uses must be generated for the entire Azure Blob Storage container, and it must have
Read and List permissions only. For example, if you grant Read, List, and Write permissions, LRS won't be
able to start because of the extra Write permission.
Using SAS tokens created with permissions set through defining a stored access policy isn't supported.
Follow the instructions in this article to manually specify Read and List permissions for the SAS token.
Backup files containing % and $ characters in the file name can't be consumed by LRS. Consider renaming
such file names.
Backup files for different databases must be placed in separate folders on Blob Storage in a flat-file structure.
Nested folders inside individual database folders aren't supported.
If using autocomplete mode, the entire backup chain needs to be available in advance on Azure Blob Storage.
It isn't possible to add new backup files in autocomplete mode. Use continuous mode if you need to add new
backup files while migration is in progress.
LRS must be started separately for each database pointing to the full URI path containing an individual
database folder.
LRS can support up to 100 simultaneous restore processes per single managed instance.

NOTE
If you require the database to be accessible for read-only workloads during the migration, need a faster migration
with minimal downtime, or need a migration window longer than 36 hours, consider the link feature for Managed
Instance as the recommended migration solution in these cases.

Troubleshooting
After you start LRS, use the monitoring cmdlet (PowerShell: get-azsqlinstancedatabaselogreplay or Azure CLI:
az_sql_midb_log_replay_show ) to see the status of the operation. If LRS fails to start after some time and you get
an error, check for the most common issues:
Does an existing database on SQL Managed Instance have the same name as the one you're trying to
migrate from SQL Server? Resolve this conflict by renaming one of the databases.
Was the database backup on SQL Server made via the CHECKSUM option?
Are the permissions granted for the SAS token Read and List only?
Did you copy the SAS token for LRS after the question mark ( ? ), with content starting like this:
sv=2020-02-10... ?
Is the SAS token validity time applicable for the time window of starting and completing the migration?
There might be mismatches due to the different time zones used for SQL Managed Instance and the SAS
token. Try regenerating the SAS token and extending the token validity of the time window before and after
the current date.
Are the database name, resource group name, and managed instance name spelled correctly?
If you started LRS in autocomplete mode, was a valid filename for the last backup file specified?

Next steps
Learn more about migrating to SQL Managed Instance using the link feature.
Learn more about migrating from SQL Server to SQL Managed Instance.
Learn more about differences between SQL Server and SQL Managed Instance.
Learn more about best practices to cost and size workloads migrated to Azure.
Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance

APPLIES TO: Azure SQL Managed Instance


When you're migrating a database protected by Transparent Data Encryption (TDE) to Azure SQL Managed
Instance using the native restore option, the corresponding certificate from the SQL Server instance needs to be
migrated before database restore. This article walks you through the process of manual migration of the
certificate to Azure SQL Managed Instance:
Export the certificate to a Personal Information Exchange (.pfx) file
Extract the certificate from a file to a base-64 string
Upload it using a PowerShell cmdlet
For an alternative option using a fully managed service for seamless migration of both a TDE-protected
database and a corresponding certificate, see How to migrate your on-premises database to Azure SQL
Managed Instance using Azure Database Migration Service.

IMPORTANT
A migrated certificate is used for restore of the TDE-protected database only. Soon after restore is done, the migrated
certificate gets replaced by a different protector, either a service-managed certificate or an asymmetric key from the key
vault, depending on the type of the TDE you set on the instance.

Prerequisites
To complete the steps in this article, you need the following prerequisites:
Pvk2Pfx command-line tool installed on the on-premises server or other computer with access to the
certificate exported as a file. The Pvk2Pfx tool is part of the Enterprise Windows Driver Kit, a self-contained
command-line environment.
Windows PowerShell version 5.0 or higher installed.

PowerShell
Azure CLI

Make sure you have the following:


Azure PowerShell module installed and updated.
Az.Sql module.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
IMPORTANT
The PowerShell Azure Resource Manager module is still supported by Azure SQL Managed Instance, but all future
development is for the Az.Sql module. For these cmdlets, see AzureRM.Sql. The arguments for the commands in the Az
module and in the AzureRM modules are substantially identical.

Run the following commands in PowerShell to install/update the module:

Install-Module -Name Az.Sql
Update-Module -Name Az.Sql

Export the TDE certificate to a .pfx file


The certificate can be exported directly from the source SQL Server instance, or from the certificate store if it's
being kept there.
Export the certificate from the source SQL Server instance
Use the following steps to export the certificate with SQL Server Management Studio and convert it into .pfx
format. The generic names TDE_Cert and full_path are used for the certificate name and file paths throughout
the steps; replace them with the actual names.
1. In SSMS, open a new query window and connect to the source SQL Server instance.
2. Use the following script to list TDE-protected databases and get the name of the certificate protecting
encryption of the database to be migrated:

USE master
GO
SELECT db.name as [database_name], cer.name as [certificate_name]
FROM sys.dm_database_encryption_keys dek
LEFT JOIN sys.certificates cer
ON dek.encryptor_thumbprint = cer.thumbprint
INNER JOIN sys.databases db
ON dek.database_id = db.database_id
WHERE dek.encryption_state = 3

3. Execute the following script to export the certificate to a pair of files (.cer and .pvk), keeping the public
and private key information:
USE master
GO
BACKUP CERTIFICATE TDE_Cert
TO FILE = 'c:\full_path\TDE_Cert.cer'
WITH PRIVATE KEY (
FILE = 'c:\full_path\TDE_Cert.pvk',
ENCRYPTION BY PASSWORD = '<SomeStrongPassword>'
)

4. Use the PowerShell console to copy certificate information from a pair of newly created files to a .pfx file,
using the Pvk2Pfx tool:

.\pvk2pfx -pvk c:/full_path/TDE_Cert.pvk -pi "<SomeStrongPassword>" -spc c:/full_path/TDE_Cert.cer -pfx c:/full_path/TDE_Cert.pfx

Export the certificate from a certificate store


If the certificate is kept in the SQL Server local machine certificate store, it can be exported using the following
steps:
1. Open the PowerShell console and execute the following command to open the Certificates snap-in of
Microsoft Management Console:

certlm

2. In the Certificates MMC snap-in, expand the path Personal > Certificates to see the list of certificates.
3. Right-click the certificate and click Export.
4. Follow the wizard to export the certificate and private key to a .pfx format.

Upload the certificate to Azure SQL Managed Instance using an Azure PowerShell cmdlet
PowerShell
Azure CLI

1. Start with preparation steps in PowerShell:


# import the module into the PowerShell session
Import-Module Az
# connect to Azure with an interactive dialog for sign-in
Connect-AzAccount
# list available subscriptions and copy the ID of the subscription that the target managed instance belongs to
Get-AzSubscription
# set subscription for the session
Select-AzSubscription <subscriptionId>

2. Once all preparation steps are done, run the following commands to upload base-64 encoded certificate
to the target managed instance:

# If you are using PowerShell 6.0 or higher, run this command:
$fileContentBytes = Get-Content 'C:/full_path/TDE_Cert.pfx' -AsByteStream
# If you are using PowerShell 5.x, uncomment and run this command instead of the one above:
# $fileContentBytes = Get-Content 'C:/full_path/TDE_Cert.pfx' -Encoding Byte
$base64EncodedCert = [System.Convert]::ToBase64String($fileContentBytes)
$securePrivateBlob = $base64EncodedCert | ConvertTo-SecureString -AsPlainText -Force
$password = "<password>"
$securePassword = $password | ConvertTo-SecureString -AsPlainText -Force
Add-AzSqlManagedInstanceTransparentDataEncryptionCertificate -ResourceGroupName "<resourceGroupName>" `
    -ManagedInstanceName "<managedInstanceName>" -PrivateBlob $securePrivateBlob -Password $securePassword

The certificate is now available to the specified managed instance, and the backup of the corresponding TDE-
protected database can be restored successfully.

Next steps
In this article, you learned how to migrate a certificate protecting the encryption key of a database with
Transparent Data Encryption, from the on-premises or IaaS SQL Server instance to Azure SQL Managed
Instance.
See Restore a database backup to Azure SQL Managed Instance to learn how to restore a database backup to
Azure SQL Managed Instance.
Prepare your environment for a link - Azure SQL Managed Instance

APPLIES TO: Azure SQL Managed Instance


This article teaches you how to prepare your environment for a Managed Instance link so that you can replicate
databases from SQL Server to Azure SQL Managed Instance.

NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.

Prerequisites
To use the link with Azure SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.

Prepare your SQL Server instance


To prepare your SQL Server instance, you need to validate that:
You're on the minimum supported version.
You've enabled the availability groups feature.
You've added the proper trace flags at startup.
Your databases are in full recovery mode and backed up.
You'll need to restart SQL Server for these changes to take effect.
Install service updates
To check your SQL Server version, run the following Transact-SQL (T-SQL) script on SQL Server:

-- Run on SQL Server


-- Shows the version and CU of the SQL Server
SELECT @@VERSION as 'SQL Server version'

Ensure that your SQL Server version has the appropriate servicing update installed, as listed below. You must
restart your SQL Server instance during the update.

SQL Server 2022 (16.x) Preview
  Editions: Evaluation Edition
  Host OS: Windows Server
  Servicing update requirement: Must sign up at https://aka.ms/mi-link-2022-signup to participate in the preview experience.

SQL Server 2019 (15.x)
  Editions: Enterprise or Developer
  Host OS: Windows Server
  Servicing update requirement: SQL Server 2019 CU15 (KB5008996), or above

SQL Server 2016 (13.x)
  Editions: Enterprise, Standard, or Developer
  Host OS: Windows Server
  Servicing update requirement: SQL Server 2016 SP3 (KB5003279) and SQL Server 2016 Azure Connect pack (KB5014242)

Create a database master key in the master database


Create a database master key in the master database, if one isn't already present. Insert your password in place of
<strong_password> in the script below, and keep it in a confidential and secure place. Run this T-SQL script on
SQL Server:

-- Run on SQL Server


-- Create a master key
USE MASTER
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'

To make sure that you have the database master key, use the following T-SQL script on SQL Server:

-- Run on SQL Server


SELECT * FROM sys.symmetric_keys WHERE name LIKE '%DatabaseMasterKey%'

Enable availability groups


The link feature for SQL Managed Instance relies on the Always On availability groups feature, which isn't
enabled by default. To learn more, review Enable the Always On availability groups feature.
To confirm that the Always On availability groups feature is enabled, run the following T-SQL script on SQL
Server:

-- Run on SQL Server


-- Is Always On enabled on this SQL Server
DECLARE @IsHadrEnabled sql_variant = (select SERVERPROPERTY('IsHadrEnabled'))
SELECT
@IsHadrEnabled as 'Is HADR enabled',
CASE @IsHadrEnabled
WHEN 0 THEN 'Always On availability groups is DISABLED.'
WHEN 1 THEN 'Always On availability groups is ENABLED.'
ELSE 'Unknown status.'
END
as 'HADR status'

The above query displays whether the Always On availability groups feature is enabled on your SQL Server instance.

IMPORTANT
For SQL Server 2016, if you need to enable the Always On availability groups feature, you'll need to complete the
extra steps documented in prepare SQL Server 2016 prerequisites. These extra steps aren't required for the higher
SQL Server versions (2019-2022) supported by the link.

If the availability groups feature isn't enabled, follow these steps to enable it, or otherwise skip to the next
section:
1. Open SQL Server Configuration Manager.
2. Select SQL Server Services from the left pane.
3. Right-click the SQL Server service, and then select Properties.
4. Go to the Always On Availability Groups tab.
5. Select the Enable Always On Availability Groups checkbox, and then select OK.

   If you're using SQL Server 2016, and the Enable Always On Availability Groups option is disabled with the
   message "This computer is not a node in a failover cluster.", follow the extra steps described in prepare
   SQL Server 2016 prerequisites. Once you've completed those steps, come back and retry this step.
6. Select OK in the dialog.
7. Restart the SQL Server service.
Enable startup trace flags
To optimize the performance of your SQL Managed Instance link, we recommend enabling the following trace
flags at startup:
-T1800 : This trace flag optimizes performance when the log files for the primary and secondary replicas in
an availability group are hosted on disks with different sector sizes, such as 512 bytes and 4K. If both primary
and secondary replicas have a disk sector size of 4K, this trace flag isn't required. To learn more, review
KB3009974.
-T9567 : This trace flag enables compression of the data stream for availability groups during automatic
seeding. The compression increases the load on the processor but can significantly reduce transfer time
during seeding.
To enable these trace flags at startup, use the following steps:
1. Open SQL Server Configuration Manager.
2. Select SQL Server Services from the left pane.
3. Right-click the SQL Server service, and then select Properties.
4. Go to the Startup Parameters tab. In Specify a startup parameter, enter -T1800 and select Add to
   add the startup parameter. Then enter -T9567 and select Add to add the other trace flag. Select Apply to
   save your changes.
5. Select OK to close the Properties window.


To learn more, review the syntax for enabling trace flags.
Restart SQL Server and validate the configuration
After you've ensured that you're on a supported version of SQL Server, enabled the Always On availability
groups feature, and added your startup trace flags, restart your SQL Server instance to apply all of these
changes:
1. Open SQL Server Configuration Manager.
2. Select SQL Server Services from the left pane.
3. Right-click the SQL Server service, and then select Restart.
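
If you prefer to script the restart instead of using Configuration Manager, a sketch like the following works for a default instance. It assumes the service name MSSQLSERVER; named instances use MSSQL$<InstanceName>.

# Sketch: restart the default SQL Server instance service (run as Administrator).
Restart-Service -Name "MSSQLSERVER" -Force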

After the restart, run the following T-SQL script on SQL Server to validate the configuration of your SQL Server
instance:

-- Run on SQL Server


-- Shows the version and CU of SQL Server
SELECT @@VERSION as 'SQL Server version'

-- Shows if the Always On availability groups feature is enabled


SELECT SERVERPROPERTY ('IsHadrEnabled') as 'Is Always On enabled? (1 true, 0 false)'

-- Lists all trace flags enabled on SQL Server


DBCC TRACESTATUS

Your SQL Server version should be one of the supported versions with service updates applied, the Always On
availability groups feature should be enabled, and you should have the trace flags -T1800 and -T9567 enabled.
The following screenshot is an example of the expected outcome for a SQL Server instance that has been
properly configured:

Configure network connectivity


For the link to work, you must have network connectivity between SQL Server and SQL Managed Instance. The
network option that you choose depends on where your SQL Server instance resides - whether it's on-premises
or on a virtual machine (VM).
SQL Server on Azure Virtual Machines
Deploying SQL Server on Azure Virtual Machines in the same Azure virtual network that hosts SQL Managed
Instance is the simplest method, because network connectivity will automatically exist between the two
instances. To learn more, see the detailed tutorial Deploy and configure an Azure VM to connect to Azure SQL
Managed Instance.
If your SQL Server on Azure Virtual Machines instance is in a different virtual network from your managed
instance, either connect the two Azure virtual networks by using global virtual network peering or configure
VPN gateways.

NOTE
Global virtual network peering is enabled by default on managed instances provisioned after November 2020. Raise a
support ticket to enable global virtual network peering on older instances.

SQL Server outside Azure


If your SQL Server instance is hosted outside Azure, establish a VPN connection between SQL Server and SQL
Managed Instance by using either of these options:
Site-to-site VPN connection
Azure ExpressRoute connection

TIP
We recommend ExpressRoute for the best network performance when you're replicating data. Provision a gateway with
enough bandwidth for your use case.

Network ports between the environments


Regardless of the connectivity mechanism, there are requirements that must be met for the network traffic to
flow between the environments:
The Network Security Group (NSG) rules on the subnet hosting managed instance needs to allow:
Inbound traffic on port 5022 and port range 11000-11999 from the network hosting SQL Server
Firewall on the network hosting SQL Server, and the host OS needs to allow:
Inbound traffic on port 5022 from the entire subnet range hosting SQL Managed Instance

Port numbers can't be changed or customized. IP address ranges of subnets hosting managed instance, and SQL
Server must not overlap.
The following table describes port actions for each environment:
SQL Server (in Azure): Open both inbound and outbound traffic on port 5022 for the network firewall to the
entire subnet IP range of SQL Managed Instance. If necessary, do the same on the SQL Server host OS
(Windows/Linux) firewall. Create a network security group (NSG) rule in the virtual network that hosts the VM
to allow communication on port 5022.

SQL Server (outside Azure): Open both inbound and outbound traffic on port 5022 for the network firewall to the
entire subnet IP range of SQL Managed Instance. If necessary, do the same on the SQL Server host OS
(Windows/Linux) firewall.

SQL Managed Instance: Create an NSG rule in the Azure portal to allow inbound and outbound traffic from the
IP address and the network hosting SQL Server on port 5022 and port range 11000-11999.

Use the following PowerShell script on the Windows host OS of the SQL Server instance to open ports in the
Windows firewall:

New-NetFirewallRule -DisplayName "Allow TCP port 5022 inbound" -Direction Inbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
New-NetFirewallRule -DisplayName "Allow TCP port 5022 outbound" -Direction Outbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP

Test bidirectional network connectivity


Bidirectional network connectivity between SQL Server and SQL Managed Instance is necessary for the link to
work. After you open ports on the SQL Server side and configure an NSG rule on the SQL Managed Instance
side, test connectivity.
Test the connection from SQL Server to SQL Managed Instance
To check if SQL Server can reach SQL Managed Instance, use the following tnc command in PowerShell from
the SQL Server host machine. Replace <ManagedInstanceFQDN> with the fully qualified domain name (FQDN) of
the managed instance. You can copy the FQDN from the managed instance's overview page in the Azure portal.

tnc <ManagedInstanceFQDN> -port 5022

A successful test shows TcpTestSucceeded : True .

If the response is unsuccessful, verify the following network settings:


There are rules in both the network firewall and the SQL Server host OS (Windows/Linux) firewall that allow
traffic to the entire subnet IP range of SQL Managed Instance.
There's an NSG rule that allows communication on port 5022 for the virtual network that hosts SQL
Managed Instance.
Test the connection from SQL Managed Instance to SQL Server
To check that SQL Managed Instance can reach SQL Server, you first create a test endpoint. Then you use the
SQL Agent to run a PowerShell script with the tnc command pinging SQL Server on port 5022 from the
managed instance.
To create a test endpoint, connect to SQL Server and run the following T-SQL script:

-- Run on SQL Server


-- Create the certificate needed for the test endpoint
USE MASTER
CREATE CERTIFICATE TEST_CERT
WITH SUBJECT = N'Certificate for SQL Server',
EXPIRY_DATE = N'3/30/2051'
GO

-- Create the test endpoint on SQL Server


USE MASTER
CREATE ENDPOINT TEST_ENDPOINT
STATE=STARTED
AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (
ROLE=ALL,
AUTHENTICATION = CERTIFICATE TEST_CERT,
ENCRYPTION = REQUIRED ALGORITHM AES
)

To verify that the SQL Server endpoint is receiving connections on port 5022, run the following PowerShell
command on the host operating system of your SQL Server instance:

tnc localhost -port 5022

A successful test shows TcpTestSucceeded : True . You can then proceed to creating a SQL Agent job on the
managed instance to try testing the SQL Server test endpoint on port 5022 from the managed instance.
Next, create a SQL Agent job on the managed instance called NetHelper by running the following T-SQL script
on the managed instance. Replace:
<SQL_SERVER_IP_ADDRESS> with the IP address of SQL Server that can be accessed from managed instance.
-- Run on managed instance
-- SQL_SERVER_IP_ADDRESS should be an IP address that can be accessed from the SQL Managed Instance host machine.
DECLARE @SQLServerIpAddress NVARCHAR(MAX) = '<SQL_SERVER_IP_ADDRESS>' -- insert your SQL Server IP address here
DECLARE @tncCommand NVARCHAR(MAX) = 'tnc ' + @SQLServerIpAddress + ' -port 5022 -InformationLevel Quiet'
DECLARE @jobId BINARY(16)

IF EXISTS(select * from msdb.dbo.sysjobs where name = 'NetHelper') THROW 70000, 'Agent job NetHelper already exists. Please rename the job, or drop the existing job before creating it again.', 1
-- To delete NetHelper job run: EXEC msdb.dbo.sp_delete_job @job_name=N'NetHelper'

EXEC msdb.dbo.sp_add_job @job_name=N'NetHelper',


@enabled=1,
@description=N'Test Managed Instance to SQL Server network connectivity on port 5022.',
@category_name=N'[Uncategorized (Local)]',
@owner_login_name=N'cloudSA', @job_id = @jobId OUTPUT

EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'TNC network probe from MI to SQL Server',


@step_id=1,
@os_run_priority=0, @subsystem=N'PowerShell',
@command = @tncCommand,
@database_name=N'master',
@flags=40

EXEC msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1

EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'

TIP
If you need to modify the IP address of your SQL Server for the connectivity probe from the managed instance,
delete the NetHelper job by running EXEC msdb.dbo.sp_delete_job @job_name=N'NetHelper' , and then re-create the
NetHelper job by using the script above.

Then, create a stored procedure ExecuteNetHelper that will help run the job and obtain results from the network
probe. Run the following T-SQL script on managed instance:
-- Run on managed instance
IF EXISTS(SELECT * FROM sys.objects WHERE name = 'ExecuteNetHelper')
    THROW 70001, 'Stored procedure ExecuteNetHelper already exists. Rename or drop the existing procedure before creating it again.', 1
GO
CREATE PROCEDURE ExecuteNetHelper AS
-- To delete the procedure run: DROP PROCEDURE ExecuteNetHelper
BEGIN
-- Start the job.
DECLARE @NetHelperstartTimeUtc datetime = getutcdate()
DECLARE @stop_exec_date datetime = null
EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'

-- Wait for job to complete and then see the outcome.


WHILE (@stop_exec_date is null)
BEGIN

-- Wait and see if the job has completed.


WAITFOR DELAY '00:00:01'
SELECT @stop_exec_date = sja.stop_execution_date
FROM msdb.dbo.sysjobs sj JOIN msdb.dbo.sysjobactivity sja ON sj.job_id = sja.job_id
WHERE sj.name = 'NetHelper'

-- If job has completed, get the outcome of the network test.


IF (@stop_exec_date is not null)
BEGIN
SELECT
sj.name JobName, sjsl.date_modified as 'Date executed', sjs.step_name as 'Step executed', sjsl.log
as 'Connectivity status'
FROM
msdb.dbo.sysjobs sj
LEFT OUTER JOIN msdb.dbo.sysjobsteps sjs ON sj.job_id = sjs.job_id
LEFT OUTER JOIN msdb.dbo.sysjobstepslogs sjsl ON sjs.step_uid = sjsl.step_uid
WHERE
sj.name = 'NetHelper'
END

-- In case of operation timeout (90 seconds), print timeout message.


IF (datediff(second, @NetHelperstartTimeUtc, getutcdate()) > 90)
BEGIN
SELECT 'NetHelper timed out during the network check. Please investigate SQL Agent logs for more
information.'
BREAK;
END
END
END

Run the following query on managed instance to execute the stored procedure that will execute the NetHelper
agent job and show the resulting log:

-- Run on managed instance


EXEC ExecuteNetHelper

If the connection was successful, the log will show True . If the connection was unsuccessful, the log will show
False .

If the connection was unsuccessful, verify the following items:


The firewall on the host SQL Server instance allows inbound and outbound communication on port 5022.
An NSG rule for the virtual network that hosts SQL Managed Instance allows communication on port 5022.
If your SQL Server instance is on an Azure VM, an NSG rule allows communication on port 5022 on the
virtual network that hosts the VM.
SQL Server is running.
The test endpoint exists on SQL Server.
After resolving any issues, rerun the NetHelper network probe by running EXEC ExecuteNetHelper on the managed
instance.
Finally, after the network test has been successful, drop the test endpoint and certificate on SQL Server by using
the following T-SQL commands:

-- Run on SQL Server


DROP ENDPOINT TEST_ENDPOINT
GO
DROP CERTIFICATE TEST_CERT
GO

Caution

Proceed with the next steps only if you've validated network connectivity between your source and target
environments. Otherwise, troubleshoot network connectivity issues before proceeding.

Migrate a certificate of a TDE-protected database (optional)


If you're migrating a SQL Server database protected by Transparent Data Encryption (TDE) to a managed
instance, you must migrate the corresponding encryption certificate from the on-premises or Azure VM SQL
Server instance to the managed instance before using the link. For detailed steps, see Migrate a TDE certificate
to a managed instance.

Install SSMS
SQL Server Management Studio (SSMS) is the easiest way to use the SQL Managed Instance link. Download
SSMS version 18.12 or later and install it on your client machine.
After installation finishes, open SSMS and connect to your supported SQL Server instance. Right-click a user
database and validate that the Azure SQL Managed Instance link option appears on the menu.
Next steps
After you've prepared your environment, you're ready to start replicating your database. To learn more,
review Link feature for Azure SQL Managed Instance.
Prepare SQL Server 2016 prerequisites - Azure SQL Managed Instance link

APPLIES TO: Azure SQL Managed Instance


This article teaches you how to enable Always On with Windows Server Failover Cluster (WSFC) on your SQL
Server 2016 as an extra step to prepare your environment for Managed Instance link.
The extra steps described in this guide are mandatory for SQL Server 2016 only, because this version of SQL Server
can't enable the Always On option without a Windows Server Failover Cluster present on the host Windows OS
machine. The minimum requirement to enable Always On on SQL Server 2016 is to create a local single-node
(single machine) cluster. Multiple nodes, and therefore additional SQL Servers, aren't required. The link can
optionally also support multiple-node cluster configurations, in case you have this type of environment for any
SQL Server version (2016-2022).

Install WSFC module on Windows Server


Run the following PowerShell command as Administrator on Windows Server hosting the SQL Server to install
Windows Server Failover Cluster module.

# Run as Administrator in PowerShell on Windows Server OS hosting the SQL Server


# This installs WSFC module on the host OS
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

Alternatively, you can also use Server Manager to install WSFC module using the graphical user interface.

Create single-node cluster


The next step is to create a cluster on the Windows OS hosting SQL Server. This can be achieved using two
methods:
1. Simple PowerShell command -- has certain limitations listed below, or
2. Failover Cluster Manager application -- provides full configuration flexibility.
Both methods are described below.
Create cluster using PowerShell
The simplest way to create a local single-node cluster is to run a PowerShell command on the Windows Server OS hosting SQL Server. This method has limitations because it's intended for single-server machines that aren't joined to a domain. Creating a cluster this way won't allow you to administer it with the Failover Cluster Manager graphical user interface.
If you need a quick way to create a single-node cluster on your machine, run the following PowerShell command. Replace:
<ClusterName> in the script with your desired cluster name. The name should be a single word, with no spaces or special characters (for example, WSFCluster).
# Run as Administrator in PowerShell on Windows Server OS hosting the SQL Server
# This creates a single-node cluster on the host OS, not joined in the domain
New-Cluster -Name "<ClusterName>" -AdministrativeAccessPoint None -Verbose -Force

If you need to remove the cluster later, you can do so only with the PowerShell command Remove-Cluster.
If you've successfully created the cluster using this method, skip ahead to Grant permissions in SQL Server for WSFC.
Create cluster using Failover Cluster Manager application
Alternatively, a more flexible way to create a cluster on the Windows OS hosting the SQL Server is through the
graphical user interface, using the Failover Cluster Manager application. Follow these steps:
1. Find out your Windows Server name by executing hostname command from the command prompt.
2. Record the output of this command, or keep the window open, because you'll use this name in a later step.

3. Open Failover Cluster Manager by pressing Windows key + R on the keyboard, type
%windir%\system32\Cluadmin.msc , and click OK.

Alternatively, Failover Cluster Manager can be accessed by opening Server Manager, selecting Tools in
the upper right corner, and then selecting Failover Cluster Manager.
4. In Windows Cluster manager, click on Create Cluster option.

5. On the Before You Begin screen, click Next.


6. On the Select Server screen, enter your Windows Server name (type, or copy-paste the output from the
earlier executed hostname command), click Add, and then Next.
7. On the Validation Warning screen, leave Yes selected, and click Next.
8. On the Before You Begin screen, click Next.
9. On the Testing Options screen, leave Run all tests on, and click Next.
10. On the Confirmation screen, click Next.
11. On the Validation screen, wait for the validation to complete.
12. On the Summary screen, click Finish.
13. On the Access Point for Administering the Cluster screen, type your cluster name, for example
WSFCluster , and then click Next.

14. On the Confirmation screen, click Next.


15. On the Creating New Cluster screen, wait for the creation to complete.
16. On the Summary screen, click Finish.
With the above steps, you've created the local single-node Windows Server Failover Cluster.

Verification
To verify that single-node WSFC cluster has been created, follow these steps:
1. In the Failover Cluster Manager, click on the cluster name on the left-hand side, and expand it by clicking
on the > arrow.
If you've closed and reopened Failover Cluster Manager after creating the cluster, the cluster name might not show up on the left-hand side.
2. Click on Connect to Cluster on the right-hand side, choose to connect to <Cluster on this server...> ,
and click OK.
3. Click on Nodes.

You should see the local machine added as a single node to this cluster, with a Status of Up. This verification confirms that the WSFC configuration has been completed successfully. You can now close the Failover Cluster Manager tool.
Next, verify that Always On option can be enabled on SQL Server by following these steps:
1. Open SQL Server Configuration Manager
2. Double-click on SQL Server
3. Click on Always On High Availability tab

You should see the name of the WSFC you've created, and you should be able to select the Enable Always On Availability Groups check box. This verification confirms that the configuration has been completed successfully.
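As an optional cross-check from T-SQL, the following minimal sketch should return 1 for IsHadrEnabled once you've enabled the Enable Always On Availability Groups option and restarted the SQL Server service (as described in the prepare your environment guide):

-- Run on SQL Server

-- Returns 1 when Always On availability groups are enabled on this instance
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled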

Grant permissions in SQL Server for WSFC


IMPORTANT
Granting permissions in SQL Server 2016 to Windows OS system account is mandatory. These permissions enable the
SQL Server to work with Windows Server Failover Cluster. Without these permissions, creating an Availability Group on
SQL Server 2016 will fail.

Next, grant permissions on SQL Server to the NT AUTHORITY\SYSTEM Windows host OS system account, to enable the creation of availability groups in SQL Server using WSFC. Execute the following T-SQL script on your SQL Server:
1. Log in to your SQL Server, using a client such as SSMS.
2. Execute the following T-SQL script.

-- Run on SQL Server


-- Grant permissions to NT Authority \ System to create AG on this SQL Server
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]
GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO

Next steps
Continue preparing your environment for the link by returning to the enable Always On on your SQL Server section in the prepare your environment for a link guide.
To learn more about configuring multiple-node WSFC (not mandatory, and only optional for the link), see
Create a failover cluster guide for Windows Server.
Replicate a database by using the link feature in
SSMS - Azure SQL Managed Instance
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article teaches you how to replicate your database from SQL Server to Azure SQL Managed Instance by
using the link feature in SQL Server Management Studio (SSMS).

NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.

Prerequisites
To replicate your databases to SQL Managed Instance through the link, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
SQL Server Management Studio v18.12 or later.
A properly prepared environment.
Set up database recovery and backup
All databases that will be replicated via the link must be in full recovery mode and have at least one full backup.
Use SSMS to back up your database. Follow these steps:
1. In SSMS, right-click the database name on SQL Server.
2. Select Tasks, and then select Back Up.
3. Ensure Backup type is Full.
4. Ensure the Back up to option points to a disk path with sufficient free storage space available.
5. Select OK to complete the full backup.
For more information, see Create a Full Database Backup.
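If you prefer T-SQL over the SSMS dialog, the following is a minimal equivalent sketch; the database name and backup path are placeholders to replace with your own values:

-- Run on SQL Server

-- Set full recovery mode and take a full backup of the database you want to replicate
ALTER DATABASE [<DatabaseName>] SET RECOVERY FULL
GO
BACKUP DATABASE [<DatabaseName>] TO DISK = N'<DiskPath>'
GO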

Replicate a database
In the following steps, you use the Managed Instance link wizard in SSMS to create the link between SQL
Server and SQL Managed Instance. After you create the link, your source database gets a read-only replica copy
on your target managed instance.

NOTE
The link supports replication of user databases only. Replication of system databases is not supported. To replicate
instance-level objects (stored in master or msdb databases), we recommend that you script them out and run T-SQL
scripts on the destination instance.

1. Open SSMS and connect to your SQL Server instance.


2. In Object Explorer, right-click your database, hover over Azure SQL Managed Instance link , and select
Replicate database to open the New Managed Instance link wizard. If your SQL Server version isn't
supported, this option won't be available on the context menu.

3. On the Introduction page of the wizard, select Next.

4. On the SQL Server requirements page, the wizard validates requirements to establish a link to SQL Managed Instance. Select Next after all the requirements are validated.

5. On the Select Databases page, choose one or more databases that you want to replicate to SQL Managed Instance via the link feature. Then select Next.
6. On the Login to Azure and select Managed Instance page, select Sign In to sign in to Microsoft
Azure.

If you're running SSMS on Windows Server, the sign-in screen in some cases might not appear, and you might see the error message: "Content within this application coming from the website listed below is being blocked by Internet Explorer Enhanced Security Configuration." This happens when Windows Server blocks web content from rendering due to its security settings. In this case, you'll need to turn off Internet Explorer Enhanced Security Configuration (ESC) on the Windows Server machine.
7. On the Login to Azure and select Managed Instance page, choose the subscription, resource group,
and target managed instance from the dropdown lists. Select Login and provide login details for SQL
Managed Instance. After you've provided all necessary information, select Next .

8. Review the prepopulated values on the Specify Distributed AG Options page, and change any that
need customization. When you're ready, select Next .
9. Review the actions on the Summary page. Optionally, select Script to create a script that you can run at a later time. When you're ready, select Finish.

10. The Executing actions page displays the progress of each action.
11. After all steps finish, the Results page shows check marks next to the successfully completed actions. You
can now close the window.

View a replicated database


After the link is created, the selected databases are replicated to the managed instance.
Use Object Explorer on your SQL Server instance to view the Synchronized status of the replicated database.
Expand Always On High Availability and Availability Groups to view the distributed availability group
that's created for the link.

Connect to your managed instance and use Object Explorer to view your replicated database. Depending on the
database size and network speed, the database might initially be in a Restoring state. After initial seeding
finishes, the database is restored to the managed instance and ready for read-only workloads.
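If you prefer a query over Object Explorer, the following minimal sketch, run on SQL Server, shows the synchronization state of databases that participate in availability groups:

-- Run on SQL Server

-- Show the synchronization state of databases that participate in availability groups
SELECT
    ag.name AS [Availability group],
    db.name AS [Database name],
    drs.synchronization_state_desc AS [Sync state]
FROM sys.dm_hadr_database_replica_states drs
JOIN sys.databases db ON db.database_id = drs.database_id
JOIN sys.availability_groups ag ON ag.group_id = drs.group_id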
Next steps
To break the link and fail over your database to SQL Managed Instance, see Failover a database. To learn more,
see Link feature for Azure SQL Managed Instance.
Replicate a database with the link feature via T-SQL
and PowerShell scripts - Azure SQL Managed
Instance
7/12/2022 • 21 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts to replicate your database from
SQL Server to Azure SQL Managed Instance by using a Managed Instance link.

NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.
You can also use a SQL Server Management Studio (SSMS) wizard to set up the link to replicate your database.

Prerequisites
To replicate your databases to SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
PowerShell module Az.SQL 3.9.0, or higher
A properly prepared environment.

Set up database recovery and backup


All databases that will be replicated via the link must be in full recovery mode and have at least one backup. Run
the following code on SQL Server for all databases you wish to replicate. Replace <DatabaseName> with your
actual database name.

-- Run on SQL Server


-- Set full recovery mode for all databases you want to replicate.
ALTER DATABASE [<DatabaseName>] SET RECOVERY FULL
GO

-- Execute backup for all databases you want to replicate.


BACKUP DATABASE [<DatabaseName>] TO DISK = N'<DiskPath>'
GO

For more information, see Create a Full Database Backup.

Replicate a database
Use the following instructions to manually set up the link between your SQL Server instance and managed
instance. After the link is created, your source database gets a read-only replica copy on your target managed
instance.
NOTE
The link supports replication of user databases only. Replication of system databases is not supported. To replicate
instance-level objects (stored in master or msdb databases), we recommend that you script them out and run T-SQL
scripts on the destination instance.

Terminology and naming conventions


As you run scripts from this user guide, it's important not to mistake SQL Server and SQL Managed Instance names for their fully qualified domain names (FQDNs). The following list explains what the various names represent and how to obtain their values:

SQL Server name: Short, single-word SQL Server name, for example sqlserver1. To find it, run SELECT @@SERVERNAME from T-SQL.

SQL Server FQDN: Fully qualified domain name (FQDN) of your SQL Server, for example sqlserver1.domain.com. To find it, see your network (DNS) configuration on-premises, or the server name if you're using an Azure virtual machine (VM).

SQL Managed Instance name: Short, single-word SQL Managed Instance name, for example managedinstance1. To find it, see the name of your managed instance in the Azure portal.

SQL Managed Instance FQDN: Fully qualified domain name (FQDN) of your SQL Managed Instance, for example managedinstance1.6d710bcf372b.database.windows.net. To find it, see the host name on the SQL Managed Instance overview page in the Azure portal.

Resolvable domain name: DNS name that can be resolved to an IP address. For example, running nslookup sqlserver1.domain.com should return an IP address such as 10.0.0.1. To find it, run the nslookup command from the command prompt.

SQL Server IP: IP address of your SQL Server. If SQL Server has multiple IP addresses, choose the one that is accessible from Azure. To find it, run the ipconfig command from the command prompt of the host OS running SQL Server.

Establish trust between instances


The first step in setting up a link is to establish trust between the two instances and secure the endpoints that
are used to communicate and encrypt data across the network. Distributed availability groups use the existing
availability group database mirroring endpoint, rather than having their own dedicated endpoint. This is why
security and trust need to be configured between the two entities through the availability group database
mirroring endpoint.
NOTE
The link is based on Always On technology. The database mirroring endpoint is a special-purpose endpoint that is used exclusively by Always On to receive connections from other server instances. The term database mirroring endpoint shouldn't be confused with the legacy SQL Server database mirroring feature; they aren't the same.

Certificate-based trust is the only supported way to secure database mirroring endpoints on SQL Server and
SQL Managed Instance. If you have existing availability groups that use Windows authentication, you need to add
certificate-based trust to the existing mirroring endpoint as a secondary authentication option. You can do this
by using the ALTER ENDPOINT statement, as shown further in this article.

IMPORTANT
Certificates are generated with an expiration date and time. They must be renewed and rotated before they expire.

Here's an overview of the process to secure database mirroring endpoints for both SQL Server and SQL
Managed Instance:
1. Generate a certificate on SQL Server and obtain its public key.
2. Obtain a public key of the SQL Managed Instance certificate.
3. Exchange the public keys between SQL Server and SQL Managed Instance.
4. Import Azure-trusted root certificate authority keys to SQL Server
The following sections describe these steps in detail.
Create a certificate on SQL Server and import its public key to SQL Managed Instance
First, create database master key in the master database, if not already present. Insert your password in place of
<strong_password> in the script below, and keep it in a confidential and secure place. Run this T-SQL script on
SQL Server:

-- Run on SQL Server


-- Create a master key encryption password
-- Keep the password confidential and in a secure place
USE MASTER
IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE symmetric_key_id = 101)
BEGIN
PRINT 'Creating master key.' + CHAR(13) + 'Keep the password confidential and in a secure place.'
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong_password>'
END
ELSE
PRINT 'Master key already exists.'
GO

Then, generate an authentication certificate on SQL Server. In the script below replace:
@cert_expiry_date with the desired certificate expiration date (future date).

Record this date and set a reminder to rotate (update) the SQL Server certificate before it expires, to ensure continuous operation of the link.

IMPORTANT
It is strongly recommended to use the auto-generated certificate name from this script. While customizing your own
certificate name on SQL Server is allowed, this name should not contain any \ characters.
-- Create the SQL Server certificate for the instance link
USE MASTER

-- Customize SQL Server certificate expiration date by adjusting the date below
DECLARE @cert_expiry_date AS varchar(max)='03/30/2025'

-- Build the query to generate the certificate


DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
DECLARE @sqlserver_certificate_subject NVARCHAR(MAX) = N'Certificate for ' + @sqlserver_certificate_name
DECLARE @create_sqlserver_certificate_command NVARCHAR(MAX) = N'CREATE CERTIFICATE [' +
@sqlserver_certificate_name + '] ' + char (13) +
' WITH SUBJECT = ''' + @sqlserver_certificate_subject + ''',' + char (13) +
' EXPIRY_DATE = '''+ @cert_expiry_date + ''''+ char (13)
IF NOT EXISTS (SELECT name from sys.certificates WHERE name = @sqlserver_certificate_name)
BEGIN
PRINT (@create_sqlserver_certificate_command)
-- Execute the query to create SQL Server certificate for the instance link
EXEC sp_executesql @stmt = @create_sqlserver_certificate_command
END
ELSE
PRINT 'Certificate ' + @sqlserver_certificate_name + ' already exists.'
GO

Then, use the following T-SQL query on SQL Server to verify that the certificate has been created:

-- Run on SQL Server


USE MASTER
GO
SELECT * FROM sys.certificates WHERE pvt_key_encryption_type = 'MK'

In the query results, you'll see that the certificate has been encrypted with the master key.
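You can also check the certificate's expiration date from the same catalog view. The following minimal sketch lists certificate names and expiry dates so you can plan rotation before expiry:

-- Run on SQL Server

-- List certificate names and expiry dates to plan rotation ahead of time
USE MASTER
GO
SELECT name, expiry_date FROM sys.certificates ORDER BY expiry_date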
Now, you can get the public key of the generated certificate on SQL Server:

-- Run on SQL Server


-- Show the name and the public key of generated SQL Server certificate
USE MASTER
GO
DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
DECLARE @PUBLICKEYENC VARBINARY(MAX) = CERTENCODED(CERT_ID(@sqlserver_certificate_name));
SELECT @sqlserver_certificate_name as 'SQLServerCertName'
SELECT @PUBLICKEYENC AS SQLServerPublicKey;

Save the values of SQLServerCertName and SQLServerPublicKey from the output, because you'll need them in the next step.
For the next step, use PowerShell with the installed Az.Sql module 3.9.0, or higher. Or preferably, use Azure
Cloud Shell online from the web browser to run the commands, because it's always updated with the latest
module versions.
First, ensure that you're logged in to Azure and that you've selected the subscription where your managed
instance is hosted. Selecting the proper subscription is especially important if you have more than one Azure
subscription on your account. Replace:
<SubscriptionID> with your Azure subscription ID.
# Run in Azure Cloud Shell (select PowerShell console)

# Enter your Azure subscription ID


$SubscriptionID = "<SubscriptionID>"

# Login to Azure and select subscription ID


if ((Get-AzContext ) -eq $null)
{
echo "Logging to Azure subscription"
Login-AzAccount
}
Select-AzSubscription -SubscriptionName $SubscriptionID

Then, run the following script in Azure Cloud Shell (PowerShell console). Fill out necessary user information,
copy it, paste it, and then run the script. Replace:
<SQLServerPublicKey> with the public portion of the SQL Server certificate in binary format, which you've
recorded in the previous step. It's a long string value that starts with 0x .
<SQLServerCertName> with the SQL Server certificate name you've recorded in the previous step.
<ManagedInstanceName> with the short name of your managed instance.

# Run in Azure Cloud Shell (select PowerShell console)


# ===============================================================================
# POWERSHELL SCRIPT TO IMPORT SQL SERVER PUBLIC CERTIFICATE TO MANAGED INSTANCE
# ===== Enter user variables here ====

# Enter the name for the server SQLServerCertName certificate – for example, "Cert_sqlserver1_endpoint"
$CertificateName = "<SQLServerCertName>"

# Insert the certificate public key blob that you got from SQL Server – for example, "0x1234567..."
$PublicKeyEncoded = "<SQLServerPublicKey>"

# Enter your managed instance short name – for example, "sqlmi"


$ManagedInstanceName = "<ManagedInstanceName>"

# ==== Do not customize the below ====

# Find out the resource group name


$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName

# Upload the public key of the authentication certificate from SQL Server to Azure.
New-AzSqlInstanceServerTrustCertificate -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName `
    -Name $CertificateName -PublicKey $PublicKeyEncoded

The result of this operation is a summary of the SQL Server certificate that was uploaded to Azure.
If needed, to list all SQL Server certificates uploaded to a managed instance, use the Get-AzSqlInstanceServerTrustCertificate PowerShell command in Azure Cloud Shell. To remove a SQL Server certificate uploaded to a managed instance, use the Remove-AzSqlInstanceServerTrustCertificate PowerShell command in Azure Cloud Shell.
Get the certificate public key from SQL Managed Instance and import it to SQL Server
The certificate for securing the link endpoint is automatically generated on Azure SQL Managed Instance. This
section describes how to get the certificate public key from SQL Managed Instance, and how to import it to SQL
Server.
Run the following script in Azure Cloud Shell. Replace:
<SubscriptionID> with your Azure subscription ID.
<ManagedInstanceName> with the short name of your managed instance.
# Run in Azure Cloud Shell (select PowerShell console)
# ===============================================================================
# POWERSHELL SCRIPT TO EXPORT MANAGED INSTANCE PUBLIC CERTIFICATE
# ===== Enter user variables here ====

# Enter your managed instance short name – for example, "sqlmi"


$ManagedInstanceName = "<ManagedInstanceName>"

# ==== Do not customize the below ====

# Find out the resource group name


$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName

# Fetch the public key of the authentication certificate from Managed Instance.
# Outputs a binary key in the property PublicKey.
Get-AzSqlInstanceEndpointCertificate -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName `
    -EndpointType "DATABASE_MIRRORING" | Out-String

Copy the entire PublicKey output (starts with 0x ) from the Azure Cloud Shell as you'll require it in the next step.
Alternatively, if you encounter issues in copy-pasting the PublicKey from Azure Cloud Shell console, you could
also run T-SQL command EXEC sp_get_endpoint_certificate 4 on managed instance to obtain its public key for
the link endpoint.
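For convenience, that alternative as a code block:

-- Run on managed instance

-- Returns the public key of the certificate that secures the link endpoint
EXEC sp_get_endpoint_certificate 4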
Next, import the obtained public key of managed instance security certificate to SQL Server. Run the following
query on SQL Server. Replace:
<ManagedInstanceFQDN> with the fully qualified domain name of managed instance.
<PublicKey> with the PublicKey value obtained in the previous step (from Azure Cloud Shell, starting with
0x ). You don't need to use quotation marks.

IMPORTANT
The name of the certificate must be the SQL Managed Instance FQDN and must not be modified. The link won't be operational if you use a custom name.

-- Run on SQL Server


USE MASTER
CREATE CERTIFICATE [<ManagedInstanceFQDN>]
FROM BINARY = <PublicKey>

Import Azure-trusted root certificate authority keys to SQL Server


Importing public root certificate keys of Microsoft and DigiCert certificate authorities (CA) to SQL Server is
required for your SQL Server to trust certificates issued by Azure for database.windows.net domains.
First, import Microsoft PKI root-authority certificate on SQL Server:
-- Run on SQL Server
-- Import Microsoft PKI root-authority certificate (trusted by Azure), if not already present
IF NOT EXISTS (SELECT name FROM sys.certificates WHERE name = N'MicrosoftPKI')
BEGIN
PRINT 'Creating MicrosoftPKI certificate.'
CREATE CERTIFICATE [MicrosoftPKI] FROM BINARY =
0x308205A830820390A00302010202101ED397095FD8B4B347701EAABE7F45B3300D06092A864886F70D01010C05003065310B300906
0355040613025553311E301C060355040A13154D6963726F736F667420436F72706F726174696F6E313630340603550403132D4D6963
726F736F66742052534120526F6F7420436572746966696361746520417574686F726974792032303137301E170D3139313231383232
353132325A170D3432303731383233303032335A3065310B3009060355040613025553311E301C060355040A13154D6963726F736F66
7420436F72706F726174696F6E313630340603550403132D4D6963726F736F66742052534120526F6F74204365727469666963617465
20417574686F72697479203230313730820222300D06092A864886F70D01010105000382020F003082020A0282020100CA5BBE94338C
299591160A95BD4762C189F39936DF4690C9A5ED786A6F479168F8276750331DA1A6FBE0E543A3840257015D9C4840825310BCBFC73B
6890B6822DE5F465D0CC6D19CC95F97BAC4A94AD0EDE4B431D8707921390808364353904FCE5E96CB3B61F50943865505C1746B9B685
B51CB517E8D6459DD8B226B0CAC4704AAE60A4DDB3D9ECFC3BD55772BC3FC8C9B2DE4B6BF8236C03C005BD95C7CD733B668064E31AAC
2EF94705F206B69B73F578335BC7A1FB272AA1B49A918C91D33A823E7640B4CD52615170283FC5C55AF2C98C49BB145B4DC8FF674D4C
1296ADF5FE78A89787D7FD5E2080DCA14B22FBD489ADBACE479747557B8F45C8672884951C6830EFEF49E0357B64E798B094DA4D853B
3E55C428AF57F39E13DB46279F1EA25E4483A4A5CAD513B34B3FC4E3C2E68661A45230B97A204F6F0F3853CB330C132B8FD69ABD2AC8
2DB11C7D4B51CA47D14827725D87EBD545E648659DAF5290BA5BA2186557129F68B9D4156B94C4692298F433E0EDF9518E4150C9344F
7690ACFC38C1D8E17BB9E3E394E14669CB0E0A506B13BAAC0F375AB712B590811E56AE572286D9C9D2D1D751E3AB3BC655FD1E0ED374
0AD1DAAAEA69B897288F48C407F852433AF4CA55352CB0A66AC09CF9F281E1126AC045D967B3CEFF23A2890A54D414B92AA8D7ECF9AB
CD255832798F905B9839C40806C1AC7F0E3D00A50203010001A3543052300E0603551D0F0101FF040403020186300F0603551D130101
FF040530030101FF301D0603551D0E0416041409CB597F86B2708F1AC339E3C0D9E9BFBB4DB223301006092B06010401823715010403
020100300D06092A864886F70D01010C05000382020100ACAF3E5DC21196898EA3E792D69715B813A2A6422E02CD16055927CA20E8BA
B8E81AEC4DA89756AE6543B18F009B52CD55CD53396D624C8B0D5B7C2E44BF83108FF3538280C34F3AC76E113FE6E3169184FB6D847F
3474AD89A7CEB9D7D79F846492BE95A1AD095333DDEE0AEA4A518E6F55ABBAB59446AE8C7FD8A2502565608046DB3304AE6CB5987454
25DC93E4F8E355153DB86DC30AA412C169856EDF64F15399E14A75209D950FE4D6DC03F15918E84789B2575A94B6A9D8172B1749E576
CBC156993A37B1FF692C919193E1DF4CA337764DA19FF86D1E1DD3FAECFBF4451D136DCFF759E52227722B86F357BB30ED244DDC7D56
BBA3B3F8347989C1E0F20261F7A6FC0FBB1C170BAE41D97CBD27A3FD2E3AD19394B1731D248BAF5B2089ADB7676679F53AC6A69633FE
5392C846B11191C6997F8FC9D66631204110872D0CD6C1AF3498CA6483FB1357D1C1F03C7A8CA5C1FD9521A071C193677112EA8F880A
691964992356FBAC2A2E70BE66C40C84EFE58BF39301F86A9093674BB268A3B5628FE93F8C7A3B5E0FE78CB8C67CEF37FD74E2C84F33
72E194396DBD12AFBE0C4E707C1B6F8DB332937344166DE8F4F7E095808F965D38A4F4ABDE0A308793D84D00716245274B3A42845B7F
65B76734522D9C166BAAA8D87BA3424C71C70CCA3E83E4A6EFB701305E51A379F57069A641440F86B02C91C63DEAAE0F84

--Trust certificates issued by Microsoft PKI root authority for Azure database.windows.net domains
DECLARE @CERTID int
SELECT @CERTID = CERT_ID('MicrosoftPKI')
EXEC sp_certificate_add_issuer @CERTID, N'*.database.windows.net'
END
ELSE
PRINT 'Certificate MicrosoftPKI already exists.'
GO

Then, import DigiCert PKI root-authority certificate on SQL Server:


-- Execute on SQL Server
-- Import DigiCert PKI root-authority certificate trusted by Azure to SQL Server, if not already present
IF NOT EXISTS (SELECT name FROM sys.certificates WHERE name = N'DigiCertPKI')
BEGIN
PRINT 'Creating DigiCertPKI certificate.'
CREATE CERTIFICATE [DigiCertPKI] FROM BINARY =
0x3082038E30820276A0030201020210033AF1E6A711A9A0BB2864B11D09FAE5300D06092A864886F70D01010B05003061310B300906
035504061302555331153013060355040A130C446967694365727420496E6331193017060355040B13107777772E6469676963657274
2E636F6D3120301E06035504031317446967694365727420476C6F62616C20526F6F74204732301E170D313330383031313230303030
5A170D3338303131353132303030305A3061310B300906035504061302555331153013060355040A130C446967694365727420496E63
31193017060355040B13107777772E64696769636572742E636F6D3120301E06035504031317446967694365727420476C6F62616C20
526F6F7420473230820122300D06092A864886F70D01010105000382010F003082010A0282010100BB37CD34DC7B6BC9B26890AD4A75
FF46BA210A088DF51954C9FB88DBF3AEF23A89913C7AE6AB061A6BCFAC2DE85E092444BA629A7ED6A3A87EE054752005AC50B79C631A
6C30DCDA1F19B1D71EDEFDD7E0CB948337AEEC1F434EDD7B2CD2BD2EA52FE4A9B8AD3AD499A4B625E99B6B00609260FF4F214918F767
90AB61069C8FF2BAE9B4E992326BB5F357E85D1BCD8C1DAB95049549F3352D96E3496DDD77E3FB494BB4AC5507A98F95B3B423BB4C6D
45F0F6A9B29530B4FD4C558C274A57147C829DCD7392D3164A060C8C50D18F1E09BE17A1E621CAFD83E510BC83A50AC46728F6731414
3D4676C387148921344DAF0F450CA649A1BABB9CC5B1338329850203010001A3423040300F0603551D130101FF040530030101FF300E
0603551D0F0101FF040403020186301D0603551D0E041604144E2254201895E6E36EE60FFAFAB912ED06178F39300D06092A864886F7
0D01010B05000382010100606728946F0E4863EB31DDEA6718D5897D3CC58B4A7FE9BEDB2B17DFB05F73772A3213398167428423F245
6735EC88BFF88FB0610C34A4AE204C84C6DBF835E176D9DFA642BBC74408867F3674245ADA6C0D145935BDF249DDB61FC9B30D472A3D
992FBB5CBBB5D420E1995F534615DB689BF0F330D53E31E28D849EE38ADADA963E3513A55FF0F970507047411157194EC08FAE06C495
13172F1B259F75F2B18E99A16F13B14171FE882AC84F102055D7F31445E5E044F4EA879532930EFE5346FA2C9DFF8B22B94BD90945A4
DEA4B89A58DD1B7D529F8E59438881A49E26D56FADDD0DC6377DED03921BE5775F76EE3C8DC45D565BA2D9666EB33537E532B6

--Trust certificates issued by DigiCert PKI root authority for Azure database.windows.net domains
DECLARE @CERTID int
SELECT @CERTID = CERT_ID('DigiCertPKI')
EXEC sp_certificate_add_issuer @CERTID, N'*.database.windows.net'
END
ELSE
PRINT 'Certificate DigiCertPKI already exists.'
GO

Finally, verify all created certificates by using the following dynamic management view (DMV):

-- Run on SQL Server


SELECT * FROM sys.certificates

Create a mirroring endpoint on SQL Server


If you don't have an existing availability group or mirroring endpoint on SQL Server, the next step is to create a mirroring endpoint on SQL Server and secure it with the SQL Server certificate you generated earlier. If you do have an existing availability group or mirroring endpoint, go straight to the next section, Alter an existing endpoint.
To verify that you don't have an existing database mirroring endpoint created, use the following script:

-- Run on SQL Server


-- View database mirroring endpoints on SQL Server
SELECT * FROM sys.database_mirroring_endpoints WHERE type_desc = 'DATABASE_MIRRORING'

If the preceding query doesn't show an existing database mirroring endpoint, run the following script on SQL Server to obtain the name of the SQL Server certificate generated earlier.

-- Run on SQL Server


-- Show the name and the public key of generated SQL Server certificate
USE MASTER
GO
DECLARE @sqlserver_certificate_name NVARCHAR(MAX) = N'Cert_' + @@servername + N'_endpoint'
SELECT @sqlserver_certificate_name as 'SQLServerCertName'

Save SQLServerCertName from the output, as you'll need it in the next step.
Use the below script to create a new database mirroring endpoint on port 5022 and secure the endpoint with
the SQL Server certificate. Replace:
<SQL_SERVER_CERTIFICATE> with the SQLServerCertName value obtained in the previous step.

-- Run on SQL Server


-- Create a connection endpoint listener on SQL Server
USE MASTER
CREATE ENDPOINT database_mirroring_endpoint
STATE=STARTED
AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (
ROLE=ALL,
AUTHENTICATION = CERTIFICATE [<SQL_SERVER_CERTIFICATE>],
ENCRYPTION = REQUIRED ALGORITHM AES
)
GO

Validate that the mirroring endpoint was created by running the following script on SQL Server:

-- Run on SQL Server


-- View database mirroring endpoints on SQL Server
SELECT
name, type_desc, state_desc, role_desc,
connection_auth_desc, is_encryption_enabled, encryption_algorithm_desc
FROM
sys.database_mirroring_endpoints

For a successfully created endpoint, the state_desc column should show STARTED.

A new mirroring endpoint was created with certificate authentication and AES encryption enabled.
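To also confirm the listener port (which must be reachable per the earlier network requirements), a minimal sketch:

-- Run on SQL Server

-- Show the port that the database mirroring endpoint listens on (expected: 5022)
SELECT name, type_desc, state_desc, port
FROM sys.tcp_endpoints
WHERE type_desc = 'DATABASE_MIRRORING'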
Alter an existing endpoint

NOTE
Skip this step if you've just created a new mirroring endpoint. Use this step only if you're using existing availability groups
with an existing database mirroring endpoint.

If you're using existing availability groups for the link, or if there's an existing database mirroring endpoint, first
validate that it satisfies the following mandatory conditions for the link:
Type must be DATABASE_MIRRORING .
Connection authentication must be CERTIFICATE .
Encryption must be enabled.
Encryption algorithm must be AES .

Run the following query on SQL Server to view details for an existing database mirroring endpoint:

-- Run on SQL Server


-- View database mirroring endpoints on SQL Server
SELECT
name, type_desc, state_desc, role_desc, connection_auth_desc,
is_encryption_enabled, encryption_algorithm_desc
FROM
sys.database_mirroring_endpoints
If the output shows that the existing DATABASE_MIRRORING endpoint connection_auth_desc isn't CERTIFICATE, or encryption_algorithm_desc isn't AES, the endpoint needs to be altered to meet the requirements.

On SQL Server, the same database mirroring endpoint is used for both availability groups and distributed
availability groups. If your connection_auth_desc endpoint is NTLM (Windows authentication) or KERBEROS , and
you need Windows authentication for an existing availability group, it's possible to alter the endpoint to use
multiple authentication methods by switching the authentication option to NEGOTIATE CERTIFICATE . This change
will allow the existing availability group to use Windows authentication, while using certificate authentication for
SQL Managed Instance.
Similarly, if encryption doesn't include AES and you need RC4 encryption, it's possible to alter the endpoint to
use both algorithms. For details about possible options for altering endpoints, see the documentation page for
sys.database_mirroring_endpoints.
The following script is an example of how to alter your existing database mirroring endpoint on SQL Server.
Replace:
<YourExistingEndpointName> with your existing endpoint name.
<SQLServerCertName> with the name of the generated SQL Server certificate (obtained in one of the earlier
steps above).
Depending on your specific configuration, you might need to customize the script further. You can also use
SELECT * FROM sys.certificates to get the name of the created certificate on SQL Server.

-- Run on SQL Server


-- Alter the existing database mirroring endpoint to use CERTIFICATE for authentication and AES for
encryption
USE MASTER
ALTER ENDPOINT [<YourExistingEndpointName>]
STATE=STARTED
AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (
ROLE=ALL,
AUTHENTICATION = WINDOWS NEGOTIATE CERTIFICATE [<SQLServerCertName>],
ENCRYPTION = REQUIRED ALGORITHM AES
)
GO

After you run the ALTER ENDPOINT query and set the dual authentication mode to Windows and certificate, use this query again on SQL Server to show details for the database mirroring endpoint:

-- Run on SQL Server


-- View database mirroring endpoints on SQL Server
SELECT
name, type_desc, state_desc, role_desc, connection_auth_desc,
is_encryption_enabled, encryption_algorithm_desc
FROM
sys.database_mirroring_endpoints

You've successfully modified your database mirroring endpoint for a SQL Managed Instance link.

Create an availability group on SQL Server


If you don't have an existing availability group, the next step is to create one on SQL Server. Create an availability
group with the following parameters for a link:
SQL Server name
Database name
A failover mode of MANUAL
A seeding mode of AUTOMATIC

First, find out your SQL Server name by running the following T-SQL statement:

-- Run on SQL Server


SELECT @@SERVERNAME AS SQLServerName

Then, use the following script to create the availability group on SQL Server. Replace:
<AGName> with the name of your availability group. For multiple databases, you'll need to create multiple
availability groups. A Managed Instance link requires one database per availability group. Consider naming
each availability group so that its name reflects the corresponding database - for example, AG_<db_name> .
<DatabaseName> with the name of database that you want to replicate.
<SQLServerName> with the name of your SQL Server instance obtained in the previous step.
<SQLServerIP> with the SQL Server IP address. You can use a resolvable SQL Server host machine name as
an alternative, but you need to make sure that the name is resolvable from the SQL Managed Instance virtual
network.

-- Run on SQL Server


-- Create the primary availability group on SQL Server
USE MASTER
CREATE AVAILABILITY GROUP [<AGName>]
WITH (CLUSTER_TYPE = NONE) -- <- Delete this line for SQL Server 2016 only. Leave as-is for all higher versions.
FOR database [<DatabaseName>]
REPLICA ON
N'<SQLServerName>' WITH
(
ENDPOINT_URL = 'TCP://<SQLServerIP>:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
GO

IMPORTANT
For SQL Server 2016, delete WITH (CLUSTER_TYPE = NONE) from the above T-SQL statement. Leave as-is for all higher
SQL Server versions.

Next, create distributed availability group on SQL Server. In the following code, replace:
<DAGName> with the name of your distributed availability group. When you're replicating several databases,
you need one availability group and one distributed availability group for each database. Consider naming
each item accordingly - for example, DAG_<db_name> .
<AGName> with the name of the availability group that you created in the previous step.
<SQLServerIP> with the IP address of SQL Server from the previous step. You can use a resolvable SQL
Server host machine name as an alternative, but make sure that the name is resolvable from the SQL
Managed Instance virtual network (requires configuration of custom Azure DNS for managed instance's
subnet).
<ManagedInstanceName> with the short name of your managed instance.
<ManagedInstanceFQDN> with the fully qualified domain name of your managed instance.
-- Run on SQL Server
-- Create a distributed availability group for the availability group and database
-- ManagedInstanceName example: 'sqlmi1'
-- ManagedInstanceFQDN example: 'sqlmi1.73d19f36a420a.database.windows.net'
USE MASTER
CREATE AVAILABILITY GROUP [<DAGName>]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
N'<AGName>' WITH
(
LISTENER_URL = 'TCP://<SQLServerIP>:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC,
SESSION_TIMEOUT = 20
),
N'<ManagedInstanceName>' WITH
(
LISTENER_URL = 'tcp://<ManagedInstanceFQDN>:5022;Server=[<ManagedInstanceName>]',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
GO

Verify availability groups


Use the following script to list all availability groups and distributed availability groups on the SQL Server
instance. At this point, the state of your availability group needs to be connected , and the state of your
distributed availability groups needs to be disconnected . The state of the distributed availability group will
move to connected only when it has been joined with SQL Managed Instance.

-- Run on SQL Server


-- This shows that the availability group and distributed availability group have been created on SQL Server.
SELECT * FROM sys.availability_groups
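
To also see whether a group is distributed and the connection state of its replicas, the following minimal sketch combines the availability group catalog views with the replica state DMV:

-- Run on SQL Server

-- Show availability groups, whether they're distributed, and replica connection state
SELECT
    ag.name, ag.is_distributed, ar.replica_server_name,
    ars.connected_state_desc
FROM sys.availability_groups ag
JOIN sys.availability_replicas ar ON ag.group_id = ar.group_id
LEFT JOIN sys.dm_hadr_availability_replica_states ars ON ars.replica_id = ar.replica_id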

Alternatively, you can use SSMS Object Explorer to find availability groups and distributed availability groups.
Expand the Always On High Availability folder and then the Availability Groups folder.

Create a link
The final step of the setup process is to create the link.
For simplicity of the process, sign in to the Azure portal and run the following PowerShell script from Azure
Cloud Shell. Replace:
<ManagedInstanceName> with the short name of your managed instance.
<AGName> with the name of the availability group created on SQL Server.
<DAGName> with the name of the distributed availability group created on SQL Server.
<DatabaseName> with the database replicated in the availability group on SQL Server.
<SQLServerIP> with the IP address of your SQL Server. The provided IP address must be accessible by
managed instance.
# Run in Azure Cloud Shell
# =============================================================================
# POWERSHELL SCRIPT FOR CREATING MANAGED INSTANCE LINK
# Instructs Managed Instance to join distributed availability group on SQL Server
# ===== Enter user variables here ====

# Enter your managed instance name – for example, "sqlmi1"


$ManagedInstanceName = "<ManagedInstanceName>"

# Enter the availability group name that was created on SQL Server
$AGName = "<AGName>"

# Enter the distributed availability group name that was created on SQL Server
$DAGName = "<DAGName>"

# Enter the database name that was placed in the availability group for replication
$DatabaseName = "<DatabaseName>"

# Enter the SQL Server IP


$SQLServerIP = "<SQLServerIP>"

# ==== Do not customize the below ====

# Find out the resource group name


$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName

# Build properly formatted connection endpoint


$SourceIP = "TCP://" + $SQLServerIP + ":5022"

# Create link on managed instance. Join distributed availability group on SQL Server.
New-AzSqlInstanceLink -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName -Name $DAGName `
    -PrimaryAvailabilityGroupName $AGName -SecondaryAvailabilityGroupName $ManagedInstanceName `
    -TargetDatabase $DatabaseName -SourceEndpoint $SourceIP

The result of this operation is a time stamp of the successful execution of the request to create a link.
If needed, to see all links on a managed instance, use the Get-AzSqlInstanceLink PowerShell command in Azure Cloud Shell. To remove an existing link, use the Remove-AzSqlInstanceLink PowerShell command in Azure Cloud Shell.

NOTE
The link feature supports one database per link. To replicate multiple databases on an instance, create a link for each individual database. For example, to replicate 10 databases to SQL Managed Instance, create 10 individual links.

Consider the following:


The link currently supports replicating one database per availability group. You can replicate multiple
databases to SQL Managed Instance by setting up multiple links.
Collation between SQL Server and SQL Managed Instance should be the same. A mismatch in collation could
cause a mismatch in server name casing and prevent a successful connection from SQL Server to SQL
Managed Instance.
Error 1475 indicates that you need to start a new backup chain by creating a full backup without the COPY_ONLY option (see the sketch after this list).
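For that last point, the following is a minimal sketch of starting a new backup chain; the database name and path are placeholders to replace with your own values:

-- Run on SQL Server

-- Start a new backup chain with a regular full backup (not COPY_ONLY)
BACKUP DATABASE [<DatabaseName>] TO DISK = N'<DiskPath>'
GO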

Verify the link


To verify that the connection has been made between SQL Managed Instance and SQL Server, run the following query on SQL Server. The connection won't be instantaneous. It can take up to a minute for the DMV to start
showing a successful connection. Keep refreshing the DMV until the connection appears as CONNECTED for the
SQL Managed Instance replica.

-- Run on SQL Server


SELECT
r.replica_server_name AS [Replica],
r.endpoint_url AS [Endpoint],
rs.connected_state_desc AS [Connected state],
rs.last_connect_error_description AS [Last connection error],
rs.last_connect_error_number AS [Last connection error No],
rs.last_connect_error_timestamp AS [Last error timestamp]
FROM
sys.dm_hadr_availability_replica_states rs
JOIN sys.availability_replicas r
ON rs.replica_id = r.replica_id

After the connection is established, the Managed Instance Databases view in SSMS initially shows the
replicated databases in a Restoring state as the initial seeding phase moves and restores the full backup of the
database. After the database is restored, replication has to catch up to bring the two databases to a synchronized
state. The database will no longer be in Restoring after the initial seeding finishes. Seeding small databases
might be fast enough that you won't see the initial Restoring state in SSMS.

IMPORTANT
The link won't work unless network connectivity exists between SQL Server and SQL Managed Instance. To
troubleshoot network connectivity, follow the steps in Test bidirectional network connectivity.
Take regular backups of the log file on SQL Server. If the used log space reaches 100 percent, replication to SQL
Managed Instance stops until space use is reduced. We highly recommend that you automate log backups by setting
up a daily job. For details, see Back up log files on SQL Server.
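The following is a minimal sketch of such a log backup; the database name and path are placeholders, and in practice you would schedule this as a recurring SQL Agent job:

-- Run on SQL Server

-- Back up the transaction log to keep log space in check while the link is replicating
BACKUP LOG [<DatabaseName>] TO DISK = N'<LogBackupPath>'
GO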

Next steps
For more information on the link feature, see the following resources:
Managed Instance link – connecting SQL Server to Azure reimagined
Prepare your environment for a Managed Instance link
Use a Managed Instance link with scripts to migrate a database
Use a Managed Instance link via SSMS to replicate a database
Use a Managed Instance link via SSMS to migrate a database
Fail over a database by using the link in SSMS -
Azure SQL Managed Instance
7/12/2022 • 3 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article teaches you how to fail over a database from SQL Server to Azure SQL Managed Instance by using
the link feature in SQL Server Management Studio (SSMS).
Failing over your database from SQL Server to SQL Managed Instance breaks the link between the two
databases. It stops replication and leaves both databases in an independent state, ready for individual read/write
workloads.

NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview.

Prerequisites
To fail over your databases to SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
SQL Server Management Studio v18.12 or later.
An environment that's prepared for replication.
Setup of the link feature and replication of your database to your managed instance in Azure.

Fail over a database


In the following steps, you use the Failover database to Managed Instance wizard in SSMS to fail over your
database from SQL Server to SQL Managed Instance. The wizard takes you through failing over your database,
breaking the link between the two instances in the process.
Caution

If you're performing a planned manual failover, stop the workload on the source SQL Server database to allow the replicated database on SQL Managed Instance to completely catch up, so that you can fail over without data loss. If you're performing a forced failover, you might lose data.
1. Open SSMS and connect to your SQL Server instance.
2. In Object Explorer, right-click your database, hover over Azure SQL Managed Instance link , and select
Failover database to open the Failover database to Managed Instance wizard.
3. On the Introduction page of the Failover database to Managed Instance wizard, select Next .

4. On the Log in to Azure page, select Sign-in to provide your credentials and sign in to your Azure
account. Select the subscription that's hosting SQL Managed Instance from the dropdown list, and then
select Next .

5. On the Failover Type page, choose the type of failover you're performing. Select the box to confirm that
you've stopped the workload for a planned failover, or you understand that you might lose data if using a
forced failover. Select Next .

6. On the Clean-up (optional) page, choose to drop the availability group if you created it solely for the
purpose of migrating your database to Azure and you no longer need it. If you want to keep the
availability group, leave the boxes cleared. Select Next .
7. On the Summary page, review the actions that will be performed for your failover. Optionally, select Script to create a script that you can run at a later time. When you're ready to proceed with the failover, select Finish.

8. The Executing actions page displays the progress of each action.


9. After all steps finish, the Results page shows check marks next to the successfully completed actions. You
can now close the window.

On successful execution of the failover process, the link is dropped and no longer exists. The source SQL Server
database and the target SQL Managed Instance database can both execute a read/write workload. They're
completely independent. Repoint your application connection string to managed instance to complete the
migration process.

IMPORTANT
On successful failover, manually repoint your application connection strings to the managed instance FQDN to continue running in Azure and to complete the migration process.

View the failed-over database


You can validate that the link has been dropped by reviewing the database on SQL Server.
Then, review the database on SQL Managed Instance.

Next steps
To learn more, see Link feature for Azure SQL Managed Instance.
Failover (migrate) a database with a link via T-SQL
and PowerShell scripts - Azure SQL Managed
Instance
7/12/2022 • 8 minutes to read

APPLIES TO: Azure SQL Managed Instance


This article teaches you how to use Transact-SQL (T-SQL) and PowerShell scripts and a Managed Instance link to
fail over (migrate) your database from SQL Server to SQL Managed Instance.

NOTE
The link is a feature of Azure SQL Managed Instance and is currently in preview. You can also use a SQL Server Management Studio (SSMS) wizard to fail over a database with the link.

Prerequisites
To replicate your databases to SQL Managed Instance, you need the following prerequisites:
An active Azure subscription. If you don't have one, create a free account.
Supported version of SQL Server with required service update installed.
Azure SQL Managed Instance. Get started if you don't have it.
PowerShell module Az.SQL 3.9.0, or higher
A properly prepared environment.

Database failover
Database failover from SQL Server to SQL Managed Instance breaks the link between the two databases.
Failover stops replication and leaves both databases in an independent state, ready for individual read/write
workloads.
To start migrating your database to SQL Managed Instance, first stop any application workloads on SQL Server
during your maintenance hours. This enables SQL Managed Instance to catch up with database replication and
migrate to Azure while mitigating data loss.
While the primary database is a part of an Always On availability group, you can't set it to read-only mode. You
need to ensure that your applications aren't committing transactions to SQL Server prior to the failover.

Switch the replication mode


The replication between SQL Server and SQL Managed Instance is asynchronous by default. Before you migrate
your database to Azure, switch the link to synchronous mode. Synchronous replication across large network
distances might slow down transactions on the primary SQL Server instance.
Switching from async to sync mode requires a replication mode change on SQL Managed Instance and SQL
Server.
Switch replication mode (SQL Managed Instance )
Use PowerShell with the installed Az.Sql module 3.9.0, or higher. Or preferably, use Azure Cloud Shell online
from the web browser to run the commands, because it's always updated with the latest module versions.
First, ensure that you're logged in to Azure and that you've selected the subscription where your managed
instance is hosted. Selecting the proper subscription is especially important if you have more than one Azure
subscription on your account. Replace:
<SubscriptionID> with your Azure subscription ID.

# Run in Azure Cloud Shell (select PowerShell console)

# Enter your Azure subscription ID


$SubscriptionID = "<SubscriptionID>"

# Login to Azure and select subscription ID


if ((Get-AzContext ) -eq $null)
{
echo "Logging to Azure subscription"
Login-AzAccount
}
Select-AzSubscription -SubscriptionName $SubscriptionID

Ensure that you know the name of the link you would like to fail over. Use the below script in Azure Cloud Shell
to list all active links on managed instance. Replace:
<ManagedInstanceName> with the short name of your managed instance.

# Run in Azure Cloud Shell


# =============================================================================
# POWERSHELL SCRIPT TO LIST ALL LINKS ON MANAGED INSTANCE
# ===== Enter user variables here ====

# Enter your managed instance name – for example, "sqlmi1"


$ManagedInstanceName = "<ManagedInstanceName>"

# ==== Do not customize the below ====

# Find out the resource group name


$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName

# List all links on the specified managed instance


Get-AzSqlInstanceLink -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName

From the output of the above script, record the Name property of the link you'd like to fail over.
Then, switch the replication mode from async to sync on managed instance for the link identified by running the
below script in Azure Cloud Shell. Replace:
<ManagedInstanceName> with the short name of your managed instance.
<DAGName> with the name of the link you found out on the previous step (the Name property from the
previous step).
# Run in Azure Cloud Shell
# =============================================================================
# POWERSHELL SCRIPT TO SWITCH LINK REPLICATION MODE (ASYNC\SYNC)
# ===== Enter user variables here ====

# Enter your managed instance name – for example, "sqlmi1"


$ManagedInstanceName = "<ManagedInstanceName>"
$LinkName = "<DAGName>"

# ==== Do not customize the below ====

# Find out the resource group name


$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName

# Update replication mode of the specified link


Update-AzSqlInstanceLink -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName `
    -Name $LinkName -ReplicationMode "Sync"

Successful execution of this command displays a summary of the operation, with the ReplicationMode property shown as Sync.

If you need to revert this operation, run the same script again, replacing Sync with Async in the -ReplicationMode parameter.

Switch replication mode (SQL Server)


Use the following T-SQL script on SQL Server to change the replication mode of the distributed availability
group on SQL Server from async to sync. Replace:
<DAGName> with the name of the distributed availability group (used to create the link).
<AGName> with the name of the availability group created on SQL Server (used to create the link).
<ManagedInstanceName> with the name of your managed instance.

-- Run on SQL Server


-- Sets the distributed availability group to a synchronous commit.
-- ManagedInstanceName example: 'sqlmi1'
USE master
GO
ALTER AVAILABILITY GROUP [<DAGName>]
MODIFY
AVAILABILITY GROUP ON
'<AGName>' WITH
(AVAILABILITY_MODE = SYNCHRONOUS_COMMIT),
'<ManagedInstanceName>' WITH
(AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

To confirm that you've changed the link's replication mode successfully, use the following dynamic management
view. The results should indicate the SYNCHRONOUS_COMMIT state.
-- Run on SQL Server
-- Verifies the state of the distributed availability group
SELECT
ag.name, ag.is_distributed, ar.replica_server_name,
ar.availability_mode_desc, ars.connected_state_desc, ars.role_desc,
ars.operational_state_desc, ars.synchronization_health_desc
FROM
sys.availability_groups ag
join sys.availability_replicas ar
on ag.group_id=ar.group_id
left join sys.dm_hadr_availability_replica_states ars
on ars.replica_id=ar.replica_id
WHERE
ag.is_distributed=1

Now that you've switched both SQL Managed Instance and SQL Server to sync mode, the replication between
the two entities is synchronous. If you need to reverse this state, follow the same steps and set the async state
for both SQL Server and SQL Managed Instance.

Check LSN values on both SQL Server and SQL Managed Instance
To complete the migration, confirm that replication has finished. For this, ensure that the log sequence numbers
(LSNs) indicating the log records written for both SQL Server and SQL Managed Instance are the same.
Initially, it's expected that the SQL Server LSN will be higher than the SQL Managed Instance LSN. Network
latency might cause SQL Managed Instance to lag somewhat behind the primary SQL Server instance. Because
the workload has been stopped on SQL Server, you should expect the LSNs to match and stop changing after
some time.
Use the following T-SQL query on SQL Server to read the LSN of the last recorded transaction log. Replace:
<DatabaseName> with your database name and look for the last hardened LSN number.

-- Run on SQL Server


-- Obtain the last hardened LSN for the database on SQL Server.
SELECT
ag.name AS [Replication group],
db.name AS [Database name],
drs.database_id AS [Database ID],
drs.group_id,
drs.replica_id,
drs.synchronization_state_desc AS [Sync state],
drs.end_of_log_lsn AS [End of log LSN],
drs.last_hardened_lsn AS [Last hardened LSN]
FROM
sys.dm_hadr_database_replica_states drs
inner join sys.databases db on db.database_id = drs.database_id
inner join sys.availability_groups ag on drs.group_id = ag.group_id
WHERE
ag.is_distributed = 1 and db.name = '<DatabaseName>'

Use the following T-SQL query on SQL Managed Instance to read the last hardened LSN for your database.
Replace <DatabaseName> with your database name.
This query works on a General Purpose managed instance as is. For a Business Critical managed instance, you
need to uncomment the AND drs.is_primary_replica = 1 filter at the end of the script. On Business Critical, this filter
ensures that only primary replica details are read.
-- Run on a managed instance
-- Obtain the LSN for the database on SQL Managed Instance.
SELECT
db.name AS [Database name],
drs.database_id AS [Database ID],
drs.group_id,
drs.replica_id,
drs.synchronization_state_desc AS [Sync state],
drs.end_of_log_lsn AS [End of log LSN],
drs.last_hardened_lsn AS [Last hardened LSN]
FROM
sys.dm_hadr_database_replica_states drs
inner join sys.databases db on db.database_id = drs.database_id
WHERE
db.name = '<DatabaseName>'
-- for Business Critical, add the following as well
-- AND drs.is_primary_replica = 1

Alternatively, you can use the Get-AzSqlInstanceLink PowerShell command in Azure Cloud Shell to fetch the
LastHardenedLsn property for your link on the managed instance, which provides the same information as
the above T-SQL query.
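For example, a minimal sketch of reading that property, reusing the variables defined earlier and relying on the LastHardenedLsn property described above:

# Run in Azure Cloud Shell

# Read the last hardened LSN reported for the link on the managed instance
$link = Get-AzSqlInstanceLink -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName -Name $LinkName
$link.LastHardenedLsn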
Verify once again that your workload is stopped on SQL Server. Check that LSNs on both SQL Server and SQL
Managed Instance match, and that they remain matched and unchanged for some time. Stable LSNs on both
instances indicate that the tail log has been replicated to SQL Managed Instance and the workload is effectively
stopped.

Start database failover and migration to Azure


Run the following script in Azure Cloud Shell to finalize your migration to Azure. The script breaks the link and ends
replication to SQL Managed Instance. The replicated database becomes read/write on the managed instance.
Replace:
<ManagedInstanceName> with the name of your managed instance.
<DAGName> with the name of the link you're failing over (the Name property output by the Get-AzSqlInstanceLink
command you ran earlier).

# Run in Azure Cloud Shell


# =============================================================================
# POWERSHELL SCRIPT TO FAILOVER AND MIGRATE DATABASE TO AZURE
# ===== Enter user variables here ====

# Enter your managed instance name – for example, "sqlmi1"


$ManagedInstanceName = "<ManagedInstanceName>"
$LinkName = "<DAGName>"

# ==== Do not customize the below ====

# Find out the resource group name


$ResourceGroup = (Get-AzSqlInstance -InstanceName $ManagedInstanceName).ResourceGroupName

# Failover the specified link


Remove-AzSqlInstanceLink -ResourceGroupName $ResourceGroup -InstanceName $ManagedInstanceName -Name $LinkName -Force

On successful execution of the failover process, the link is dropped and no longer exists. The source SQL Server
database and the target SQL Managed Instance database can both execute read/write workloads; they're
completely independent. Repoint your application connection string to the managed instance to complete the
migration process.
IMPORTANT
On successful failover, manually repoint your application connection strings to the managed instance FQDN to continue
running in Azure and to complete the migration process.
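As a quick sanity check after you repoint the connection string, you can open a test connection to the managed instance from PowerShell. The following is a minimal sketch only; the FQDN, database name, and SQL authentication credentials are placeholders you need to replace, and your application may use a different authentication method:

# Run from a machine that has network connectivity to the managed instance
$connectionString = "Server=tcp:<ManagedInstanceFQDN>,1433;Initial Catalog=<DatabaseName>;User ID=<UserName>;Password=<Password>;Encrypt=True;"
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$connection.Open()          # throws if the instance can't be reached or the login fails
Write-Host "Connection state: $($connection.State)"
$connection.Close()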

Clean up availability groups


After you break the link and migrate a database to Azure SQL Managed Instance, consider cleaning up the
availability group and distributed availability group resources from SQL Server if they're no longer necessary.
In the following code, replace:
<DAGName> with the name of the distributed availability group on SQL Server (used to create the link).
<AGName> with the name of the availability group on SQL Server (used to create the link).

-- Run on SQL Server


USE MASTER
GO
DROP AVAILABILITY GROUP <DAGName>
GO
DROP AVAILABILITY GROUP <AGName>
GO

With this step, you've finished the migration of the database from SQL Server to SQL Managed Instance.

Next steps
For more information on the link feature, see the following resources:
Managed Instance link – connecting SQL Server to Azure reimagined
Prepare your environment for Managed Instance link
Use a Managed Instance link with scripts to replicate a database
Use a Managed Instance link via SSMS to replicate a database
Use a Managed Instance link via SSMS to migrate a database
Best practices with link feature for Azure SQL
Managed Instance (preview)
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article outlines best practices when using the link feature for Azure SQL Managed Instance. The link feature
for Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance,
providing near real-time data replication to the cloud.

NOTE
The link feature for Azure SQL Managed Instance is currently in preview.

Take log backups regularly


The link feature replicates data using the Distributed availability groups concept based on the Always On
availability groups technology stack. Data replication with distributed availability groups is based on replicating
transaction log records. No transaction log records can be truncated from the database on the primary instance
until they're replicated to the database on the secondary instance. If transaction log record replication is slow or
blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed
depends on the intensity of workload and the network speed. If there's a prolonged network connection outage
and a heavy workload on the primary instance, the log file may consume all available storage space.
To minimize the risk of running out of space on your primary instance due to log file growth, make sure to take
database log backups regularly. Taking log backups regularly makes your database more resilient to
unplanned log growth events. Consider scheduling daily log backup tasks using a SQL Server Agent job.
You can use a Transact-SQL (T-SQL) script to back up the log file, such as the sample provided in this section.
Replace the placeholders in the sample script with name of your database, name and path of the backup file, and
the description.
To back up your transaction log, use the following sample Transact-SQL (T-SQL) script on SQL Server:

-- Execute on SQL Server


USE [<DatabaseName>]
--Set current database inside job step or script
--Check that you are executing the script on the primary instance
if (SELECT role
FROM sys.dm_hadr_availability_replica_states AS a
JOIN sys.availability_replicas AS b
ON b.replica_id = a.replica_id
WHERE b.replica_server_name = @@SERVERNAME) = 1
BEGIN
-- Take log backup
BACKUP LOG [<DatabaseName>]
TO DISK = N'<DiskPathandFileName>'
WITH NOFORMAT, NOINIT,
NAME = N'<Description>', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 1
END

Use the following Transact-SQL (T-SQL) command to check the log space used by your database on SQL Server:

-- Execute on SQL Server


DBCC SQLPERF(LOGSPACE);

For the sample database tpcc, the query output looks like the following example:

In this example, the database has used 76% of the available log, with an absolute log file size of approximately
27 GB (27,971 MB). The threshold for action varies based on your workload, but such a value typically indicates
that you should take a log backup to truncate the log file and free up some space.
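If you want to script this check, the following is a minimal PowerShell sketch that reads log space usage through the sys.dm_db_log_space_usage view and warns above an example threshold. It assumes the SqlServer PowerShell module (for Invoke-Sqlcmd) and connectivity to your SQL Server instance; the server name, database name, and 60 percent threshold are placeholders:

# Requires the SqlServer PowerShell module: Install-Module SqlServer
$query = "SELECT DB_NAME() AS database_name, used_log_space_in_percent FROM sys.dm_db_log_space_usage;"
$result = Invoke-Sqlcmd -ServerInstance "<SqlServerName>" -Database "<DatabaseName>" -Query $query

# Warn when the log is more than 60 percent full (example threshold only)
if ($result.used_log_space_in_percent -gt 60) {
    Write-Warning ("Log for database {0} is {1:N1}% full - consider taking a log backup." -f $result.database_name, $result.used_log_space_in_percent)
}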

Add startup trace flags


There are two trace flags (-T1800 and -T9567) that, when added as startup parameters, can optimize the
performance of data replication through the link. See Enable startup trace flags to learn more.

Next steps
To get started with the link feature, prepare your environment for replication.
For more information on the link feature, see the following articles:
Managed Instance link – overview
Managed Instance link – connecting SQL Server to Azure reimagined
Restore a database in Azure SQL Managed Instance
to a previous point in time
7/12/2022 • 6 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


Use point-in-time restore (PITR) to create a database as a copy of another database from some time in the past.
This article describes how to do a point-in-time restore of a database in Azure SQL Managed Instance.
Point-in-time restore is useful in recovery scenarios, such as incidents caused by errors, incorrectly loaded data,
or deletion of crucial data. You can also use it simply for testing or auditing. Backup files are kept for 7 to 35
days, depending on your database settings.
Point-in-time restore can restore a database:
from an existing database.
from a deleted database.
to the same SQL Managed Instance, or to another SQL Managed Instance.

Limitations
Point-in-time restore to SQL Managed Instance has the following limitations:
When you're restoring from one instance of SQL Managed Instance to another, both instances must be in the
same subscription and region. Cross-region and cross-subscription restore aren't currently supported.
Point-in-time restore of a whole SQL Managed Instance is not possible. This article explains only what's
possible: point-in-time restore of a database that's hosted on SQL Managed Instance.

WARNING
Be aware of the storage size of your SQL Managed Instance. Depending on size of the data to be restored, you might run
out of instance storage. If there isn't enough space for the restored data, use a different approach.

The following table shows point-in-time restore scenarios for SQL Managed Instance:

                  Restore existing DB to      Restore existing DB to     Restore dropped DB to      Restore dropped DB to
                  the same instance of        another SQL Managed        the same SQL Managed       another SQL Managed
                  SQL Managed Instance        Instance                   Instance                   Instance

Azure portal      Yes                         Yes                        Yes                        Yes

Azure CLI         Yes                         Yes                        No                         No

PowerShell        Yes                         Yes                        Yes                        Yes

Restore an existing database


Restore an existing database to the same SQL Managed Instance using the Azure portal, PowerShell, or the
Azure CLI. To restore a database to another SQL Managed Instance, use PowerShell or the Azure CLI so you can
specify the properties for the target SQL Managed Instance and resource group. If you don't specify these
parameters, the database will be restored to the existing SQL Managed Instance by default. The Azure portal
doesn't currently support restoring to another SQL Managed Instance.
Portal
PowerShell
Azure CLI

1. Sign in to the Azure portal.


2. Go to your SQL Managed Instance and select the database that you want to restore.
3. Select Restore on the database page:

4. On the Restore page, select the point for the date and time that you want to restore the database to.
5. Select Confirm to restore your database. This action starts the restore process, which creates a new
database and populates it with data from the original database at the specified point in time. For more
information about the recovery process, see Recovery time.
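If you prefer to script the restore instead of using the portal, the following PowerShell sketch shows a point-in-time restore of an existing database to the same managed instance. It's a minimal example with placeholder names, and it assumes the Restore-AzSqlInstanceDatabase cmdlet from the Az.Sql module; adjust the point in time to a value within your backup retention period:

# Run in Azure Cloud Shell or with the Az.Sql module installed
$pointInTime = "2022-07-01T08:51:39Z"   # example UTC timestamp, replace with your own

Restore-AzSqlInstanceDatabase -FromPointInTimeBackup `
    -ResourceGroupName "<ResourceGroupName>" `
    -InstanceName "<ManagedInstanceName>" `
    -Name "<SourceDatabaseName>" `
    -PointInTime $pointInTime `
    -TargetInstanceDatabaseName "<TargetDatabaseName>"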

Restore a deleted database


You can restore a deleted database by using the Azure portal or PowerShell. To restore a deleted database to
the same instance, use either the Azure portal or PowerShell. To restore a deleted database to another instance,
use PowerShell.
Portal
To recover a managed database using the Azure portal, open the SQL Managed Instance overview page, and
select Deleted databases . Choose a deleted database that you want to restore, and type the name for the new
database that will be created with data restored from the backup.
PowerShell
To restore a database to the same instance, update the parameter values and then run the following PowerShell
command:

$subscriptionId = "<Subscription ID>"


Get-AzSubscription -SubscriptionId $subscriptionId
Select-AzSubscription -SubscriptionId $subscriptionId

$resourceGroupName = "<Resource group name>"


$managedInstanceName = "<SQL Managed Instance name>"
$deletedDatabaseName = "<Source database name>"
$targetDatabaseName = "<target database name>"

$deletedDatabase = Get-AzSqlDeletedInstanceDatabaseBackup -ResourceGroupName $resourceGroupName `
    -InstanceName $managedInstanceName -DatabaseName $deletedDatabaseName

Restore-AzSqlInstanceDatabase -FromPointInTimeBackup -Name $deletedDatabase.Name `
    -InstanceName $deletedDatabase.ManagedInstanceName `
    -ResourceGroupName $deletedDatabase.ResourceGroupName `
    -DeletionDate $deletedDatabase.DeletionDate `
    -PointInTime "<UTCDateTime>" `
    -TargetInstanceDatabaseName $targetDatabaseName

To restore the database to another SQL Managed Instance, also specify the names of the target resource group
and target SQL Managed Instance:

$targetResourceGroupName = "<Resource group of target SQL Managed Instance>"


$targetInstanceName = "<Target SQL Managed Instance name>"

Restore-AzSqlInstanceDatabase -FromPointInTimeBackup -Name $deletedDatabase.Name `
    -InstanceName $deletedDatabase.ManagedInstanceName `
    -ResourceGroupName $deletedDatabase.ResourceGroupName `
    -DeletionDate $deletedDatabase.DeletionDate `
    -PointInTime "<UTCDateTime>" `
    -TargetInstanceDatabaseName $targetDatabaseName `
    -TargetResourceGroupName $targetResourceGroupName `
    -TargetInstanceName $targetInstanceName

Overwrite an existing database


To overwrite an existing database, you must:
1. Drop the existing database that you want to overwrite.
2. Rename the point-in-time-restored database to the name of the database that you dropped.
Drop the original database
You can drop the database by using the Azure portal, PowerShell, or the Azure CLI.
You can also drop the database by connecting to the SQL Managed Instance directly, starting SQL Server
Management Studio (SSMS), and then running the following Transact-SQL (T-SQL) command:

DROP DATABASE WorldWideImporters;

Use one of the following methods to connect to your database in the SQL Managed Instance:
SSMS/Azure Data Studio via an Azure virtual machine
Point-to-site
Public endpoint

Portal
PowerShell
Azure CLI

In the Azure portal, select the database from the SQL Managed Instance, and then select Delete .

Alter the new database name to match the original database name
Connect directly to the SQL Managed Instance and start SQL Server Management Studio. Then, run the
following Transact-SQL (T-SQL) query. The query will change the name of the restored database to that of the
dropped database that you intend to overwrite.

ALTER DATABASE WorldWideImportersPITR MODIFY NAME = WorldWideImporters;

Use one of the following methods to connect to your database in SQL Managed Instance:
Azure virtual machine
Point-to-site
Public endpoint

Next steps
Learn about automated backups.
Monitor backup activity for Azure SQL Managed
Instance
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article teaches you to configure extended event (XEvent) sessions to monitor backup activity for Azure SQL
Managed Instance.

Overview
Azure SQL Managed Instance emits events (also known as Extended Events or XEvents) during backup activity
for the purpose of reporting. Configure an XEvent session to track information such as backup status, backup
type, size, time, and location within the msdb database. This information can be integrated with backup
monitoring software and also used for the purpose of enterprise audits.
Enterprise audits may require proof of successful backups, the time of the backup, and the duration of the backup.

Configure XEvent session


Use the extended event backup_restore_progress_trace to record the progress of your SQL Managed Instance
backup. Modify the XEvent sessions as needed to track the information you're interested in for your business.
These T-SQL snippets store the XEvent sessions in the ring buffer, but it's also possible to write to Azure Blob
Storage. XEvent sessions storing data in the ring buffer have a limit of about 1,000 messages, so they should only be
used to track recent activity. Additionally, ring buffer data is lost upon failover. As such, for a historical record of
backups, write to an event file instead.
Simple tracking
Configure a simple XEvent session to capture simple events about complete full backups. This script collects the
name of the database, the total number of bytes processed, and the time the backup completed.
Use Transact-SQL (T-SQL) to configure the simple XEvent session:

CREATE EVENT SESSION [Simple backup trace] ON SERVER


ADD EVENT sqlserver.backup_restore_progress_trace(
WHERE operation_type = 0
AND trace_message LIKE '%100 percent%')
ADD TARGET package0.ring_buffer
WITH(STARTUP_STATE=ON)
GO
ALTER EVENT SESSION [Simple backup trace] ON SERVER
STATE = start;

Verbose tracking
Configure a verbose XEvent session to track greater detail about your backup activity. This script captures the start
and finish of full, differential, and log backups. Because this script is more verbose, it fills up the ring buffer
faster, so entries may recycle sooner than with the simple script.
Use Transact-SQL (T-SQL) to configure the verbose XEvent session:
CREATE EVENT SESSION [Verbose backup trace] ON SERVER
ADD EVENT sqlserver.backup_restore_progress_trace(
WHERE (
[operation_type]=(0) AND (
[trace_message] like '%100 percent%' OR
[trace_message] like '%BACKUP DATABASE%' OR [trace_message] like '%BACKUP LOG%'))
)
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,STARTUP_STATE=ON)

ALTER EVENT SESSION [Verbose backup trace] ON SERVER


STATE = start;

Monitor backup progress


After the XEvent session is created, you can use Transact-SQL (T-SQL) to query the ring buffer results and monitor
the progress of the backup. Once the XEvent session starts, it collects all backup events, so entries are added to the
session roughly every 5-10 minutes.
Simple tracking
The following Transact-SQL (T-SQL) code queries the simple XEvent session and returns the name of the
database, the total number of bytes processed, and the time the backup completed:

WITH
a AS (SELECT xed = CAST(xet.target_data AS xml)
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'Simple backup trace'),
b AS(SELECT
d.n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
ISNULL(db.name, d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)')) AS database_name,
d.n.value('(data[@name="trace_message"]/value)[1]', 'varchar(4000)') AS trace_message
FROM a
CROSS APPLY xed.nodes('/RingBufferTarget/event') d(n)
LEFT JOIN master.sys.databases db
ON db.physical_database_name = d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)'))
SELECT * FROM b

The following screenshot shows an example of the output of the above query:
In this example, five databases were automatically backed up over the course of 2 hours and 30 minutes, and
there are 130 entries in the XEvent session.
Verbose tracking
The following Transact-SQL (T-SQL) code queries the verbose XEvent session and returns the name of the
database, as well as the start and finish of full, differential, and log backups.

WITH
a AS (SELECT xed = CAST(xet.target_data AS xml)
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'Verbose backup trace'),
b AS(SELECT
d.n.value('(@timestamp)[1]', 'datetime2') AS [timestamp],
ISNULL(db.name, d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)')) AS database_name,
d.n.value('(data[@name="trace_message"]/value)[1]', 'varchar(4000)') AS trace_message
FROM a
CROSS APPLY xed.nodes('/RingBufferTarget/event') d(n)
LEFT JOIN master.sys.databases db
ON db.physical_database_name = d.n.value('(data[@name="database_name"]/value)[1]', 'varchar(200)'))
SELECT * FROM b

The following screenshot shows an example of a full backup in the XEvent session:

The following screenshot shows an example of an output of a differential backup in the XEvent session:
Next steps
Once your backup has completed, you can then restore to a point in time or configure a long-term retention
policy.
To learn more, see automated backups.
Configure an auto-failover group for Azure SQL
Managed Instance
7/12/2022 • 11 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article teaches you how to configure an auto-failover group for Azure SQL Managed Instance using the
Azure portal and Azure PowerShell. For an end-to-end experience, review the Auto-failover group tutorial.

NOTE
This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see Configure auto-
failover groups in SQL Database.

Prerequisites
Consider the following prerequisites:
The secondary managed instance must be empty, that is, it must contain no user databases.
The two instances of SQL Managed Instance need to be in the same service tier and have the same storage size.
While not required, it's strongly recommended that the two instances have equal compute size, to make sure that
the secondary instance can sustainably process the changes being replicated from the primary instance,
including during periods of peak activity.
The IP address range(s) of the virtual network hosting the primary instance must not overlap with IP address
range(s) of the virtual network hosting the secondary instance.
Network security group (NSG) rules on the subnet hosting each instance must have port 5022 and the port range
11000-11999 open, both inbound and outbound, for connections from and to the subnet hosting the
other managed instance. This applies to both subnets, hosting the primary and the secondary instance.
The secondary SQL Managed Instance is configured during its creation with the correct DNS zone ID. This is
accomplished by passing the primary instance's zone ID as the value of the DnsZonePartner parameter when
creating the secondary instance (see the sketch after this list). If not passed as a parameter, the zone ID is generated
as a random string when the first instance is created in each VNet, and the same ID is assigned to all other instances
in the same subnet. Once assigned, the DNS zone can't be modified.
The collation and time zone of the secondary managed instance must match that of the primary managed
instance.
Managed instances should be deployed in paired regions for performance reasons. Managed instances
residing in geo-paired regions benefit from significantly higher geo-replication speed compared to unpaired
regions.
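The following PowerShell sketch illustrates the DnsZonePartner prerequisite mentioned above when creating the secondary instance. It's a minimal, hypothetical example: the resource names are placeholders, the sizing and licensing parameters of New-AzSqlInstance are shown with example values only, and you should follow the SQL Managed Instance creation documentation for the full set of options:

# Pass the primary instance's resource ID as DnsZonePartner so both instances share the same DNS zone
$primary = Get-AzSqlInstance -ResourceGroupName "<PrimaryResourceGroup>" -Name "<PrimaryInstanceName>"

New-AzSqlInstance -ResourceGroupName "<SecondaryResourceGroup>" -Name "<SecondaryInstanceName>" `
    -Location "<SecondaryRegion>" `
    -SubnetId "<SecondarySubnetResourceId>" `
    -AdministratorCredential (Get-Credential) `
    -LicenseType LicenseIncluded -StorageSizeInGB 256 -VCore 8 -Edition GeneralPurpose -ComputeGeneration Gen5 `
    -DnsZonePartner $primary.Id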

Enabling connectivity between the instances


Connectivity between the virtual network subnets hosting primary and secondary instance must be established
for uninterrupted geo-replication traffic flow. Global virtual network peering is recommended as the most
performant and robust way for establishing the connectivity. It provides a low-latency, high-bandwidth private
connection between the peered virtual networks using the Microsoft backbone infrastructure. No public
Internet, gateways, or additional encryption is required in the communication between the peered virtual
networks. To learn about alternative ways of establishing connectivity, see enabling replication traffic between
instances.
IMPORTANT
Alternative ways of providing connectivity between the instances that involve additional networking devices can make
troubleshooting connectivity or replication speed issues very difficult, may require active involvement of network
administrators, and can significantly prolong resolution time.

Portal
PowerShell

1. In the Azure portal, go to the Vir tual network resource for your primary managed instance.
2. Select Peerings under Settings and then select + Add.

1. Enter or select values for the following settings:

This virtual network

Peering link name: The name for the peering must be unique within the virtual network.

Traffic to remote virtual network: Select Allow (default) to enable communication between the two virtual networks through the default VirtualNetwork flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other with the same bandwidth and latency as if they were connected to the same virtual network. All communication between resources in the two virtual networks is over the Azure private network.

Traffic forwarded from remote virtual network: Both Allowed (default) and Block will work for this tutorial. For more information, see Create a peering.

Virtual network gateway or Route Server: Select None. For more information about the other options available, see Create a peering.

Remote virtual network

Peering link name: The name of the same peering to be used in the virtual network hosting the secondary instance.

Virtual network deployment model: Select Resource manager.

I know my resource ID: Leave this checkbox unchecked.

Subscription: Select the Azure subscription of the virtual network hosting the secondary instance that you want to peer with.

Virtual network: Select the virtual network hosting the secondary instance that you want to peer with. If the virtual network is listed, but grayed out, it may be because the address space for the virtual network overlaps with the address space for this virtual network. If virtual network address spaces overlap, they cannot be peered.

Traffic to remote virtual network: Select Allow (default).

Traffic forwarded from remote virtual network: Both Allowed (default) and Block will work for this tutorial. For more information, see Create a peering.

Virtual network gateway or Route Server: Select None. For more information about the other options available, see Create a peering.

2. Select Add to configure the peering with the virtual network you selected. After a few seconds, select the
Refresh button and the peering status will change from Updating to Connected.

Create the failover group


Create the failover group for your managed instances by using the Azure portal or PowerShell.

Portal
PowerShell

Create the failover group for your SQL Managed Instances by using the Azure portal.
1. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL isn't in the list, select All
services, then type Azure SQL in the search box. (Optional) Select the star next to Azure SQL to add it
as a favorite item to the left-hand navigation.
2. Select the primary managed instance you want to add to the failover group.
3. Under Settings , navigate to Instance Failover Groups and then choose to Add group to open the
instance failover group creation page.

4. On the Instance Failover Group page, type the name of your failover group and then choose the
secondary managed instance from the drop-down. Select Create to create your failover group.
5. Once failover group deployment is complete, you'll be taken back to the Failover group page.

Test failover
Test failover of your failover group using the Azure portal or PowerShell.

Portal
PowerShell

Test failover of your failover group using the Azure portal.


1. Navigate to your secondary managed instance within the Azure portal and select Instance Failover
Groups under settings.
2. Note managed instances in the primary and in the secondary role.
3. Select Failover and then select Yes on the warning about TDS sessions being disconnected.
4. Note managed instances in the primary and in the secondary role. If failover succeeded, the two
instances should have switched roles.

IMPORTANT
If roles didn't switch, check the connectivity between the instances and related NSG and firewall rules. Proceed with the
next step only after roles switch.

1. Go to the new secondary managed instance and select Failover once again to fail the primary instance back
to the primary role.
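If you prefer PowerShell over the portal for this test, a planned failover can be initiated with a sketch like the following. It's a minimal example, assuming the Switch-AzSqlDatabaseInstanceFailoverGroup cmdlet from the Az.Sql module is run against the current secondary region; the names are placeholders:

# Run against the region of the current secondary instance to make it the new primary
Switch-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "<ResourceGroupName>" `
    -Location "<SecondaryRegion>" `
    -Name "<FailoverGroupName>"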

Locate listener endpoint


Once your failover group is configured, update the connection string for your application to the listener
endpoint. This keeps your application connected to the failover group listener, rather than the primary database,
elastic pool, or instance database. That way, you don't have to manually update the connection string every time
your database entity fails over, and traffic is routed to whichever entity is currently primary.
The listener endpoint is in the form of fog-name.database.windows.net, and is visible in the Azure portal when
viewing the failover group:
Create group between instances in different subscriptions
You can create a failover group between SQL Managed Instances in two different subscriptions, as long as the
subscriptions are associated with the same Azure Active Directory tenant. When using the PowerShell API, you can do
this by specifying the PartnerSubscriptionId parameter for the secondary SQL Managed Instance. When using the
REST API, each instance ID included in the properties.managedInstancePairs parameter can have its own
subscription ID.

IMPORTANT
The Azure portal doesn't support creating failover groups across different subscriptions. Also, for existing failover
groups across different subscriptions and/or resource groups, failover can't be initiated manually via the portal from the
primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.

Change the secondary region


Let's assume that instance A is the primary instance, instance B is the existing secondary instance, and instance
C is the new secondary instance in the third region. To make the transition, follow these steps:
1. Create instance C with the same size as A and in the same DNS zone.
2. Delete the failover group between instances A and B. At this point, the logins will be failing because the SQL
aliases for the failover group listeners have been deleted and the gateway won't recognize the failover group
name. The secondary databases will be disconnected from the primaries and will become read-write
databases.
3. Create a failover group with the same name between instance A and C. Follow the instructions in failover
group with SQL Managed Instance tutorial. This is a size-of-data operation and will complete when all
databases from instance A are seeded and synchronized.
4. Delete instance B if not needed to avoid unnecessary charges.
NOTE
After step 2, and until step 3 is completed, the databases in instance A remain unprotected from a catastrophic failure
of instance A.
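As an illustration of steps 2 and 3 above, the following PowerShell sketch deletes the existing failover group and recreates it with the same name against the new secondary. It's a minimal example with placeholder names, assuming the Remove-AzSqlDatabaseInstanceFailoverGroup and New-AzSqlDatabaseInstanceFailoverGroup cmdlets from the Az.Sql module, and that the failover group object lives in the primary region of instance A:

# Delete the failover group between instances A and B (run against the primary region of A)
Remove-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "<ResourceGroupA>" -Location "<RegionA>" -Name "<FailoverGroupName>"

# Recreate the failover group with the same name between instance A and the new secondary C
New-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "<ResourceGroupA>" -Location "<RegionA>" -Name "<FailoverGroupName>" `
    -PrimaryManagedInstanceName "<InstanceA>" `
    -PartnerRegion "<RegionC>" -PartnerResourceGroupName "<ResourceGroupC>" -PartnerManagedInstanceName "<InstanceC>"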

Change the primary region


Let's assume instance A is the primary instance, instance B is the existing secondary instance, and instance C is
the new primary instance in the third region. To make the transition, follow these steps:
1. Create instance C with the same size as B and in the same DNS zone.
2. Connect to instance B and manually fail over to switch the primary instance to B. Instance A will become the
new secondary instance automatically.
3. Delete the failover group between instances A and B. At this point, login attempts using the failover group
endpoints will fail. The secondary databases on A will be disconnected from the primaries and will
become read-write databases.
4. Create a failover group with the same name between instance B and C. Follow the instructions in the failover
group with managed instance tutorial. This is a size-of-data operation and will complete when all databases
from instance A are seeded and synchronized. At this point login attempts will stop failing.
5. Manually fail over to switch instance C to the primary role. Instance B will become the new secondary
instance automatically.
6. Delete instance A if not needed to avoid unnecessary charges.
Caution

After step 3, and until step 4 is completed, the databases in instance B remain unprotected from a
catastrophic failure of instance B.

IMPORTANT
When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there's a
non-zero probability of somebody else creating a failover group with the same name. Because failover group names must
be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover
group names.

Permissions
Permissions for a failover group are managed via Azure role-based access control (Azure RBAC).
Azure RBAC write access is necessary to create and manage failover groups. The SQL Managed Instance
Contributor role has all the necessary permissions to manage failover groups.
The following table lists specific permission scopes for Azure SQL Managed Instance:

Action                      Permission                  Scope

Create failover group       Azure RBAC write access     Primary managed instance, secondary managed instance

Update failover group       Azure RBAC write access     Failover group, all databases within the managed instance

Fail over failover group    Azure RBAC write access     Failover group on the new primary managed instance

Next steps
For detailed steps configuring a failover group, see the Add a managed instance to a failover group tutorial
For an overview of the feature, see Auto-failover groups.
User-initiated manual failover on SQL Managed
Instance
7/12/2022 • 5 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This article explains how to manually fail over a primary node on the SQL Managed Instance General Purpose (GP)
and Business Critical (BC) service tiers, and how to manually fail over a secondary read-only replica node on the
BC service tier only.

NOTE
This article is not related to cross-region failovers on auto-failover groups.

When to use manual failover


High availability is a fundamental part of the SQL Managed Instance platform and works transparently for your
database applications. Failovers from primary to secondary nodes in case of node degradation or fault
detection, or during regular monthly software updates, are an expected occurrence for all applications using SQL
Managed Instance in Azure.
You might consider executing a manual failover on SQL Managed Instance for some of the following reasons:
Test application for failover resiliency before deploying to production
Test end-to-end systems for fault resiliency on automatic failovers
Test how failover impacts existing database sessions
Verify if a failover changes end-to-end performance because of changes in the network latency
In some cases of query performance degradations, manual failover can help mitigate the performance issue.

NOTE
Ensuring that your applications are failover resilient prior to deploying to production helps mitigate the risk of
application faults in production and contributes to application availability for your customers. Learn more about testing
your applications for cloud readiness with the Testing App Cloud Readiness for Failover Resiliency with SQL Managed Instance
video recording.

Initiate manual failover on SQL Managed Instance


Azure RBAC permissions required
The user initiating a failover needs to have one of the following Azure roles:
Subscription Owner role, or
Managed Instance Contributor role, or
Custom role with the following permission (a sketch of creating such a role follows this list):
Microsoft.Sql/managedInstances/failover/action
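If you need the custom role, the following is a minimal PowerShell sketch of creating one with New-AzRoleDefinition. The role name, description, and assignable scope are placeholders, and the JSON layout follows the standard Azure custom role format:

# Save a custom role definition to a file, then create the role
$roleJson = @"
{
  "Name": "SQL Managed Instance Failover Operator",
  "Description": "Can initiate failover on managed instances.",
  "Actions": [ "Microsoft.Sql/managedInstances/failover/action" ],
  "AssignableScopes": [ "/subscriptions/<SubscriptionID>" ]
}
"@
$roleJson | Out-File -FilePath .\mi-failover-role.json
New-AzRoleDefinition -InputFile .\mi-failover-role.json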

Using PowerShell
The minimum required version of Az.Sql is v2.9.0. Consider using Azure Cloud Shell from the Azure portal, which
always has the latest PowerShell version available.
As a prerequisite, use the following PowerShell script to install the required Azure modules. In addition, select
the subscription where the managed instance you wish to fail over is located.

$subscription = 'enter your subscription ID here'


Install-Module -Name Az
Import-Module Az.Accounts
Import-Module Az.Sql

Connect-AzAccount
Select-AzSubscription -SubscriptionId $subscription

Use the Invoke-AzSqlInstanceFailover PowerShell command, as shown in the following example, to initiate failover of the
primary node. This is applicable to both the BC and GP service tiers.

$ResourceGroup = 'enter resource group of your MI'


$ManagedInstanceName = 'enter MI name'
Invoke-AzSqlInstanceFailover -ResourceGroupName $ResourceGroup -Name $ManagedInstanceName

Use the following PowerShell command to fail over the readable secondary node, applicable to the BC service tier only.

$ResourceGroup = 'enter resource group of your MI'


$ManagedInstanceName = 'enter MI name'
Invoke-AzSqlInstanceFailover -ResourceGroupName $ResourceGroup -Name $ManagedInstanceName -ReadableSecondary

Using CLI
Ensure that you have the latest Azure CLI installed.
Use the az sql mi failover CLI command, as shown in the following example, to initiate failover of the primary node.
This is applicable to both the BC and GP service tiers.

az sql mi failover -g myresourcegroup -n myinstancename

Use the following CLI command to fail over the readable secondary node, applicable to the BC service tier only.

az sql mi failover -g myresourcegroup -n myinstancename --replica-type ReadableSecondary

Using REST API


Advanced users who need to automate failovers of their SQL Managed Instances, for example to implement a
continuous testing pipeline or automated performance mitigations, can accomplish this by initiating a failover
through an API call. See Managed Instances - Failover REST API for details.
To initiate a failover using a REST API call, first generate an authentication token using the API client of your choice.
The generated token is passed as the Authorization property in the header of the API request and is
mandatory.
The following code is an example of the API URI to call:

POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/managedInstances/{managedInstanceName}/failover?api-version=2019-06-01-preview
The following properties need to be passed in the API call:

API property            Parameter

subscriptionId          Subscription ID to which the managed instance is deployed

resourceGroupName       Resource group that contains the managed instance

managedInstanceName     Name of the managed instance

replicaType             (Optional) Primary or ReadableSecondary. This parameter represents the type of replica to be failed over: primary or readable secondary. If not specified, failover will be initiated on the primary replica by default.

api-version             Static value; currently needs to be "2019-06-01-preview"

The API response will be one of the following two:

202 Accepted
One of the 400 request errors.
Operation status can be tracked by reviewing the API responses in the response headers. For more information,
see Status of asynchronous Azure operations.
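For example, a minimal PowerShell sketch of making this REST call, assuming the Az.Accounts module for Get-AzAccessToken; the subscription, resource group, and instance names are placeholders:

# Acquire a bearer token for Azure Resource Manager and invoke the failover REST API
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$uri = "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Sql/managedInstances/<managedInstanceName>/failover?api-version=2019-06-01-preview"

Invoke-RestMethod -Method Post -Uri $uri -Headers @{ Authorization = "Bearer $token" }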

Monitor the failover


To monitor the progress of a user-initiated failover for your BC instance, execute the following T-SQL query in
your favorite client (such as SSMS) on SQL Managed Instance. It reads the system view
sys.dm_hadr_fabric_replica_states and reports the replicas available on the instance. Rerun the same query after
initiating the manual failover.

SELECT DISTINCT replication_endpoint_url, fabric_replica_role_desc FROM sys.dm_hadr_fabric_replica_states

Before initiating the failover, the output will show the current primary replica on the BC service tier, containing
one primary and three secondaries in the Always On availability group. After executing a failover, running
this query again should show a change of the primary node.
With the GP service tier, you won't see the same output as shown above for BC, because the GP service tier is
based on a single node only. For a GP service tier instance, you can use an alternative T-SQL query showing the
time the SQL process started on the node:

SELECT sqlserver_start_time, sqlserver_start_time_ms_ticks FROM sys.dm_os_sys_info

Regardless of the service tier, a short loss of connectivity from your client during the failover, typically lasting
under a minute, is the indication that the failover has executed.

NOTE
Completion of the failover process (not the actual short unavailability) might take several minutes in the case of
high-intensity workloads. This is because the instance engine has to take care of all current transactions on the primary
and catch up the secondary node before it can fail over.
IMPORTANT
Functional limitations of user-initiated manual failover are:
Only one (1) failover can be initiated on the same managed instance every 15 minutes.
For BC instances, a quorum of replicas must exist for the failover request to be accepted.
For BC instances, it is not possible to specify which readable secondary replica to initiate the failover on.
Failover will not be allowed until the first full backup for a new database is completed by automated backup systems.
Failover will not be allowed if there is a database restore in progress.

Next steps
Learn more about testing your applications for cloud readiness with the Testing App Cloud Readiness for Failover
Resiliency with SQL Managed Instance video recording.
Learn more about high availability of managed instances in High availability for Azure SQL Managed Instance.
For an overview, see What is Azure SQL Managed Instance?.
Azure CLI samples for Azure SQL Database and
SQL Managed Instance
7/12/2022 • 2 minutes to read • Edit Online

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


You can configure Azure SQL Database and SQL Managed Instance by using the Azure CLI.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Samples
Azure SQL Database
Azure SQL Managed Instance

The following table includes links to Azure CLI script examples to manage single and pooled databases in Azure
SQL Database.

Create databases

Create a single database: Creates a database in SQL Database and configures a server-level firewall rule.

Create pooled databases: Creates elastic pools, moves pooled databases, and changes compute sizes.

Scale databases

Scale a single database: Scales a single database.

Scale pooled database: Scales a SQL elastic pool to a different compute size.

Configure geo-replication

Single database: Configures active geo-replication for a database in Azure SQL Database and fails it over to the secondary replica.

Pooled database: Configures active geo-replication for a database in an elastic pool, then fails it over to the secondary replica.

Configure failover group

Configure failover group: Configures a failover group for a group of databases and fails over databases to the secondary server.

Single database: Creates a database and a failover group, adds the database to the failover group, then tests failover to the secondary server.

Pooled database: Creates a database, adds it to an elastic pool, adds the elastic pool to the failover group, then tests failover to the secondary server.

Back up, restore, copy, and import a database

Back up a database: Backs up a database in SQL Database to an Azure storage backup.

Restore a database: Restores a database in SQL Database to a specific point in time.

Copy a database to a new server: Creates a copy of an existing database in SQL Database in a new server.

Import a database from a BACPAC file: Imports a database to SQL Database from a BACPAC file.

Learn more about the single-database Azure CLI API.


Create an Azure SQL Managed Instance using the
Azure CLI
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This Azure CLI script example creates an Azure SQL Managed Instance in a dedicated subnet within a new virtual
network. It also configures a route table and a network security group for the virtual network. Once the script
has been successfully run, the managed instance can be accessed from within the virtual network or from an
on-premises environment. See Configure Azure VM to connect to an Azure SQL Managed Instance and
Configure a point-to-site connection to an Azure SQL Managed Instance from on-premises.

IMPORTANT
For limitations, see supported regions and supported subscription types.

If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.

subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'


For more information, see set active subscription or log in interactively
Run the script

# Create an Azure SQL Managed Instance

# Variable block
let "randomIdentifier=$RANDOM*$RANDOM"
location="East US"
resourceGroup="msdocs-azuresql-rg-$randomIdentifier"
tag="create-managed-instance"
vNet="msdocs-azuresql-vnet-$randomIdentifier"
subnet="msdocs-azuresql-subnet-$randomIdentifier"
nsg="msdocs-azuresql-nsg-$randomIdentifier"
route="msdocs-azuresql-route-$randomIdentifier"
instance="msdocs-azuresql-instance-$randomIdentifier"
login="azureuser"
password="Pa$$w0rD-$randomIdentifier"

echo "Using resource group $resourceGroup with login: $login, password: $password..."

echo "Creating $resourceGroup in $location..."


az group create --name $resourceGroup --location "$location" --tags $tag

echo "Creating $vNet with $subnet..."


az network vnet create --name $vNet --resource-group $resourceGroup --location "$location" --address-prefixes 10.0.0.0/16
az network vnet subnet create --name $subnet --resource-group $resourceGroup --vnet-name $vNet --address-prefixes 10.0.0.0/24 --delegations Microsoft.Sql/managedInstances

echo "Creating $nsg..."


az network nsg create --name $nsg --resource-group $resourceGroup --location "$location"

az network nsg rule create --name "allow_management_inbound" --nsg-name $nsg --priority 100 --resource-group $resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges 9000 9003 1438 1440 1452 --direction Inbound --protocol Tcp --source-address-prefixes "*" --source-port-ranges "*"
az network nsg rule create --name "allow_misubnet_inbound" --nsg-name $nsg --priority 200 --resource-group $resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges "*" --direction Inbound --protocol "*" --source-address-prefixes 10.0.0.0/24 --source-port-ranges "*"
az network nsg rule create --name "allow_health_probe_inbound" --nsg-name $nsg --priority 300 --resource-group $resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges "*" --direction Inbound --protocol "*" --source-address-prefixes AzureLoadBalancer --source-port-ranges "*"
az network nsg rule create --name "allow_management_outbound" --nsg-name $nsg --priority 1100 --resource-group $resourceGroup --access Allow --destination-address-prefixes AzureCloud --destination-port-ranges 443 12000 --direction Outbound --protocol Tcp --source-address-prefixes 10.0.0.0/24 --source-port-ranges "*"
az network nsg rule create --name "allow_misubnet_outbound" --nsg-name $nsg --priority 200 --resource-group $resourceGroup --access Allow --destination-address-prefixes 10.0.0.0/24 --destination-port-ranges "*" --direction Outbound --protocol "*" --source-address-prefixes 10.0.0.0/24 --source-port-ranges "*"

echo "Creating $route..."


az network route-table create --name $route --resource-group $resourceGroup --location "$location"

az network route-table route create --address-prefix 0.0.0.0/0 --name "primaryToMIManagementService" --next-hop-type Internet --resource-group $resourceGroup --route-table-name $route
az network route-table route create --address-prefix 10.0.0.0/24 --name "ToLocalClusterNode" --next-hop-type VnetLocal --resource-group $resourceGroup --route-table-name $route

echo "Configuring $subnet with $nsg and $route..."


az network vnet subnet update --name $subnet --network-security-group $nsg --route-table $route --vnet-name $vNet --resource-group $resourceGroup

# This step will take a while to complete. You can monitor deployment progress in the activity log within the Azure portal.
echo "Creating $instance with $vNet and $subnet..."
az sql mi create --admin-password $password --admin-user $login --name $instance --resource-group $resourceGroup --subnet $subnet --vnet-name $vNet --location "$location"
Clean up resources
Use the az group delete command to remove the resource group and all resources associated with it, unless you
have an ongoing need for these resources. Some of these resources may take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.

Command                    Description

az network vnet            Virtual network commands.

az network vnet subnet     Virtual network subnet commands.

az network route-table     Network route table commands.

az sql mi                  SQL Managed Instance commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Azure CLI script to enable transparent data
encryption using your own key
7/12/2022 • 3 minutes to read • Edit Online

APPLIES TO: Azure SQL Managed Instance


This Azure CLI script example configures transparent data encryption (TDE) in Azure SQL Managed Instance,
using a customer-managed key from Azure Key Vault. This is often referred to as a bring-your-own-key (BYOK)
scenario for TDE. To learn more about TDE with customer-managed key, see TDE Bring Your Own Key to Azure
SQL.
This sample requires an existing managed instance, see Use Azure CLI to create an Azure SQL Managed
Instance.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
For this script, use Azure CLI locally as it takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.

subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively


Run the script
# Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault

# Run this script after the script in https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli creates a managed instance.
# You can use the same variables in both scripts.
# If running this script against a different existing instance, uncomment and add appropriate values to the next 3 lines of code
# let "randomIdentifier=$RANDOM*$RANDOM"
# instance="<msdocs-azuresql-instance>" # add instance here
# resourceGroup="<msdocs-azuresql-rg>" # add resource here

# Variable block
location="East US"
vault="msdocssqlvault$randomIdentifier"
key="msdocs-azuresql-key-$randomIdentifier"

# echo assigning identity to service principal in the instance


az sql mi update --name $instance --resource-group $resourceGroup --assign-identity

echo "Creating $vault..."


az keyvault create --name $vault --resource-group $resourceGroup --location "$location"

echo "Getting service principal id and setting policy on $vault..."


instanceId=$(az sql mi show --name $instance --resource-group $resourceGroup --query identity.principalId --output tsv)

echo $instanceId
az keyvault set-policy --name $vault --object-id $instanceId --key-permissions get unwrapKey wrapKey

echo "Creating $key..."


az keyvault key create --name $key --vault-name $vault --size 2048

# keyPath="C:\yourFolder\yourCert.pfx"
# keyPassword="yourPassword"
# az keyvault certificate import --file $keyPath --name $key --vault-name $vault --password $keyPassword

echo "Setting security on $instance with $key..."


keyId=$(az keyvault key show --name $key --vault-name $vault -o json --query key.kid | tr -d '"')

az sql mi key create --kid $keyId --managed-instance $instance --resource-group $resourceGroup


az sql mi tde-key set --server-key-type AzureKeyVault --kid $keyId --managed-instance $instance --resource-
group $resourceGroup
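
As an optional check, you can confirm that the Key Vault key is now the instance's TDE protector. The following line is a sketch that isn't part of the original sample; it assumes the az sql mi tde-key show command is available in your installed Azure CLI version.

# Show the current TDE protector for the managed instance (should report the AzureKeyVault server key type)
az sql mi tde-key show --managed-instance $instance --resource-group $resourceGroup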

Clean up resources
Unless you have an ongoing need for these resources, remove the resource group and all resources associated with it by using the az group delete command. Some of these resources can take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command-specific documentation.

COMMAND    DESCRIPTION

az sql mi    Managed instance commands.

az keyvault    Key vault commands.

az keyvault key    Key vault key commands.

az sql mi key    Managed instance key commands.

az sql mi tde-key    Managed instance TDE protector commands.


Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Restore a Managed Instance database to another
geo-region using the Azure CLI

APPLIES TO: Azure SQL Managed Instance


This Azure CLI script example restores an Azure SQL Managed Instance database from a remote geo-region
(geo-restore) to a point in time.
This sample requires an existing pair of managed instances. See Use Azure CLI to create an Azure SQL Managed
Instance to create a pair of managed instances in different regions.
If you don't have an Azure subscription, create an Azure free account before you begin.

Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.

If you prefer to run CLI reference commands locally, install the Azure CLI. If you are running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For additional sign-in
options, see Sign in with the Azure CLI.
When you're prompted, install Azure CLI extensions on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.

Sample script
For this script, use the Azure CLI locally, because the script takes too long to run in Cloud Shell.
Sign in to Azure
Use the following script to sign in using a specific subscription.

subscription="<subscriptionId>" # add subscription here

az account set -s $subscription # ...or use 'az login'

For more information, see set active subscription or log in interactively.


Run the script
# Restore a Managed Instance database to another geo-region
# Use Bash rather than Cloud Shell, because Cloud Shell times out after 20 minutes of inactivity
# On Windows, run Bash in a Docker container to sync time zones between Azure and Bash.

# Run this script after running the script in https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli twice to create two managed instances
# Provide the values for these three variables before running the rest of this script

# Variable block for additional parameter values
instance="<msdocs-azuresql-instance>" # add instance here
targetInstance="<msdocs-azuresql-target-instance>" # add target instance here
resourceGroup="<msdocs-azuresql-rg>" # add resource group here

let "randomIdentifier=$RANDOM*$RANDOM"
managedDatabase="managedDatabase-$randomIdentifier"

echo "Creating $managedDatabase on $instance..."
az sql midb create -g $resourceGroup --mi $instance -n $managedDatabase

# Sleep long enough for the first automatic backup of the new database to be created
echo "Sleeping..."
sleep 40m

# To specify a point in time (in UTC) to restore from, use the ISO 8601 format:
# restorePoint="2021-07-09T13:10:00Z"
restorePoint=$(date +%s)
restorePoint=$(expr $restorePoint - 60)
restorePoint=$(date -d @$restorePoint +"%Y-%m-%dT%T")
echo $restorePoint

echo "Restoring $managedDatabase to $targetInstance..."
az sql midb restore -g $resourceGroup --mi $instance -n $managedDatabase --dest-name $managedDatabase --dest-mi $targetInstance --time $restorePoint
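
To confirm that the geo-restore completed, you can list the databases on the target instance. This verification step is an addition to the original sample; it assumes both managed instances are in the same resource group.

# List databases on the target instance; the restored copy of $managedDatabase should appear
az sql midb list -g $resourceGroup --mi $targetInstance --output table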

Clean up resources
Unless you have an ongoing need for these resources, remove the resource group and all resources associated with it by using the az group delete command. Some of these resources can take a while to create, as well as to delete.

az group delete --name $resourceGroup

Sample reference
This script uses the following commands. Each command in the table links to command specific documentation.

SCRIPT    DESCRIPTION

az sql midb Managed Instance Database commands.

Next steps
For more information on Azure CLI, see Azure CLI documentation.
Additional SQL Database CLI script samples can be found in the Azure SQL Database documentation.
Azure PowerShell samples for Azure SQL Database
and Azure SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure SQL Database and Azure SQL Managed Instance enable you to configure your databases, instances, and
pools using Azure PowerShell.
If you don't have an Azure subscription, create an Azure free account before you begin.

Use Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can
use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell
preinstalled commands to run the code in this article, without having to install anything on your local
environment.
To start Azure Cloud Shell:

OPTION    EXAMPLE/LINK

Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.

Go to https://shell.azure.com, or select the Launch Cloud Shell button to open Cloud Shell in your browser.

Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:


1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by
selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.
If you choose to install and use PowerShell locally, this tutorial requires Az PowerShell 1.4.0 or later. If you
need to upgrade, see Install Azure PowerShell module. If you are running PowerShell locally, you also need to
run Connect-AzAccount to create a connection with Azure.

Azure SQL Database


Azure SQL Managed Instance

The following table includes links to sample Azure PowerShell scripts for Azure SQL Database.
LINK    DESCRIPTION

Create and configure single databases and elastic pools

Create a single database and configure a server-level firewall rule    This PowerShell script creates a single database and configures a server-level IP firewall rule.

Create elastic pools and move pooled databases This PowerShell script creates elastic pools, moves pooled
databases, and changes compute sizes.

Configure geo-replication and failover

Configure and fail over a single database using active geo-replication    This PowerShell script configures active geo-replication for a single database and fails it over to the secondary replica.

Configure and fail over a pooled database using active geo-replication    This PowerShell script configures active geo-replication for a database in an elastic pool and fails it over to the secondary replica.

Configure a failover group

Configure a failover group for a single database This PowerShell script creates a database and a failover
group, adds the database to the failover group, and tests
failover to the secondary server.

Configure a failover group for an elastic pool This PowerShell script creates a database, adds it to an elastic
pool, adds the elastic pool to the failover group, and tests
failover to the secondary server.

Scale a single database and an elastic pool

Scale a single database This PowerShell script monitors the performance metrics of a
single database, scales it to a higher compute size, and
creates an alert rule on one of the performance metrics.

Scale an elastic pool This PowerShell script monitors the performance metrics of
an elastic pool, scales it to a higher compute size, and
creates an alert rule on one of the performance metrics.

Restore, copy, and impor t a database

Restore a database    This PowerShell script restores a database from a geo-redundant backup and restores a deleted database to the latest backup.

Copy a database to a new server This PowerShell script creates a copy of an existing database
in a new server.

Import a database from a bacpac file This PowerShell script imports a database into Azure SQL
Database from a bacpac file.

Sync data between databases

Sync data between databases This PowerShell script configures Data Sync to sync between
multiple databases in Azure SQL Database.

Sync data between SQL Database and SQL Server on-premises    This PowerShell script configures Data Sync to sync between a database in Azure SQL Database and a SQL Server on-premises database.

Update the SQL Data Sync sync schema This PowerShell script adds or removes items from the Data
Sync sync schema.

Learn more about the Single-database Azure PowerShell API.

Additional resources
The examples listed on this page use the PowerShell cmdlets for creating and managing Azure SQL resources.
Additional cmdlets for running queries and performing many database tasks are located in the sqlserver
module. For more information, see SQL Server PowerShell.
PowerShell script to enable transparent data
encryption using your own key

APPLIES TO: Azure SQL Managed Instance


This PowerShell script example configures transparent data encryption (TDE) in Azure SQL Managed Instance,
using a customer-managed key from Azure Key Vault. This is often referred to as a bring-your-own-key (BYOK)
scenario for TDE. To learn more, see Azure SQL Transparent Data Encryption with customer-managed key.

Prerequisites
An existing managed instance. See Use PowerShell to create a managed instance.
If you don't have an Azure subscription, create an Azure free account before you begin.

NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Use Azure Cloud Shell


Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can
use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell
preinstalled commands to run the code in this article, without having to install anything on your local
environment.
To start Azure Cloud Shell:

OPTION    EXAMPLE/LINK

Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.

Go to https://shell.azure.com, or select the Launch Cloud Shell button to open Cloud Shell in your browser.

Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:


1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by
selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.
Using PowerShell locally or using Azure Cloud Shell requires Azure PowerShell 2.3.2 or a later version. If you
need to upgrade, see Install Azure PowerShell module, or run the sample script below to install the module for
the current user:
Install-Module -Name Az -AllowClobber -Scope CurrentUser

If you are running PowerShell locally, you also need to run Connect-AzAccount to create a connection with Azure.

Sample scripts
# You will need an existing Managed Instance as a prerequisite for completing this script.
# See https://docs.microsoft.com/en-us/azure/sql-database/scripts/sql-database-create-configure-managed-instance-powershell

# Log in to your Azure account:


Connect-AzAccount

# If there are multiple subscriptions, choose the one where AKV is created:
Set-AzContext -SubscriptionId "subscription ID"

# Install the Az.Sql PowerShell package if you are running this PowerShell locally (uncomment below):
# Install-Module -Name Az.Sql

# 1. Create Resource and setup Azure Key Vault (skip if already done)

# Create Resource group (name the resource and specify the location)
$location = "westus2" # specify the location
$resourcegroup = "MyRG" # specify a new RG name
New-AzResourceGroup -Name $resourcegroup -Location $location

# Create new Azure Key Vault with a globally unique VaultName and soft-delete option turned on:
$vaultname = "MyKeyVault" # specify a globally unique VaultName
New-AzKeyVault -VaultName $vaultname -ResourceGroupName $resourcegroup -Location $location

# Authorize Managed Instance to use the AKV (wrap/unwrap key and get public part of key, if public part exists):
$objectid = (Set-AzSqlInstance -ResourceGroupName $resourcegroup -Name "MyManagedInstance" -AssignIdentity).Identity.PrincipalId
Set-AzKeyVaultAccessPolicy -BypassObjectIdValidation -VaultName $vaultname -ObjectId $objectid -PermissionsToKeys get,wrapKey,unwrapKey

# Allow access from trusted Azure services:


Update-AzKeyVaultNetworkRuleSet -VaultName $vaultname -Bypass AzureServices

# Allow access from your client IP address(es) to be able to complete remaining steps:
Update-AzKeyVaultNetworkRuleSet -VaultName $vaultname -IpAddressRange "xxx.xxx.xxx.xxx/xx"

# Turn the network rules ON by setting the default action to Deny:


Update-AzKeyVaultNetworkRuleSet -VaultName $vaultname -DefaultAction Deny

# 2. Provide TDE Protector key (skip if already done)

# First, give yourself necessary permissions on the AKV (specify your account instead of contoso.com):
Set-AzKeyVaultAccessPolicy -VaultName $vaultname -UserPrincipalName "myaccount@contoso.com" -PermissionsToKeys create,import,get,list

# The recommended way is to import an existing key from a .pfx file. Replace "<PFX private key password>" with the actual password below:
$keypath = "c:\some_path\mytdekey.pfx" # Supply your .pfx path and name
$securepfxpwd = ConvertTo-SecureString -String "<PFX private key password>" -AsPlainText -Force
$key = Add-AzKeyVaultKey -VaultName $vaultname -Name "MyTDEKey" -KeyFilePath $keypath -KeyFilePassword $securepfxpwd
# ...or get an existing key from the vault:
# $key = Get-AzKeyVaultKey -VaultName $vaultname -Name "MyTDEKey"

# Alternatively, generate a new key directly in Azure Key Vault (recommended for test purposes only - uncomment below):
# $key = Add-AzKeyVaultKey -VaultName $vaultname -Name MyTDEKey -Destination Software -Size 2048

# 3. Set up BYOK TDE on Managed Instance:

# Assign the key to the Managed Instance:


# $key = 'https://contoso.vault.azure.net/keys/contosokey/01234567890123456789012345678901'
Add-AzSqlInstanceKeyVaultKey -KeyId $key.id -InstanceName "MyManagedInstance" -ResourceGroupName $resourcegroup

# Set TDE operation mode to BYOK:


Set-AzSqlInstanceTransparentDataEncryptionProtector -Type AzureKeyVault -InstanceName "MyManagedInstance" -ResourceGroup $resourcegroup -KeyId $key.id

Next steps
For more information on Azure PowerShell, see Azure PowerShell documentation.
Additional PowerShell script samples for SQL Managed Instance can be found in Azure SQL Managed Instance
PowerShell scripts.
Azure Resource Manager templates for Azure SQL
Database & SQL Managed Instance

APPLIES TO: Azure SQL Database Azure SQL Managed Instance


Azure Resource Manager templates enable you to define your infrastructure as code and deploy your solutions
to the Azure cloud for Azure SQL Database and Azure SQL Managed Instance.

Azure SQL Database


Azure SQL Managed Instance

The following table includes links to Azure Resource Manager templates for Azure SQL Database.

LINK    DESCRIPTION

SQL Database    This Azure Resource Manager template creates a single database in Azure SQL Database and configures server-level IP firewall rules.

Server    This Azure Resource Manager template creates a server for Azure SQL Database.

Elastic pool This template allows you to deploy an elastic pool and to
assign databases to it.

Failover groups This template creates two servers, a single database, and a
failover group in Azure SQL Database.

Threat Detection This template allows you to deploy a server and a set of
databases with Threat Detection enabled, with an email
address for alerts for each database. Threat Detection is part
of the SQL Advanced Threat Protection (ATP) offering and
provides a layer of security that responds to potential
threats over servers and databases.

Auditing to Azure Blob storage This template allows you to deploy a server with auditing
enabled to write audit logs to a Blob storage. Auditing for
Azure SQL Database tracks database events and writes them
to an audit log that can be placed in your Azure storage
account, OMS workspace, or Event Hubs.

Auditing to Azure Event Hub This template allows you to deploy a server with auditing
enabled to write audit logs to an existing event hub. In order
to send audit events to Event Hubs, set auditing settings
with Enabled State, and set
IsAzureMonitorTargetEnabled as true. Also, configure
Diagnostic Settings with the SQLSecurityAuditEvents log
category on the master database (for server-level
auditing). Auditing tracks database events and writes them
to an audit log that can be placed in your Azure storage
account, OMS workspace, or Event Hubs.

Azure Web App with SQL Database This sample creates a free Azure web app and a database in
Azure SQL Database at the "Basic" service level.

Azure Web App and Redis Cache with SQL Database This template creates a web app, Redis Cache, and database
in the same resource group and creates two connection
strings in the web app for the database and Redis Cache.

Import data from Blob storage using ADF V2 This Azure Resource Manager template creates an instance
of Azure Data Factory V2 that copies data from Azure Blob
storage to SQL Database.

HDInsight cluster with a database This template allows you to create an HDInsight cluster, a
logical SQL server, a database, and two tables. This template
is used by the Use Sqoop with Hadoop in HDInsight article.

Azure Logic App that runs a SQL Stored Procedure on a schedule    This template allows you to create a logic app that will run a SQL stored procedure on schedule. Any arguments for the procedure can be put into the body section of the template.

Provision server with Azure AD-only authentication enabled This template creates a SQL logical server with an Azure AD
admin set for the server and Azure AD-only authentication
enabled.
Documentation changes for SQL Server on Azure
Virtual Machines

APPLIES TO: SQL Server on Azure VM


When you deploy an Azure virtual machine (VM) with SQL Server installed on it, either manually, or through a
built-in image, you can use Azure features to improve your experience. This article summarizes the
documentation changes associated with new features and improvements in the recent releases of SQL Server
on Azure Virtual Machines (VMs). To learn more about SQL Server on Azure VMs, see the overview.

July 2022
CHANGES    DETAILS

Azure CLI for SQL best practices assessment It's now possible to configure the SQL best practices
assessment feature using the Azure CLI.

May 2022
CHANGES    DETAILS

SDK-style SQL projects Use Microsoft.Build.Sql for SDK-style SQL projects in the SQL
Database Projects extension in Azure Data Studio or VS
Code. This feature is currently in preview. To learn more, see
SDK-style SQL projects.

April 2022
CHANGES    DETAILS

Ebdsv5-series    The new Ebdsv5-series provides the highest I/O throughput-to-vCore ratio in Azure along with a memory-to-vCore ratio of 8. This series offers the best price-performance for SQL Server workloads on Azure VMs. Consider this series first for most SQL Server workloads. To learn more, see the updates in VM sizes.

March 2022
CHANGES    DETAILS

Security best practices The SQL Server VM security best practices have been
rewritten and refreshed!

January 2022
CHANGES    DETAILS

Migrate with distributed AG It's now possible to migrate your database(s) from a
standalone instance of SQL Server or an entire availability
group over to SQL Server on Azure VMs using a distributed
availability group! See the prerequisites to get started.

2021
CHANGES    DETAILS

Deployment configuration improvements It's now possible to configure the following options when
deploying your SQL Server VM from an Azure Marketplace
image: System database location, number of tempdb data
files, collation, max degree of parallelism, min and max server
memory settings, and optimize for ad hoc workloads. Review
Deploy SQL Server VM to learn more.

Automated backup improvements The possible maximum automated backup retention period
has changed from 30 days to 90, and you're now able to
choose a specific container within the storage account.
Review automated backup to learn more.

Tempdb configuration You can now modify tempdb settings directly from the SQL
virtual machines blade in the Azure portal, such as
increasing the size, and adding data files.

Eliminate need for HADR Azure Load Balancer or DNN    Deploy your SQL Server VMs to multiple subnets to eliminate the dependency on the Azure Load Balancer or distributed network name (DNN) to route traffic to your high availability / disaster recovery (HADR) solution! See the multi-subnet availability group tutorial, or the prepare SQL Server VM for FCI article to learn more.

SQL Assessment It's now possible to assess the health of your SQL Server VM
in the Azure portal using SQL Assessment to surface
recommendations that improve performance, and identify
missing best practices configurations. This feature is
currently in preview.

SQL IaaS extension now supports Ubuntu    Support has been added to register your SQL Server VM running on Ubuntu Linux with the SQL Server IaaS Extension for limited functionality.

SQL IaaS extension full mode no longer requires restart    Restarting the SQL Server service is no longer necessary when registering your SQL Server VM with the SQL IaaS Agent extension in full mode!

Repair SQL Server IaaS extension in portal    It's now possible to verify the status of your SQL Server IaaS Agent extension directly from the Azure portal, and repair it, if necessary.

Security enhancements in the Azure portal    Once you've enabled Microsoft Defender for SQL, you can view Security Center recommendations in the SQL virtual machines resource in the Azure portal.

HADR content refresh We've refreshed and enhanced our high availability and
disaster recovery (HADR) content! There's now an Overview
of the Windows Server Failover Cluster, as well as a
consolidated how-to configure quorum for SQL Server VMs.
Additionally, we've enhanced the cluster best practices with
more comprehensive setting recommendations adopted to
the cloud.

Migrate high availability to VM Azure Migrate brings support to lift and shift your entire
high availability solution to SQL Server on Azure VMs! Bring
your availability group or your failover cluster instance to
SQL Server VMs using Azure Migrate today!

Performance best practices refresh We've rewritten, refreshed, and updated the performance
best practices documentation, splitting one article into a
series that contain: a checklist, VM size guidance, Storage
guidance, and collecting baseline instructions.

2020
CHANGES    DETAILS

Azure Government support    It's now possible to register SQL Server virtual machines with the SQL IaaS Agent extension for virtual machines hosted in the Azure Government cloud.

Azure SQL family SQL Server on Azure Virtual Machines is now a part of the
Azure SQL family of products. Check out our new look!
Nothing has changed in the product, but the documentation
aims to make the Azure SQL product decision easier.

Distributed network name (DNN) SQL Server 2019 on Windows Server 2016+ is now
previewing support for routing traffic to your failover cluster
instance (FCI) by using a distributed network name rather
than using Azure Load Balancer. This support simplifies and
streamlines connecting to your high-availability (HA)
solution in Azure.

FCI with Azure shared disks It's now possible to deploy your failover cluster instance (FCI)
by using Azure shared disks.

Reorganized FCI docs The documentation around failover cluster instances with
SQL Server on Azure VMs has been rewritten and
reorganized for clarity. We've separated some of the
configuration content, like the cluster configuration best
practices, how to prepare a virtual machine for a SQL Server
FCI, and how to configure Azure Load Balancer.

Migrate log to ultra disk Learn how you can migrate your log file to an ultra disk to
leverage high performance and low latency.

Create availability group using Azure PowerShell It's now possible to simplify the creation of an availability
group by using Azure PowerShell as well as the Azure CLI.

Configure availability group in portal    It's now possible to configure your availability group via the Azure portal. This feature is currently in preview and being deployed, so if your desired region is unavailable, check back soon.

Automatic extension registration You can now enable the Automatic registration feature to
automatically register all SQL Server VMs already deployed
to your subscription with the SQL IaaS Agent extension. This
applies to all existing VMs, and will also automatically
register all SQL Server VMs added in the future.

DNN for availability group    You can now configure a distributed network name (DNN) listener for SQL Server 2019 CU8 and later to replace the traditional VNN listener, negating the need for an Azure Load Balancer.

2019
CHANGES    DETAILS

Free DR replica in Azure You can host a free passive instance for disaster recovery in
Azure for your on-premises SQL Server instance if you have
Software Assurance.

Bulk SQL IaaS extension registration You can now bulk register SQL Server virtual machines with
the SQL IaaS Agent extension.

Performance-optimized storage configuration You can now fully customize your storage configuration
when creating a new SQL Server VM.

Premium file share for FCI You can now create a failover cluster instance by using a
Premium file share instead of the original method of Storage
Spaces Direct.

Azure Dedicated Host You can run your SQL Server VM on Azure Dedicated Host.

SQL Server VM migration to a different region    Use Azure Site Recovery to migrate your SQL Server VM from one region to another.

New SQL IaaS installation modes It's now possible to install the SQL Server IaaS extension in
lightweight mode to avoid restarting the SQL Server service.

SQL Server edition modification    You can now change the edition property for your SQL Server VM.

Changes to the SQL IaaS Agent extension You can register your SQL Server VM with the SQL IaaS
Agent extension by using the new SQL IaaS modes. This
capability includes Windows Server 2008 images.

Bring-your-own-license images using Azure Hybrid Benefit    Bring-your-own-license images deployed from Azure Marketplace can now switch their license type to pay-as-you-go.

New SQL Server VM management in the Azure portal    There's now a way to manage your SQL Server VM in the Azure portal. For more information, see Manage SQL Server VMs in the Azure portal.

Extended support for SQL Server 2012    Extend support for SQL Server 2012 by migrating as is to an Azure VM.

Custom image supportability    You can now install the SQL Server IaaS extension to custom OS and SQL Server images, which offers the limited functionality of flexible licensing. When you're registering your custom image with the SQL IaaS Agent extension, specify the license type as "AHUB." Otherwise, the registration will fail.

Named instance supportability    You can now use the SQL Server IaaS extension with a named instance, if the default instance has been uninstalled properly.

Portal enhancement    The Azure portal experience for deploying a SQL Server VM has been revamped to improve usability. For more information, see the brief quickstart and more thorough how-to guide to deploy a SQL Server VM.

Portal improvement    It's now possible to change the licensing model for a SQL Server VM from pay-as-you-go to bring-your-own-license by using the Azure portal.

Simplification of availability group deployment to a SQL Server VM through the Azure CLI    It's now easier than ever to deploy an availability group to a SQL Server VM in Azure. You can use the Azure CLI to create the Windows failover cluster, internal load balancer, and availability group listeners, all from the command line. For more information, see Use the Azure CLI to configure an Always On availability group for SQL Server on an Azure VM.

2018
CHANGES    DETAILS

New resource provider for a SQL Server cluster    A new resource provider (Microsoft.SqlVirtualMachine/SqlVirtualMachineGroups) defines the metadata of the Windows failover cluster. Joining a SQL Server VM to SqlVirtualMachineGroups bootstraps the Windows Server Failover Cluster (WSFC) service and joins the VM to the cluster.

Automated setup of an availability group deployment with Azure quickstart templates    It's now possible to create the Windows failover cluster, join SQL Server VMs to it, create the listener, and configure the internal load balancer by using two Azure Quickstart Templates. For more information, see Use Azure Quickstart Templates to configure an Always On availability group for SQL Server on an Azure VM.

Automatic registration to the SQL IaaS Agent extension    SQL Server VMs deployed after this month are automatically registered with the new SQL IaaS Agent extension. SQL Server VMs deployed before this month still need to be manually registered. For more information, see Register a SQL Server virtual machine in Azure with the SQL IaaS Agent extension.

New SQL IaaS Agent extension A new resource provider (Microsoft.SqlVirtualMachine)


provides better management of your SQL Server VMs. For
more information on registering your VMs, see Register a
SQL Server virtual machine in Azure with the SQL IaaS
Agent extension.

Switch licensing model You can now switch between the pay-per-usage and bring-
your-own-license models for your SQL Server VM by using
the Azure CLI or PowerShell. For more information, see How
to change the licensing model for a SQL Server virtual
machine in Azure.

Additional resources
Windows VMs:
Overview of SQL Server on a Windows VM
Provision SQL Server on a Windows VM
Migrate a database to SQL Server on an Azure VM
High availability and disaster recovery for SQL Server on Azure Virtual Machines
Performance best practices for SQL Server on Azure Virtual Machines
Application patterns and development strategies for SQL Server on Azure Virtual Machines
Linux VMs:
Overview of SQL Server on a Linux VM
Provision SQL Server on a Linux virtual machine
FAQ (Linux)
SQL Server on Linux documentation
What is SQL Server on Azure Virtual Machines (Windows)?

APPLIES TO: SQL Server on Azure VM


This article provides an overview of SQL Server on Azure Virtual Machines (VMs) on the Windows platform.
If you're new to SQL Server on Azure VMs, check out the SQL Server on Azure VM Overview video from our in-
depth Azure SQL video series:

Overview
SQL Server on Azure Virtual Machines enables you to use full versions of SQL Server in the cloud without
having to manage any on-premises hardware. SQL Server virtual machines (VMs) also simplify licensing costs
when you pay as you go.
Azure virtual machines run in many different geographic regions around the world. They also offer a variety of
machine sizes. The virtual machine image gallery allows you to create a SQL Server VM with the right version,
edition, and operating system. This makes virtual machines a good option for many different SQL Server
workloads.

Feature benefits
When you register your SQL Server on Azure VM with the SQL IaaS Agent extension, you unlock a number of
feature benefits. You can register your SQL Server VM in lightweight management mode, which unlocks a few of
the benefits, or in full management mode, which unlocks all available benefits. Registering with the extension is
completely free.
The following table details the benefits unlocked by the extension:

FEATURE    DESCRIPTION

Portal management    Unlocks management in the portal, so that you can view all of your SQL Server VMs in one place, and enable or disable SQL specific features directly from the portal.
Management mode: Lightweight & full

Automated backup Automates the scheduling of backups for all databases for
either the default instance or a properly installed named
instance of SQL Server on the VM. For more information,
see Automated backup for SQL Server in Azure virtual
machines (Resource Manager).
Management mode: Full

Automated patching Configures a maintenance window during which important


Windows and SQL Server security updates to your VM can
take place, so you can avoid updates during peak times for
your workload. For more information, see Automated
patching for SQL Server in Azure virtual machines (Resource
Manager).
Management mode: Full

Azure Key Vault integration Enables you to automatically install and configure Azure Key
Vault on your SQL Server VM. For more information, see
Configure Azure Key Vault integration for SQL Server on
Azure Virtual Machines (Resource Manager).
Management mode: Full

Flexible licensing Save on cost by seamlessly transitioning from the bring-


your-own-license (also known as the Azure Hybrid Benefit)
to the pay-as-you-go licensing model and back again.
Management mode: Lightweight & full

Flexible version / edition If you decide to change the version or edition of SQL Server,
you can update the metadata within the Azure portal
without having to redeploy the entire SQL Server VM.
Management mode: Lightweight & full

Defender for Cloud portal integration    If you've enabled Microsoft Defender for SQL, then you can
view Defender for Cloud recommendations directly in the
SQL virtual machines resource of the Azure portal. See
Security best practices to learn more.
Management mode: Lightweight & full

SQL best practices assessment Enables you to assess the health of your SQL Server VMs
using configuration best practices. For more information, see
SQL best practices assessment.
Management mode: Full

View disk utilization in portal    Allows you to view a graphical representation of the disk
utilization of your SQL data files in the Azure portal.
Management mode: Full

Getting started
To get started with SQL Server on Azure VMs, review the following resources:
Create SQL VM: To create your SQL Server on Azure VM, review the Quickstarts using the Azure portal,
Azure PowerShell or an ARM template. For more thorough guidance, review the Provisioning guide.
Connect to SQL VM: To connect to your SQL Server on Azure VMs, review the ways to connect.
Migrate data: Migrate your data to SQL Server on Azure VMs from SQL Server, Oracle, or Db2.
Storage configuration: For information about configuring storage for your SQL Server on Azure VMs,
review Storage configuration.
Performance: Fine-tune the performance of your SQL Server on Azure VM by reviewing the Performance
best practices checklist.
Pricing: For information about the pricing structure of your SQL Server on Azure VM, review the Pricing
guidance.
Frequently asked questions: For commonly asked questions, and scenarios, review the FAQ.
Videos
For videos about the latest features to optimize SQL Server VM performance and automate management,
review the following Data Exposed videos:
Caching and Storage Capping (Ep. 1)
Automate Management with the SQL Server IaaS Agent extension (Ep. 2)
Use Azure Monitor Metrics to Track VM Cache Health (Ep. 3)
Get the best price-performance for your SQL Server workloads on Azure VM
Using PerfInsights to Evaluate Resource Health and Troubleshoot (Ep. 5)
Best Price-Performance with Ebdsv5 Series (Ep.6)
Optimally Configure SQL Server on Azure Virtual Machines with SQL Assessment (Ep. 7)
New and Improved SQL Server on Azure VM deployment and management experience (Ep.8)

High availability & disaster recovery


On top of the built-in high availability provided by Azure virtual machines, you can also leverage the high
availability and disaster recovery features provided by SQL Server.
To learn more, see the overview of Always On availability groups, and Always On failover cluster instances. For
more details, see the business continuity overview.
To get started, see the tutorials for availability groups or preparing your VM for a failover cluster instance.

Licensing
To get started, choose a SQL Server virtual machine image with your required version, edition, and operating
system. The following sections provide direct links to the Azure portal for the SQL Server virtual machine
gallery images.
Azure only maintains one virtual machine image for each supported operating system, version, and edition
combination. This means that over time images are refreshed, and older images are removed. For more
information, see the Images section of the SQL Server VMs FAQ.

TIP
For more information about how to understand pricing for SQL Server images, see Pricing guidance for SQL Server on
Azure Virtual Machines.

Pay as you go
The following table provides a matrix of pay-as-you-go SQL Server images.

VERSION    OPERATING SYSTEM    EDITION

SQL Server 2019    Windows Server 2019    Enterprise, Standard, Web, Developer

SQL Server 2017    Windows Server 2016    Enterprise, Standard, Web, Express, Developer

SQL Server 2016 SP2    Windows Server 2016    Enterprise, Standard, Web, Express, Developer

SQL Server 2014 SP2    Windows Server 2012 R2    Enterprise, Standard, Web, Express

SQL Server 2012 SP4    Windows Server 2012 R2    Enterprise, Standard, Web, Express

SQL Server 2008 R2 SP3    Windows Server 2008 R2    Enterprise, Standard, Web, Express

To see the available SQL Server on Linux virtual machine images, see Overview of SQL Server on Azure Virtual
Machines (Linux).

NOTE
Change the licensing model of a pay-per-usage SQL Server VM to use your own license. For more information, see How
to change the licensing model for a SQL Server VM.

Bring your own license


You can also bring your own license (BYOL). In this scenario, you only pay for the VM without any additional
charges for SQL Server licensing. Bringing your own license can save you money over time for continuous
production workloads. For requirements to use this option, see Pricing guidance for SQL Server Azure VMs.
To bring your own license, you can either convert an existing pay-per-usage SQL Server VM, or you can deploy
an image with the {BYOL} prefix. For more information about switching your licensing model between pay-per-usage and BYOL, see How to change the licensing model for a SQL Server VM.

VERSION    OPERATING SYSTEM    EDITION

SQL Server 2019    Windows Server 2019    Enterprise BYOL, Standard BYOL

SQL Server 2017    Windows Server 2016    Enterprise BYOL, Standard BYOL

SQL Server 2016 SP2    Windows Server 2016    Enterprise BYOL, Standard BYOL

SQL Server 2014 SP2    Windows Server 2012 R2    Enterprise BYOL, Standard BYOL

SQL Server 2012 SP4    Windows Server 2012 R2    Enterprise BYOL, Standard BYOL

It is possible to deploy an older image of SQL Server that is not available in the Azure portal using PowerShell.
To view all available images using PowerShell, use the following command:

Get-AzVMImageOffer -Location $Location -Publisher 'MicrosoftSQLServer'

For more information about deploying SQL Server VMs using PowerShell, view How to provision SQL Server
virtual machines with Azure PowerShell.
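
If you prefer the Azure CLI, a comparable sketch is shown below. It assumes the az vm image commands behave the same way for the MicrosoftSQLServer publisher, and the offer name used here is only an example.

# List SQL Server image offers from the MicrosoftSQLServer publisher in a region
az vm image list-offers --location eastus --publisher MicrosoftSQLServer --output table

# List all image SKUs and versions for a specific offer (offer name shown is illustrative)
az vm image list --location eastus --publisher MicrosoftSQLServer --offer sql2019-ws2019 --all --output table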

Customer experience improvement program (CEIP)


The Customer Experience Improvement Program (CEIP) is enabled by default. This periodically sends reports to
Microsoft to help improve SQL Server. There is no management task required with CEIP unless you want to
disable it after provisioning. You can customize or disable the CEIP by connecting to the VM with remote
desktop. Then run the SQL Server Error and Usage Reporting utility. Follow the instructions to disable
reporting. For more information about data collection, see the SQL Server Privacy Statement.

Related products and services


Since SQL Server on Azure VMs is integrated into the Azure platform, review resources from related products
and services that interact with the SQL Server on Azure VM ecosystem:
Windows virtual machines: Azure Virtual Machines overview
Storage: Introduction to Microsoft Azure Storage
Networking: Virtual Network overview, IP addresses in Azure, Create a Fully Qualified Domain Name in the
Azure portal
SQL: SQL Server documentation, Azure SQL Database comparison

Next steps
Get started with SQL Server on Azure Virtual Machines:
Create a SQL Server VM in the Azure portal
Get answers to commonly asked questions about SQL Server VMs:
SQL Server on Azure Virtual Machines FAQ
View Reference Architectures for running N-tier applications on SQL Server in IaaS
Windows N-tier application on Azure with SQL Server
Run an N-tier application in multiple Azure regions for high availability
Automate management with the Windows SQL
Server IaaS Agent extension

APPLIES TO: SQL Server on Azure VM


The SQL Server IaaS Agent extension (SqlIaasExtension) runs on SQL Server on Azure Virtual Machines (VMs)
running Windows to automate management and administration tasks.
This article provides an overview of the extension. To install the SQL Server IaaS extension to SQL Server on
Azure VMs, see the articles for Automatic installation, Single VMs, or VMs in bulk.

NOTE
Starting in September 2021, registering with the SQL IaaS extension in full mode no longer requires restarting the SQL
Server service.

To learn more about the Azure VM deployment and management experience, including recent improvements,
see:
Azure SQL VM: Automate Management with the SQL Server IaaS Agent extension (Ep. 2)
Azure SQL VM: New and Improved SQL on Azure VM deployment and management experience (Ep.8) | Data
Exposed.

Overview
The SQL Server IaaS Agent extension allows for integration with the Azure portal, and depending on the
management mode, unlocks a number of feature benefits for SQL Server on Azure VMs:
Feature benefits: The extension unlocks a number of automation feature benefits, such as portal
management, license flexibility, automated backup, automated patching, and more. See Feature benefits
later in this article for details.
Compliance: The extension offers a simplified method to fulfill the requirement of notifying Microsoft
that the Azure Hybrid Benefit has been enabled, as is specified in the product terms. This process negates
needing to manage licensing registration forms for each resource.
Free: The extension in all three manageability modes is completely free. There is no additional cost
associated with the extension, or with changing management modes.
Simplified license management: The extension simplifies SQL Server license management, and
allows you to quickly identify SQL Server VMs with the Azure Hybrid Benefit enabled using the Azure
portal, PowerShell, or the Azure CLI:
PowerShell
Azure CLI

Get-AzSqlVM | Where-Object {$_.LicenseType -eq 'AHUB'}
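
The Azure CLI equivalent isn't shown above. A minimal sketch, assuming the az sql vm command group is available and that the license type is surfaced in the output as sqlServerLicenseType, is:

# List SQL Server VMs in the subscription that use the Azure Hybrid Benefit (AHUB) license type
az sql vm list --query "[?sqlServerLicenseType=='AHUB'].{name:name, resourceGroup:resourceGroup}" --output table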


Feature benefits
The SQL Server IaaS Agent extension unlocks a number of feature benefits for managing your SQL Server VM.
You can register your SQL Server VM in lightweight management mode, which unlocks a few of the benefits, or
in full management mode, which unlocks all available benefits.
The following table details these benefits:

FEATURE    DESCRIPTION

Portal management    Unlocks management in the portal, so that you can view all of your SQL Server VMs in one place, and enable or disable SQL specific features directly from the portal.
Management mode: Lightweight & full

Automated backup Automates the scheduling of backups for all databases for
either the default instance or a properly installed named
instance of SQL Server on the VM. For more information,
see Automated backup for SQL Server in Azure virtual
machines (Resource Manager).
Management mode: Full

Automated patching Configures a maintenance window during which important


Windows and SQL Server security updates to your VM can
take place, so you can avoid updates during peak times for
your workload. For more information, see Automated
patching for SQL Server in Azure virtual machines (Resource
Manager).
Management mode: Full

Azure Key Vault integration Enables you to automatically install and configure Azure Key
Vault on your SQL Server VM. For more information, see
Configure Azure Key Vault integration for SQL Server on
Azure Virtual Machines (Resource Manager).
Management mode: Full

Flexible licensing Save on cost by seamlessly transitioning from the bring-


your-own-license (also known as the Azure Hybrid Benefit)
to the pay-as-you-go licensing model and back again.
Management mode: Lightweight & full

Flexible version / edition If you decide to change the version or edition of SQL Server,
you can update the metadata within the Azure portal
without having to redeploy the entire SQL Server VM.
Management mode: Lightweight & full

Defender for Cloud portal integration    If you've enabled Microsoft Defender for SQL, then you can
view Defender for Cloud recommendations directly in the
SQL virtual machines resource of the Azure portal. See
Security best practices to learn more.
Management mode: Lightweight & full

SQL best practices assessment Enables you to assess the health of your SQL Server VMs
using configuration best practices. For more information, see
SQL best practices assessment.
Management mode: Full

View disk utilization in portal    Allows you to view a graphical representation of the disk
utilization of your SQL data files in the Azure portal.
Management mode: Full

Management modes
You can choose to register your SQL IaaS extension in three management modes:
Lightweight mode copies extension binaries to the VM, but does not install the agent. Lightweight mode
only supports changing the license type and edition of SQL Server and provides limited portal
management. Use this option for SQL Server VMs with multiple instances, or those participating in a
failover cluster instance (FCI). Lightweight mode is the default management mode when using the
automatic registration feature, or when a management type is not specified during manual registration.
There is no impact to memory or CPU when using the lightweight mode, and there is no associated cost.
Full mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL
Server VM with a single instance. Full mode installs two Windows services that have a minimal impact to
memory and CPU - these can be monitored through task manager. There is no cost associated with using
the full manageability mode. System administrator permissions are required. As of September 2021,
restarting the SQL Server service is no longer necessary when registering your SQL Server VM in full
management mode.
NoAgent mode is dedicated to SQL Server 2008 and SQL Server 2008 R2 installed on Windows Server
2008. There is no impact to memory or CPU when using the NoAgent mode. There is no cost associated
with using the NoAgent manageability mode, the SQL Server is not restarted, and an agent is not
installed to the VM.
You can view the current mode of your SQL Server IaaS agent by using Azure PowerShell:

# Get the SqlVirtualMachine


$sqlvm = Get-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName
$sqlvm.SqlManagementType
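
A corresponding check with the Azure CLI might look like the following sketch. It assumes the az sql vm show output exposes the management mode under a property named sqlManagement, which may differ by CLI version.

# Show the SQL IaaS Agent extension management mode for a SQL Server VM
az sql vm show --name <vm-name> --resource-group <resource-group> --query sqlManagement --output tsv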

Installation
Register your SQL Server VM with the SQL Server IaaS Agent extension to create the SQL virtual machine
resource within your subscription, which is a separate resource from the virtual machine resource.
Unregistering your SQL Server VM from the extension will remove the SQL virtual machine resource from
your subscription but will not drop the actual virtual machine.
Deploying a SQL Server VM Azure Marketplace image through the Azure portal automatically registers the SQL
Server VM with the extension in full. However, if you choose to self-install SQL Server on an Azure virtual
machine, or provision an Azure virtual machine from a custom VHD, then you must register your SQL Server
VM with the SQL IaaS extension to unlock feature benefits.
Registering the extension in lightweight mode copies binaries but does not install the agent to the VM. The
agent is installed to the VM when the extension is installed in full management mode.
There are three ways to register with the extension:
Automatically for all current and future VMs in a subscription
Manually for a single VM
Manually for multiple VMs in bulk
By default, Azure VMs with SQL Server 2016 or later installed will be automatically registered with the SQL IaaS
Agent extension when detected by the CEIP service. See the SQL Server privacy supplement for more
information.
Named instance support
The SQL Server IaaS Agent extension works with a named instance of SQL Server if it is the only SQL Server
instance available on the virtual machine. If a VM has multiple named SQL Server instances and no default
instance, then the SQL IaaS extension will register in lightweight mode and pick either the instance with the
highest edition, or the first instance, if all the instances have the same edition.
To use a named instance of SQL Server, deploy an Azure virtual machine, install a single named SQL Server
instance to it, and then register it with the SQL IaaS Extension.
Alternatively, to use a named instance with an Azure Marketplace SQL Server image, follow these steps:
1. Deploy a SQL Server VM from Azure Marketplace.
2. Unregister the SQL Server VM from the SQL IaaS Agent extension.
3. Uninstall SQL Server completely within the SQL Server VM.
4. Install SQL Server with a named instance within the SQL Server VM.
5. Register the VM with the SQL IaaS Agent Extension.

Verify status of extension


Use the Azure portal or Azure PowerShell to check the status of the extension.
Azure portal
Verify the extension is installed in the Azure portal.
Go to your Virtual machine resource in the Azure portal (not the SQL virtual machines resource, but the
resource for your VM). Select Extensions under Settings. You should see the SqlIaasExtension extension
listed.

Azure PowerShell
You can also use the Get-AzVMSqlServerExtension Azure PowerShell cmdlet:

Get-AzVMSqlServerExtension -VMName "vmname" -ResourceGroupName "resourcegroupname"

The previous command confirms that the agent is installed and provides general status information. You can get
specific status information about automated backup and patching by using the following commands:

$sqlext = Get-AzVMSqlServerExtension -VMName "vmname" -ResourceGroupName "resourcegroupname"


$sqlext.AutoPatchingSettings
$sqlext.AutoBackupSettings
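
You can also check the extension from the Azure CLI. The following sketch assumes the extension is named SqlIaasExtension on the VM, as shown in the portal step above.

# Show the SQL IaaS Agent extension installed on the VM and its provisioning state
az vm extension show --resource-group "resourcegroupname" --vm-name "vmname" --name SqlIaasExtension --query "{name:name, provisioningState:provisioningState}" --output table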

Limitations
The SQL IaaS Agent extension only supports:
SQL Server VMs deployed through the Azure Resource Manager. SQL Server VMs deployed through the
classic model are not supported.
SQL Server VMs deployed to the public or Azure Government cloud. Deployments to other private or
government clouds are not supported.
Failover cluster instances (FCIs) in lightweight mode.
Named instances with multiple instances on a single VM in lightweight mode.

Privacy statement
When using SQL Server on Azure VMs and the SQL IaaS extension, consider the following privacy statements:
Data collection: The SQL IaaS Agent extension collects data for the express purpose of giving customers
optional benefits when using SQL Server on Azure Virtual Machines. Microsoft will not use this data
for licensing audits without the customer's advance consent. See the SQL Server privacy supplement
for more information.
In-region data residency: SQL Server on Azure VMs and SQL IaaS Agent Extension do not move or
store customer data out of the region in which the VMs are deployed.

Next steps
To install the SQL Server IaaS extension to SQL Server on Azure VMs, see the articles for Automatic installation,
Single VMs, or VMs in bulk.
For more information about running SQL Server on Azure Virtual Machines, see What is SQL Server on
Azure Virtual Machines?
To learn more, see frequently asked questions.
Quickstart: Create SQL Server on a Windows virtual
machine in the Azure portal

APPLIES TO: SQL Server on Azure VM


This quickstart steps through creating a SQL Server virtual machine (VM) in the Azure portal.

TIP
This quickstart provides a path for quickly provisioning and connecting to a SQL VM. For more information about
other SQL VM provisioning choices, see the Provisioning guide for SQL Server on Windows VM in the Azure portal.
If you have questions about SQL Server virtual machines, see the Frequently Asked Questions.

Get an Azure subscription


If you don't have an Azure subscription, create a free account before you begin.

Select a SQL Server VM image


1. Sign in to the Azure portal using your account.
2. Select Azure SQL in the left-hand menu of the Azure portal. If Azure SQL is not in the list, select All
services, then type Azure SQL in the search box.
3. Select +Add to open the Select SQL deployment option page. You can view additional information by
selecting Show details on the SQL virtual machines tile.
4. Select one of the versions labelled Free SQL Server License... from the dropdown.
5. Select Create.

Provide basic details


On the Basics tab, provide the following information:
1. In the Project Details section, select your Azure subscription and then select Create new to create a
new resource group. Type SQLVM-RG for the name.

2. Under Instance details :


a. Type SQLVM for the Virtual machine name.
b. Choose a location for your Region .
c. For the purpose of this quickstart, leave Availability options set to No infrastructure redundancy
required. To find out more information about availability options, see Availability.
d. In the Image list, select the image with the version of SQL Server and operating system you want. For
example, you can use an image with a label that begins with Free SQL Server License:.
e. Choose to Change size for the Size of the virtual machine and select the A2 Basic offering. Be sure
to clean up your resources once you're done with them to prevent any unexpected charges.
3. Under Administrator account , provide a username, such as azureuser and a password. The password
must be at least 12 characters long and meet the defined complexity requirements.

4. Under Inbound port rules, choose Allow selected ports and then select RDP (3389) from the drop-down.

SQL Server settings


On the SQL Server settings tab, configure the following options:
1. Under Security & Networking, select Public (Internet) for SQL Connectivity and change the port to
1401 to avoid using a well-known port number in the public scenario.

2. Under SQL Authentication, select Enable. The SQL login credentials are set to the same user name and
password that you configured for the VM. Use the default setting for Azure Key Vault integration.
Storage configuration is not available for the basic SQL Server VM image, but you can find more
information about available options for other images at storage configuration.
3. Change any other settings if needed, and then select Review + create .

Create the SQL Server VM


On the Review + create tab, review the summary, and select Create to create SQL Server, resource group, and
resources specified for this VM.
You can monitor the deployment from the Azure portal. The Notifications button at the top of the screen
shows basic status of the deployment. Deployment can take several minutes.

Connect to SQL Server


1. In the portal, find the Public IP address of your SQL Server VM in the Overview section of your virtual
machine's properties.
2. On a different computer connected to the Internet, open SQL Server Management Studio (SSMS).
3. In the Connect to Server or Connect to Database Engine dialog box, edit the Server name value.
Enter your VM's public IP address. Then add a comma and add the custom port (1401) that you specified
when you configured the new VM. For example, 11.22.33.444,1401.
4. In the Authentication box, select SQL Server Authentication.
5. In the Login box, type the name of a valid SQL login.
6. In the Password box, type the password of the login.
7. Select Connect .
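If you prefer to verify the connection from a command line instead of SSMS, the following sketch does the same check with the SqlServer PowerShell module. The IP address, port, and login are placeholders from the steps above; replace them with your own values.

# Requires the SqlServer PowerShell module (Install-Module SqlServer).
# Placeholder values from this quickstart; substitute your VM's public IP, custom port, and SQL login.
$serverInstance = "11.22.33.444,1401"
$cred = Get-Credential -Message "Enter the SQL login and password"

# Returns the server name and version if the connection succeeds.
Invoke-Sqlcmd -ServerInstance $serverInstance -Credential $cred -Query "SELECT @@SERVERNAME AS ServerName, @@VERSION AS Version;"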

Log in to the VM remotely


Use the following steps to connect to the SQL Server virtual machine with Remote Desktop:
1. After the Azure virtual machine is created and running, click the Virtual Machines icon in the Azure portal
to view your VMs.
2. Click the ellipsis, ..., for your new VM.
3. Click Connect .
4. Open the RDP file that your browser downloads for the VM.
5. The Remote Desktop Connection notifies you that the publisher of this remote connection cannot be
identified. Click Connect to continue.
6. In the Windows Security dialog, click Use a different account . You might have to click More choices
to see this. Specify the user name and password that you configured when you created the VM. You must
add a backslash before the user name.

7. Click OK to connect.
After you connect to the SQL Server virtual machine, you can launch SQL Server Management Studio and
connect with Windows Authentication using your local administrator credentials. If you enabled SQL Server
Authentication, you can also connect with SQL Authentication using the SQL login and password you configured
during provisioning.
Access to the machine enables you to directly change machine and SQL Server settings based on your
requirements. For example, you could configure the firewall settings or change SQL Server configuration
settings.
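For example, the following sketch, run inside the VM with hypothetical values, opens an additional TCP port in Windows Firewall and changes one SQL Server configuration setting; adjust the port, memory value, and connection details to your environment.

# Open an additional inbound TCP port in Windows Firewall (hypothetical port shown).
New-NetFirewallRule -DisplayName "SQL Server custom port" -Direction Inbound -Protocol TCP -LocalPort 1401 -Action Allow

# Change a SQL Server setting, for example max server memory (requires the SqlServer module).
Invoke-Sqlcmd -ServerInstance "localhost,1401" -Query "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"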
Clean up resources
If you do not need your SQL VM to run continually, you can avoid unnecessary charges by stopping it when not in use. You can also permanently delete all resources associated with the virtual machine by deleting its associated resource group in the portal. Doing so permanently deletes the virtual machine as well, so proceed with care. For more information, see Manage Azure resources through the portal.

Next steps
In this quickstart, you created a SQL Server virtual machine in the Azure portal. To learn more about how to
migrate your data to the new SQL Server, see the following article.
Migrate a database to a SQL VM
Quickstart: Create SQL Server on a Windows virtual
machine with Azure PowerShell
7/12/2022 • 4 minutes to read

APPLIES TO: SQL Server on Azure VM


This quickstart steps through creating a SQL Server virtual machine (VM) with Azure PowerShell.

TIP
This quickstart provides a path for quickly provisioning and connecting to a SQL VM. For more information about
other Azure PowerShell options for creating SQL VMs, see the Provisioning guide for SQL Server VMs with Azure
PowerShell.
If you have questions about SQL Server virtual machines, see the Frequently Asked Questions.

Get an Azure subscription


If you don't have an Azure subscription, create a free account before you begin.

Get Azure PowerShell


NOTE
This article uses the Azure Az PowerShell module, which is the recommended PowerShell module for interacting with
Azure. To get started with the Az PowerShell module, see Install Azure PowerShell. To learn how to migrate to the Az
PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Configure PowerShell
1. Open PowerShell and establish access to your Azure account by running the Connect-AzAccount
command.

Connect-AzAccount

2. When you see the sign-in window, enter your credentials. Use the same email and password that you use
to sign in to the Azure portal.

Create a resource group


1. Define a variable with a unique resource group name. To simplify the rest of the quickstart, the remaining
commands use this name as a basis for other resource names.

$ResourceGroupName = "sqlvm1"

2. Define a location of a target Azure region for all VM resources.


$Location = "East US"

3. Create the resource group.

New-AzResourceGroup -Name $ResourceGroupName -Location $Location

Configure network settings


1. Create a virtual network, subnet, and a public IP address. These resources are used to provide network
connectivity to the virtual machine and connect it to the internet.

$SubnetName = $ResourceGroupName + "subnet"


$VnetName = $ResourceGroupName + "vnet"
$PipName = $ResourceGroupName + $(Get-Random)

# Create a subnet configuration


$SubnetConfig = New-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix 192.168.1.0/24

# Create a virtual network


$Vnet = New-AzVirtualNetwork -ResourceGroupName $ResourceGroupName -Location $Location `
-Name $VnetName -AddressPrefix 192.168.0.0/16 -Subnet $SubnetConfig

# Create a public IP address and specify a DNS name


$Pip = New-AzPublicIpAddress -ResourceGroupName $ResourceGroupName -Location $Location `
-AllocationMethod Static -IdleTimeoutInMinutes 4 -Name $PipName

2. Create a network security group. Configure rules to allow remote desktop (RDP) and SQL Server
connections.

# Rule to allow remote desktop (RDP)


$NsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name "RDPRule" -Protocol Tcp `
-Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * `
-DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow

#Rule to allow SQL Server connections on port 1433


$NsgRuleSQL = New-AzNetworkSecurityRuleConfig -Name "MSSQLRule" -Protocol Tcp `
-Direction Inbound -Priority 1001 -SourceAddressPrefix * -SourcePortRange * `
-DestinationAddressPrefix * -DestinationPortRange 1433 -Access Allow

# Create the network security group


$NsgName = $ResourceGroupName + "nsg"
$Nsg = New-AzNetworkSecurityGroup -ResourceGroupName $ResourceGroupName `
-Location $Location -Name $NsgName `
-SecurityRules $NsgRuleRDP,$NsgRuleSQL

3. Create the network interface.

$InterfaceName = $ResourceGroupName + "int"


$Interface = New-AzNetworkInterface -Name $InterfaceName `
-ResourceGroupName $ResourceGroupName -Location $Location `
-SubnetId $VNet.Subnets[0].Id -PublicIpAddressId $Pip.Id `
-NetworkSecurityGroupId $Nsg.Id

Create the SQL VM


1. Define your credentials to sign in to the VM. The username is "azureadmin". Make sure you change
<password> before running the command.

# Define a credential object


$SecurePassword = ConvertTo-SecureString '<password>' `
-AsPlainText -Force
$Cred = New-Object System.Management.Automation.PSCredential ("azureadmin", $securePassword)

2. Create a virtual machine configuration object and then create the VM. The following command creates a
SQL Server 2017 Developer Edition VM on Windows Server 2016.

# Create a virtual machine configuration


$VMName = $ResourceGroupName + "VM"
$VMConfig = New-AzVMConfig -VMName $VMName -VMSize Standard_DS13_V2 |
    Set-AzVMOperatingSystem -Windows -ComputerName $VMName -Credential $Cred -ProvisionVMAgent -EnableAutoUpdate |
    Set-AzVMSourceImage -PublisherName "MicrosoftSQLServer" -Offer "SQL2017-WS2016" -Skus "SQLDEV" -Version "latest" |
    Add-AzVMNetworkInterface -Id $Interface.Id

# Create the VM
New-AzVM -ResourceGroupName $ResourceGroupName -Location $Location -VM $VMConfig

TIP
It takes several minutes to create the VM.

Register with SQL VM RP


To get portal integration and SQL VM features, you must register with the SQL IaaS Agent extension.
To get full functionality, you need to register with the extension in full mode. Otherwise, register in lightweight
mode.
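As a sketch only, assuming the Az.SqlVirtualMachine PowerShell module and the variables defined earlier in this quickstart, registration in full mode might look like the following; parameter names can differ between module versions, so check Get-Help New-AzSqlVM first.

# Register the VM created above with the SQL IaaS Agent extension.
# Parameter names vary between Az.SqlVirtualMachine versions; verify with Get-Help New-AzSqlVM.
New-AzSqlVM -ResourceGroupName $ResourceGroupName -Name $VMName -Location $Location `
    -LicenseType PAYG -SqlManagementType Full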

Remote desktop into the VM


1. Use the following command to retrieve the public IP address for the new VM.

Get-AzPublicIpAddress -ResourceGroupName $ResourceGroupName | Select IpAddress

2. Pass the returned IP address as a command-line parameter to mstsc to start a Remote Desktop session
into the new VM.

mstsc /v:<publicIpAddress>

3. When prompted for credentials, choose to enter credentials for a different account. Enter the username
with a preceding backslash (for example, \azureadmin ), and the password that you set previously in this
quickstart.

Connect to SQL Server


1. After signing in to the Remote Desktop session, launch SQL Server Management Studio 2017 from the start menu.
2. In the Connect to Server dialog box, keep the defaults. The server name is the name of the VM. Authentication is set to Windows Authentication. Select Connect.
You're now connected to SQL Server locally. If you want to connect remotely, you must configure connectivity
from the Azure portal or manually.

Clean up resources
If you don't need the VM to run continuously, you can avoid unnecessary charges by stopping it when not in
use. The following command stops the VM but leaves it available for future use.

Stop-AzVM -Name $VMName -ResourceGroupName $ResourceGroupName

You can also permanently delete all resources associated with the virtual machine with the Remove-AzResourceGroup command. Doing so permanently deletes the virtual machine as well, so use this command with care.

Next steps
In this quickstart, you created a SQL Server 2017 virtual machine using Azure PowerShell. To learn more about
how to migrate your data to the new SQL Server, see the following article.
Migrate a database to a SQL VM
Quickstart: Create SQL Server VM using Bicep
7/12/2022 • 4 minutes to read

This quickstart shows you how to use Bicep to create an SQL Server on Azure Virtual Machine (VM).
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides
concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for
your infrastructure-as-code solutions in Azure.

Prerequisites
The SQL Server VM Bicep file requires the following:
The latest version of the Azure CLI and/or PowerShell.
A pre-configured resource group with a prepared virtual network and subnet.
An Azure subscription. If you don't have one, create a free account before you begin.

Review the Bicep file


The Bicep file used in this quickstart is from Azure Quickstart Templates.

@description('The name of the VM')


param virtualMachineName string = 'myVM'

@description('The virtual machine size.')


param virtualMachineSize string = 'Standard_D8s_v3'

@description('Specify the name of an existing VNet in the same resource group')


param existingVirtualNetworkName string

@description('Specify the resource group of the existing VNet')


param existingVnetResourceGroup string = resourceGroup().name

@description('Specify the name of the Subnet Name')


param existingSubnetName string

@description('Windows Server and SQL Offer')


@allowed([
'sql2019-ws2019'
'sql2017-ws2019'
'SQL2017-WS2016'
'SQL2016SP1-WS2016'
'SQL2016SP2-WS2016'
'SQL2014SP3-WS2012R2'
'SQL2014SP2-WS2012R2'
])
param imageOffer string = 'sql2019-ws2019'

@description('SQL Server Sku')


@allowed([
'Standard'
'Enterprise'
'SQLDEV'
'Web'
'Express'
])
param sqlSku string = 'Standard'

@description('The admin user name of the VM')


param adminUsername string

@description('The admin password of the VM')


@secure()
param adminPassword string

@description('SQL Server Workload Type')


@allowed([
'General'
'OLTP'
'DW'
])
param storageWorkloadType string = 'General'

@description('Amount of data disks (1TB each) for SQL Data files')


@minValue(1)
@maxValue(8)
param sqlDataDisksCount int = 1

@description('Path for SQL Data files. Please choose drive letter from F to Z, and other drives from A to E
are reserved for system')
param dataPath string = 'F:\\SQLData'

@description('Amount of data disks (1TB each) for SQL Log files')


@minValue(1)
@maxValue(8)
param sqlLogDisksCount int = 1

@description('Path for SQL Log files. Please choose drive letter from F to Z and different than the one used
for SQL data. Drive letter from A to E are reserved for system')
param logPath string = 'G:\\SQLLog'

@description('Location for all resources.')


param location string = resourceGroup().location

var networkInterfaceName = '${virtualMachineName}-nic'


var networkSecurityGroupName = '${virtualMachineName}-nsg'
var networkSecurityGroupRules = [
{
name: 'RDP'
properties: {
priority: 300
protocol: 'Tcp'
access: 'Allow'
direction: 'Inbound'
sourceAddressPrefix: '*'
sourcePortRange: '*'
destinationAddressPrefix: '*'
destinationPortRange: '3389'
}
}
]
var publicIpAddressName = '${virtualMachineName}-publicip-${uniqueString(virtualMachineName)}'
var publicIpAddressType = 'Dynamic'
var publicIpAddressSku = 'Basic'
var diskConfigurationType = 'NEW'
var nsgId = networkSecurityGroup.id
var subnetRef = resourceId(existingVnetResourceGroup, 'Microsoft.Network/virtualNetWorks/subnets',
existingVirtualNetworkName, existingSubnetName)
var dataDisksLuns = array(range(0, sqlDataDisksCount))
var logDisksLuns = array(range(sqlDataDisksCount, sqlLogDisksCount))
var dataDisks = {
createOption: 'Empty'
caching: 'ReadOnly'
writeAcceleratorEnabled: false
storageAccountType: 'Premium_LRS'
diskSizeGB: 1023
}
var tempDbPath = 'D:\\SQLTemp'

resource publicIpAddress 'Microsoft.Network/publicIPAddresses@2021-08-01' = {


name: publicIpAddressName
location: location
sku: {
name: publicIpAddressSku
}
properties: {
publicIPAllocationMethod: publicIpAddressType
}
}

resource networkSecurityGroup 'Microsoft.Network/networkSecurityGroups@2021-08-01' = {


name: networkSecurityGroupName
location: location
properties: {
securityRules: networkSecurityGroupRules
}
}

resource networkInterface 'Microsoft.Network/networkInterfaces@2021-08-01' = {


name: networkInterfaceName
location: location
properties: {
ipConfigurations: [
{
name: 'ipconfig1'
properties: {
subnet: {
id: subnetRef
}
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIpAddress.id
}
}
}
]
enableAcceleratedNetworking: true
networkSecurityGroup: {
id: nsgId
}
}
}

resource virtualMachine 'Microsoft.Compute/virtualMachines@2021-11-01' = {


name: virtualMachineName
location: location
properties: {
hardwareProfile: {
vmSize: virtualMachineSize
}
storageProfile: {
osDisk: {
createOption: 'FromImage'
managedDisk: {
storageAccountType: 'Premium_LRS'
}
}
imageReference: {
publisher: 'MicrosoftSQLServer'
offer: imageOffer
sku: sqlSku
version: 'latest'
}
dataDisks: [for j in range(0, (sqlDataDisksCount + sqlLogDisksCount)): {
lun: j
createOption: dataDisks.createOption
caching: ((j >= sqlDataDisksCount) ? 'None' : dataDisks.caching)
writeAcceleratorEnabled: dataDisks.writeAcceleratorEnabled
diskSizeGB: dataDisks.diskSizeGB
managedDisk: {
storageAccountType: dataDisks.storageAccountType
}
}]
}
networkProfile: {
networkInterfaces: [
{
id: networkInterface.id
}
]
}
osProfile: {
computerName: virtualMachineName
adminUsername: adminUsername
adminPassword: adminPassword
windowsConfiguration: {
enableAutomaticUpdates: true
provisionVMAgent: true
}
}
}
}

resource sqlVirtualMachine 'Microsoft.SqlVirtualMachine/sqlVirtualMachines@2021-11-01-preview' = {


name: virtualMachineName
location: location
properties: {
virtualMachineResourceId: virtualMachine.id
sqlManagement: 'Full'
sqlServerLicenseType: 'PAYG'
storageConfigurationSettings: {
diskConfigurationType: diskConfigurationType
storageWorkloadType: storageWorkloadType
sqlDataSettings: {
luns: dataDisksLuns
defaultFilePath: dataPath
}
sqlLogSettings: {
luns: logDisksLuns
defaultFilePath: logPath
}
sqlTempDbSettings: {
defaultFilePath: tempDbPath
}
}
}
}

output adminUsername string = adminUsername

Five Azure resources are defined in the Bicep file:


Microsoft.Network/publicIpAddresses: Creates a public IP address.
Microsoft.Network/networkSecurityGroups: Creates a network security group.
Microsoft.Network/networkInterfaces: Configures the network interface.
Microsoft.Compute/virtualMachines: Creates a virtual machine in Azure.
Microsoft.SqlVirtualMachine/SqlVirtualMachines: registers the virtual machine with the SQL IaaS Agent
extension.

Deploy the Bicep file


1. Save the Bicep file as main.bicep to your local computer.
2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.

CLI
PowerShell

az deployment group create --resource-group exampleRG --template-file main.bicep --parameters existingSubnetName=<subnet-name> adminUsername=<admin-user> adminPassword=<admin-pass>
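If you use the PowerShell tab instead, a roughly equivalent deployment (a sketch assuming the Az PowerShell module and a locally installed Bicep CLI, which Azure PowerShell uses to build .bicep files) is:

New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep `
    -existingSubnetName "<subnet-name>" -adminUsername "<admin-user>"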

Make sure to replace the resource group name, exampleRG, with the name of your pre-configured resource
group.
You're required to enter the following parameters:
existingSubnetName : Replace <subnet-name> with the name of the subnet.
adminUsername : Replace <admin-user> with the admin username of the VM.
You'll also be prompted to enter adminPassword .

NOTE
When the deployment finishes, you should see a message indicating the deployment succeeded.

Review deployed resources


Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
CLI
PowerShell

az resource list --resource-group exampleRG
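For the PowerShell tab, an equivalent listing (assuming the Az PowerShell module) is:

Get-AzResource -ResourceGroupName exampleRG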

Clean up resources
When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and
its resources.

CLI
PowerShell

az group delete --name exampleRG
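For the PowerShell tab, the equivalent cleanup (assuming the Az PowerShell module) is:

Remove-AzResourceGroup -Name exampleRG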

Next steps
For a step-by-step tutorial that guides you through the process of creating a Bicep file with Visual Studio Code,
see:
Quickstart: Create Bicep files with Visual Studio Code
For other ways to deploy a SQL Server VM, see:
Azure portal
PowerShell
To learn more, see an overview of SQL Server on Azure VMs.
Quickstart: Create SQL Server VM using an ARM
template
7/12/2022 • 6 minutes to read

Use this Azure Resource Manager template (ARM template) to deploy a SQL Server on Azure Virtual Machine
(VM).
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax. In declarative syntax, you describe your intended deployment
without writing the sequence of programming commands to create the deployment.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to
Azure button. The template will open in the Azure portal.

Prerequisites
The SQL Server VM ARM template requires the following:
The latest version of the Azure CLI and/or PowerShell.
A preconfigured resource group with a prepared virtual network and subnet.
An Azure subscription. If you don't have one, create a free account before you begin.

Review the template


The template used in this quickstart is from Azure Quickstart Templates.

{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.6.18.56646",
"templateHash": "2543092560055024764"
}
},
"parameters": {
"virtualMachineName": {
"type": "string",
"defaultValue": "myVM",
"metadata": {
"description": "The name of the VM"
}
},
"virtualMachineSize": {
"type": "string",
"defaultValue": "Standard_D8s_v3",
"metadata": {
"description": "The virtual machine size."
}
},
"existingVirtualNetworkName": {
"type": "string",
"metadata": {
"metadata": {
"description": "Specify the name of an existing VNet in the same resource group"
}
},
"existingVnetResourceGroup": {
"type": "string",
"defaultValue": "[resourceGroup().name]",
"metadata": {
"description": "Specify the resrouce group of the existing VNet"
}
},
"existingSubnetName": {
"type": "string",
"metadata": {
"description": "Specify the name of the Subnet Name"
}
},
"imageOffer": {
"type": "string",
"defaultValue": "sql2019-ws2019",
"allowedValues": [
"sql2019-ws2019",
"sql2017-ws2019",
"SQL2017-WS2016",
"SQL2016SP1-WS2016",
"SQL2016SP2-WS2016",
"SQL2014SP3-WS2012R2",
"SQL2014SP2-WS2012R2"
],
"metadata": {
"description": "Windows Server and SQL Offer"
}
},
"sqlSku": {
"type": "string",
"defaultValue": "Standard",
"allowedValues": [
"Standard",
"Enterprise",
"SQLDEV",
"Web",
"Express"
],
"metadata": {
"description": "SQL Server Sku"
}
},
"adminUsername": {
"type": "string",
"metadata": {
"description": "The admin user name of the VM"
}
},
"adminPassword": {
"type": "secureString",
"metadata": {
"description": "The admin password of the VM"
}
},
"storageWorkloadType": {
"type": "string",
"defaultValue": "General",
"allowedValues": [
"General",
"OLTP",
"DW"
],
"metadata": {
"description": "SQL Server Workload Type"
}
},
"sqlDataDisksCount": {
"type": "int",
"defaultValue": 1,
"maxValue": 8,
"minValue": 1,
"metadata": {
"description": "Amount of data disks (1TB each) for SQL Data files"
}
},
"dataPath": {
"type": "string",
"defaultValue": "F:\\SQLData",
"metadata": {
"description": "Path for SQL Data files. Please choose drive letter from F to Z, and other drives
from A to E are reserved for system"
}
},
"sqlLogDisksCount": {
"type": "int",
"defaultValue": 1,
"maxValue": 8,
"minValue": 1,
"metadata": {
"description": "Amount of data disks (1TB each) for SQL Log files"
}
},
"logPath": {
"type": "string",
"defaultValue": "G:\\SQLLog",
"metadata": {
"description": "Path for SQL Log files. Please choose drive letter from F to Z and different than
the one used for SQL data. Drive letter from A to E are reserved for system"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
}
},
"variables": {
"networkInterfaceName": "[format('{0}-nic', parameters('virtualMachineName'))]",
"networkSecurityGroupName": "[format('{0}-nsg', parameters('virtualMachineName'))]",
"networkSecurityGroupRules": [
{
"name": "RDP",
"properties": {
"priority": 300,
"protocol": "Tcp",
"access": "Allow",
"direction": "Inbound",
"sourceAddressPrefix": "*",
"sourcePortRange": "*",
"destinationAddressPrefix": "*",
"destinationPortRange": "3389"
}
}
],
"publicIpAddressName": "[format('{0}-publicip-{1}', parameters('virtualMachineName'),
uniqueString(parameters('virtualMachineName')))]",
"publicIpAddressType": "Dynamic",
"publicIpAddressSku": "Basic",
"diskConfigurationType": "NEW",
"nsgId": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]",
"subnetRef": "[resourceId(parameters('existingVnetResourceGroup'),
"subnetRef": "[resourceId(parameters('existingVnetResourceGroup'),
'Microsoft.Network/virtualNetWorks/subnets', parameters('existingVirtualNetworkName'),
parameters('existingSubnetName'))]",
"dataDisksLuns": "[array(range(0, parameters('sqlDataDisksCount')))]",
"logDisksLuns": "[array(range(parameters('sqlDataDisksCount'), parameters('sqlLogDisksCount')))]",
"dataDisks": {
"createOption": "Empty",
"caching": "ReadOnly",
"writeAcceleratorEnabled": false,
"storageAccountType": "Premium_LRS",
"diskSizeGB": 1023
},
"tempDbPath": "D:\\SQLTemp"
},
"resources": [
{
"type": "Microsoft.Network/publicIPAddresses",
"apiVersion": "2021-08-01",
"name": "[variables('publicIpAddressName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[variables('publicIpAddressSku')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIpAddressType')]"
}
},
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2021-08-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": "[variables('networkSecurityGroupRules')]"
}
},
{
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2021-08-01",
"name": "[variables('networkInterfaceName')]",
"location": "[parameters('location')]",
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',
variables('publicIpAddressName'))]"
}
}
}
],
"enableAcceleratedNetworking": true,
"networkSecurityGroup": {
"id": "[variables('nsgId')]"
}
},
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]",
"[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIpAddressName'))]"
]
},
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2021-11-01",
"name": "[parameters('virtualMachineName')]",
"location": "[parameters('location')]",
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('virtualMachineSize')]"
},
"storageProfile": {
"copy": [
{
"name": "dataDisks",
"count": "[length(range(0, add(parameters('sqlDataDisksCount'),
parameters('sqlLogDisksCount'))))]",
"input": {
"lun": "[range(0, add(parameters('sqlDataDisksCount'), parameters('sqlLogDisksCount')))
[copyIndex('dataDisks')]]",
"createOption": "[variables('dataDisks').createOption]",
"caching": "[if(greaterOrEquals(range(0, add(parameters('sqlDataDisksCount'),
parameters('sqlLogDisksCount')))[copyIndex('dataDisks')], parameters('sqlDataDisksCount')), 'None',
variables('dataDisks').caching)]",
"writeAcceleratorEnabled": "[variables('dataDisks').writeAcceleratorEnabled]",
"diskSizeGB": "[variables('dataDisks').diskSizeGB]",
"managedDisk": {
"storageAccountType": "[variables('dataDisks').storageAccountType]"
}
}
}
],
"osDisk": {
"createOption": "FromImage",
"managedDisk": {
"storageAccountType": "Premium_LRS"
}
},
"imageReference": {
"publisher": "MicrosoftSQLServer",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('sqlSku')]",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('networkInterfaceName'))]"
}
]
},
"osProfile": {
"computerName": "[parameters('virtualMachineName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"windowsConfiguration": {
"enableAutomaticUpdates": true,
"provisionVMAgent": true
}
}
},
"dependsOn": [
"[resourceId('Microsoft.Network/networkInterfaces', variables('networkInterfaceName'))]"
]
},
{
"type": "Microsoft.SqlVirtualMachine/sqlVirtualMachines",
"apiVersion": "2021-11-01-preview",
"name": "[parameters('virtualMachineName')]",
"location": "[parameters('location')]",
"properties": {
"virtualMachineResourceId": "[resourceId('Microsoft.Compute/virtualMachines',
parameters('virtualMachineName'))]",
"sqlManagement": "Full",
"sqlServerLicenseType": "PAYG",
"storageConfigurationSettings": {
"diskConfigurationType": "[variables('diskConfigurationType')]",
"storageWorkloadType": "[parameters('storageWorkloadType')]",
"sqlDataSettings": {
"luns": "[variables('dataDisksLuns')]",
"defaultFilePath": "[parameters('dataPath')]"
},
"sqlLogSettings": {
"luns": "[variables('logDisksLuns')]",
"defaultFilePath": "[parameters('logPath')]"
},
"sqlTempDbSettings": {
"defaultFilePath": "[variables('tempDbPath')]"
}
}
},
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]"
]
}
],
"outputs": {
"adminUsername": {
"type": "string",
"value": "[parameters('adminUsername')]"
}
}
}

Five Azure resources are defined in the template:


Microsoft.Network/publicIpAddresses: Creates a public IP address.
Microsoft.Network/networkSecurityGroups: Creates a network security group.
Microsoft.Network/networkInterfaces: Configures the network interface.
Microsoft.Compute/virtualMachines: Creates a virtual machine in Azure.
Microsoft.SqlVirtualMachine/SqlVirtualMachines: registers the virtual machine with the SQL IaaS Agent
extension.
More SQL Server on Azure VM templates can be found in the quickstart template gallery.

Deploy the template


1. Select the Deploy to Azure button to sign in to Azure and open the template. The template creates a virtual machine with the intended SQL Server version installed, and registers it with the SQL IaaS Agent extension.

2. Select or enter the following values.


Subscription : Select an Azure subscription.
Resource group : The prepared resource group for your SQL Server VM.
Region : Select a region. For example, Central US .
Virtual Machine Name : Enter a name for the SQL Server virtual machine.
Virtual Machine Size : Choose the appropriate size for your virtual machine from the drop-down.
Existing Virtual Network Name : Enter the name of the prepared virtual network for your SQL Server VM.
Existing Vnet Resource Group : Enter the resource group where your virtual network was prepared.
Existing Subnet Name : The name of your prepared subnet.
Image Offer : Choose the SQL Server and Windows Server image that best suits your business needs.
SQL Sku : Choose the edition of SQL Server SKU that best suits your business needs.
Admin Username : The username for the administrator of the virtual machine.
Admin Password : The password used by the VM administrator account.
Storage Workload Type : The type of storage for the workload that best matches your business.
Sql Data Disks Count : The number of disks SQL Server uses for data files.
Data Path : The path for the SQL Server data files.
Sql Log Disks Count : The number of disks SQL Server uses for log files.
Log Path : The path for the SQL Server log files.
Location : The location for all of the resources. This value should remain the default of [resourceGroup().location].
3. Select Review + create . After the SQL Server VM has been deployed successfully, you get a notification.
The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use Azure
PowerShell, the Azure CLI, and REST API. To learn other deployment methods, see Deploy templates.

Review deployed resources


You can use the Azure CLI to check deployed resources.

echo "Enter the resource group where your SQL Server VM exists:" &&
read resourcegroupName &&
az resource list --resource-group $resourcegroupName

Clean up resources
When no longer needed, delete the resource group by using Azure CLI or Azure PowerShell:

CLI
PowerShell

echo "Enter the Resource Group name:" &&


read resourceGroupName &&
az group delete --name $resourceGroupName &&
echo "Press [ENTER] to continue ..."

Next steps
For a step-by-step tutorial that guides you through the process of creating a template, see:
Tutorial: Create and deploy your first ARM template
For other ways to deploy a SQL Server VM, see:
Azure portal
PowerShell
To learn more, see an overview of SQL Server on Azure VMs.
Business continuity and HADR for SQL Server on
Azure Virtual Machines
7/12/2022 • 10 minutes to read

APPLIES TO: SQL Server on Azure VM


Business continuity means continuing your business in the event of a disaster, planning for recovery, and
ensuring that your data is highly available. SQL Server on Azure Virtual Machines can help lower the cost of a
high-availability and disaster recovery (HADR) database solution.
Most SQL Server HADR solutions are supported on virtual machines (VMs), as both Azure-only and hybrid
solutions. In an Azure-only solution, the entire HADR system runs in Azure. In a hybrid configuration, part of the
solution runs in Azure and the other part runs on-premises in your organization. The flexibility of the Azure
environment enables you to move partially or completely to Azure to satisfy the budget and HADR
requirements of your SQL Server database systems.
This article compares and contrasts the business continuity solutions available for SQL Server on Azure VMs.

Overview
It's up to you to ensure that your database system has the HADR capabilities that the service-level agreement
(SLA) requires. The fact that Azure provides high-availability mechanisms, such as service healing for cloud
services and failure recovery detection for virtual machines, does not itself guarantee that you can meet the
SLA. Although these mechanisms help protect the high availability of the virtual machine, they don't protect the
availability of SQL Server running inside the VM.
It's possible for the SQL Server instance to fail while the VM is online and healthy. Even the high-availability
mechanisms provided by Azure allow for downtime of the VMs due to events like recovery from software or
hardware failures and operating system upgrades.
Geo-redundant storage (GRS) in Azure is implemented with a feature called geo-replication. GRS might not be
an adequate disaster recovery solution for your databases. Because geo-replication sends data asynchronously,
recent updates can be lost in a disaster. More information about geo-replication limitations is covered in the
Geo-replication support section.

NOTE
It's now possible to lift and shift both your failover cluster instance and availability group solution to SQL Server on Azure
VMs using Azure Migrate.

Deployment architectures
Azure supports these SQL Server technologies for business continuity:
Always On availability groups
Always On failover cluster instances (FCIs)
Log shipping
SQL Server backup and restore with Azure Blob storage
Database mirroring - Deprecated in SQL Server 2016
Azure Site Recovery
You can combine the technologies to implement a SQL Server solution that has both high-availability and
disaster recovery capabilities. Depending on the technology that you use, a hybrid deployment might require a
VPN tunnel with the Azure virtual network. The following sections show you some example deployment
architectures.

Azure only: High-availability solutions


You can have a high-availability solution for SQL Server at a database level with Always On availability groups.
You can also create a high-availability solution at an instance level with Always On failover cluster instances. For
additional protection, you can create redundancy at both levels by creating availability groups on failover cluster
instances.

Availability groups : Availability replicas running in Azure VMs in the same region provide high availability. You need to configure a domain controller VM, because Windows failover clustering requires an Active Directory domain.

For higher redundancy and availability, the Azure VMs can be deployed in different availability zones as documented in the availability group overview.

To get started, review the availability group tutorial.

Failover cluster instances : Failover cluster instances are supported on SQL Server VMs. Because the FCI feature requires shared storage, five solutions will work with SQL Server on Azure VMs:

- Using Azure shared disks for Windows Server 2019. Shared managed disks are an Azure product that allows attaching a managed disk to multiple virtual machines simultaneously. VMs in the cluster can read or write to your attached disk based on the reservation chosen by the clustered application through SCSI Persistent Reservations (SCSI PR). SCSI PR is an industry-standard storage solution that's used by applications running on a storage area network (SAN) on-premises. Enabling SCSI PR on a managed disk allows you to migrate these applications to Azure as is.
- Using Storage Spaces Direct (S2D) to provide a software-based virtual SAN for Windows Server 2016 and later.
- Using a Premium file share for Windows Server 2012 and later. Premium file shares are SSD backed, have consistently low latency, and are fully supported for use with FCI.
- Using storage supported by a partner solution for clustering. For a specific example that uses SIOS DataKeeper, see the blog entry Failover clustering and SIOS DataKeeper.
- Using shared block storage for a remote iSCSI target via Azure ExpressRoute. For example, NetApp Private Storage (NPS) exposes an iSCSI target via ExpressRoute with Equinix to Azure VMs.

For shared storage and data replication solutions from Microsoft partners, contact the vendor for any issues related to accessing data on failover.

To get started, prepare your VM for FCI.

Azure only: Disaster recovery solutions


You can have a disaster recovery solution for your SQL Server databases in Azure by using availability groups,
database mirroring, or backup and restore with storage blobs.

Availability groups : Availability replicas running across multiple datacenters in Azure VMs for disaster recovery. This cross-region solution helps protect against a complete site outage.

Within a region, all replicas should be within the same cloud service and the same virtual network. Because each region will have a separate virtual network, these solutions require network-to-network connectivity. For more information, see Configure a network-to-network connection by using the Azure portal. For detailed instructions, see Configure a SQL Server Always On availability group across different Azure regions.

Database mirroring : Principal and mirror servers running in different datacenters for disaster recovery. You must deploy them by using server certificates. SQL Server database mirroring is not supported for SQL Server 2008 or SQL Server 2008 R2 on an Azure VM.

Backup and restore with Azure Blob storage : Production databases backed up directly to Blob storage in a different datacenter for disaster recovery. For more information, see Backup and restore for SQL Server on Azure VMs.

Replicate and fail over SQL Server to Azure with Azure Site Recovery : Production SQL Server instance in one Azure datacenter replicated directly to Azure Storage in a different Azure datacenter for disaster recovery. For more information, see Protect SQL Server using SQL Server disaster recovery and Azure Site Recovery.

Hybrid IT: Disaster recovery solutions


You can have a disaster recovery solution for your SQL Server databases in a hybrid IT environment by using
availability groups, database mirroring, log shipping, and backup and restore with Azure Blob storage.

Availability groups : Some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery. The production site can be either on-premises or in an Azure datacenter.

Because all availability replicas must be in the same failover cluster, the cluster must span both networks (a multi-subnet failover cluster). This configuration requires a VPN connection between Azure and the on-premises network.

For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site. To get started, review the availability group tutorial.

Database mirroring : One partner running in an Azure VM and the other running on-premises for cross-site disaster recovery by using server certificates. Partners don't need to be in the same Active Directory domain, and no VPN connection is required.

Another database mirroring scenario involves one partner running in an Azure VM and the other running on-premises in the same Active Directory domain for cross-site disaster recovery. A VPN connection between the Azure virtual network and the on-premises network is required.

For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site. SQL Server database mirroring is not supported for SQL Server 2008 or SQL Server 2008 R2 on an Azure VM.

Log shipping : One server running in an Azure VM and the other running on-premises for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection between the Azure virtual network and the on-premises network is required.

For successful disaster recovery of your databases, you should also install a replica domain controller at the disaster recovery site.

Backup and restore with Azure Blob storage : On-premises production databases backed up directly to Azure Blob storage for disaster recovery. For more information, see Backup and restore for SQL Server on Azure Virtual Machines.

Replicate and fail over SQL Server to Azure with Azure Site Recovery : On-premises production SQL Server instance replicated directly to Azure Storage for disaster recovery. For more information, see Protect SQL Server using SQL Server disaster recovery and Azure Site Recovery.

Free DR replica in Azure


If you have Software Assurance, you can implement hybrid disaster recovery (DR) plans with SQL Server
without incurring additional licensing costs for the passive disaster recovery instance.
For example, you can have two free passive secondaries when all three replicas are hosted in Azure. Or you can configure a hybrid failover environment, with a licensed primary on-premises, one free passive for HA, one free passive for DR on-premises, and one free passive for DR in Azure.
For more information, see the product licensing terms.
To enable this benefit, go to your SQL Server virtual machine resource. Select Configure under Settings, and then choose the Disaster Recovery option under SQL Server License. Select the check box to confirm that this SQL Server VM will be used as a passive replica, and then select Apply to save your settings.

Important considerations for SQL Server HADR in Azure


Azure VMs, storage, and networking have different operational characteristics than an on-premises, non-
virtualized IT infrastructure. A successful implementation of an HADR SQL Server solution in Azure requires that
you understand these differences and design your solution to accommodate them.
High-availability nodes in an availability set
Availability sets in Azure enable you to place the high-availability nodes into separate fault domains and update
domains. The Azure platform assigns an update domain and a fault domain to each virtual machine in your
availability set. This configuration within a datacenter ensures that during either a planned or unplanned
maintenance event, at least one virtual machine is available and meets the Azure SLA of 99.95 percent.
To configure a high-availability setup, place all participating SQL Server virtual machines in the same availability
set to avoid application or data loss during a maintenance event. Only nodes in the same cloud service can
participate in the same availability set. For more information, see Manage the availability of virtual machines.
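A minimal sketch of creating such an availability set with the Az PowerShell module (hypothetical resource names) is shown below; the SQL Server VMs are then created referencing it, for example through the -AvailabilitySetId parameter of New-AzVMConfig.

# Hypothetical names; create the availability set before deploying the SQL Server VMs into it.
New-AzAvailabilitySet -ResourceGroupName "SQLHADR-RG" -Name "sqlAvailabilitySet" -Location "East US" `
    -Sku Aligned -PlatformFaultDomainCount 2 -PlatformUpdateDomainCount 5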
High-availability nodes in an availability zone
Availability zones are unique physical locations within an Azure region. Each zone consists of one or more
datacenters equipped with independent power, cooling, and networking. The physical separation of availability
zones within a region helps protect applications and data from datacenter failures by ensuring that at least one
virtual machine is available and meets the Azure SLA of 99.99 percent.
To configure high availability, place participating SQL Server virtual machines spread across availability zones in
the region. There will be additional charges for network-to-network transfers between availability zones. For
more information, see Availability zones.
Network latency in hybrid IT
Deploy your HADR solution with the assumption that there might be periods of high network latency between
your on-premises network and Azure. When you're deploying replicas to Azure, use asynchronous commit
instead of synchronous commit for the synchronization mode. When you're deploying database mirroring
servers both on-premises and in Azure, use the high-performance mode instead of the high-safety mode.
See the HADR configuration best practices for cluster and HADR settings that can help accommodate the cloud
environment.
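For example, an availability group replica deployed in Azure can be switched to asynchronous commit with a Transact-SQL statement like the following, shown here wrapped in Invoke-Sqlcmd from the SqlServer PowerShell module. The availability group, replica, and server names are placeholders; run the statement against the primary replica.

# Hypothetical availability group and replica names.
Invoke-Sqlcmd -ServerInstance "PrimaryServer" -Query @"
ALTER AVAILABILITY GROUP [MyAg]
MODIFY REPLICA ON 'AzureReplicaServer'
WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);
"@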
Geo-replication support
Geo-replication in Azure disks does not support the data file and log file of the same database to be stored on
separate disks. GRS replicates changes on each disk independently and asynchronously. This mechanism
guarantees the write order within a single disk on the geo-replicated copy, but not across geo-replicated copies
of multiple disks. If you configure a database to store its data file and its log file on separate disks, the recovered
disks after a disaster might contain a more up-to-date copy of the data file than the log file, which breaks the
write-ahead log in SQL Server and the ACID properties (atomicity, consistency, isolation, and durability) of
transactions.
If you don't have the option to disable geo-replication on the storage account, keep all data and log files for a
database on the same disk. If you must use more than one disk due to the size of the database, deploy one of
the disaster recovery solutions listed earlier to ensure data redundancy.

Next steps
Decide if an availability group or a failover cluster instance is the best business continuity solution for your
business. Then review the best practices for configuring your environment for high availability and disaster
recovery.
Backup and restore for SQL Server on Azure VMs
7/12/2022 • 7 minutes to read

APPLIES TO: SQL Server on Azure VM


This article provides guidance on the backup and restore options available for SQL Server running on a
Windows virtual machine (VM) in Azure. Azure Storage maintains three copies of every Azure VM disk to
guarantee protection against data loss or physical data corruption. Thus, unlike SQL Server on-premises, you
don't need to focus on hardware failures. However, you should still back up your SQL Server databases to
protect against application or user errors, such as inadvertent data insertions or deletions. In this situation, it is
important to be able to restore to a specific point in time.
The first part of this article provides an overview of the available backup and restore options. This is followed by
sections that provide more information on each strategy.

Backup and restore options


The following table provides information on various backup and restore options for SQL Server on Azure VMs:

Automated Backup (SQL Server 2014, 2016, 2017, 2019) : Automated Backup allows you to schedule regular backups for all databases on a SQL Server VM. Backups are stored in Azure storage for up to 30 days. Beginning with SQL Server 2016, Automated Backup v2 offers additional options such as configuring manual scheduling and the frequency of full and log backups.

Azure Backup for SQL VMs (SQL Server 2008, 2012, 2014, 2016, 2017, 2019) : Azure Backup provides an Enterprise class backup capability for SQL Server on Azure VMs. With this service, you can centrally manage backups for multiple servers and thousands of databases. Databases can be restored to a specific point in time in the portal. It offers a customizable retention policy that can maintain backups for years.

Manual backup (all SQL Server versions) : Depending on your version of SQL Server, there are various techniques to manually back up and restore SQL Server on an Azure VM. In this scenario, you are responsible for how your databases are backed up and for the storage location and management of these backups.

The following sections describe each option in more detail. The final section of this article provides a summary
in the form of a feature matrix.
Automated Backup
Automated Backup provides an automatic backup service for SQL Server Standard and Enterprise editions
running on a Windows VM in Azure. This service is provided by the SQL Server IaaS Agent Extension, which is
automatically installed on SQL Server Windows virtual machine images in the Azure portal.
All databases are backed up to an Azure storage account that you configure. Backups can be encrypted and
retained for up to 30 days.
SQL Server 2016 and higher VMs offer more customization options with Automated Backup v2. These
improvements include:
System database backups
Manual backup schedule and time window
Full and log file backup frequency
To restore a database, you must locate the required backup file(s) in the storage account and perform a restore
on your SQL VM using SQL Server Management Studio (SSMS) or Transact-SQL commands.
For more information on how to configure Automated Backup for SQL VMs, see one of the following articles:
SQL Server 2016/2017 : Automated Backup v2 for Azure Virtual Machines
SQL Server 2014 : Automated Backup for SQL Server 2014 Virtual Machines

Azure Backup for SQL VMs


Azure Backup provides an Enterprise class backup capability for SQL Server on Azure VMs. All backups are
stored and managed in a Recovery Services vault. There are several advantages that this solution provides,
especially for Enterprises:
Zero-infrastructure backup : You do not have to manage backup servers or storage locations.
Scale : Protect many SQL VMs and thousands of databases.
Pay-As-You-Go : This capability is a separate service provided by Azure Backup, but as with all Azure
services, you only pay for what you use.
Central management and monitoring : Centrally manage all of your backups, including other workloads
that Azure Backup supports, from a single dashboard in Azure.
Policy driven backup and retention : Create standard backup policies for regular backups. Establish
retention policies to maintain backups for years.
Support for SQL Server Always On : Detect and protect a SQL Server Always On configuration and honor the availability group backup preference.
15-minute Recovery Point Objective (RPO) : Configure SQL transaction log backups up to every 15 minutes.
Point in time restore : Use the portal to recover databases to a specific point in time without having to manually restore multiple full, differential, and log backups.
Consolidated email alerts for failures : Configure consolidated email notifications for any failures.
Azure role-based access control : Determine who can manage backup and restore operations through the
portal.
This Azure Backup solution for SQL VMs is generally available. For more information, see Back up SQL Server
database to Azure.

Manual backup
If you want to manually manage backup and restore operations on your SQL VMs, there are several options
depending on the version of SQL Server you are using. For an overview of backup and restore, see one of the
following articles based on your version of SQL Server:
Backup and restore for SQL Server 2016 and later
Backup and restore for SQL Server 2014
Backup and restore for SQL Server 2012
Backup and restore for SQL Server 2008 R2
Backup and restore for SQL Server 2008
The following sections describe several manual backup and restore options in more detail.
Backup to attached disks
For SQL Server on Azure VMs, you can use native backup and restore techniques using attached disks on the
VM for the destination of the backup files. However, there is a limit to the number of disks you can attach to an
Azure virtual machine, based on the size of the virtual machine. There is also the overhead of disk management
to consider.
For an example of how to manually create a full database backup using SQL Server Management Studio (SSMS)
or Transact-SQL, see Create a Full Database Backup.
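For instance, a minimal Transact-SQL backup to an attached data disk, run here through the SqlServer PowerShell module with hypothetical database and path names, looks like this:

# Hypothetical database name and backup folder on an attached data disk.
Invoke-Sqlcmd -ServerInstance "localhost" -Query "BACKUP DATABASE [AdventureWorks] TO DISK = N'F:\Backups\AdventureWorks.bak' WITH COMPRESSION, STATS = 10;"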
Backup to URL
Beginning with SQL Server 2012 SP1 CU2, you can back up and restore directly to Microsoft Azure Blob storage,
which is also known as backup to URL. SQL Server 2016 also introduced the following enhancements for this
feature:

Striping : When backing up to Microsoft Azure Blob storage, SQL Server 2016 supports backing up to multiple blobs to enable backing up large databases, up to a maximum of 12.8 TB.

Snapshot Backup : Through the use of Azure snapshots, SQL Server File-Snapshot Backup provides nearly instantaneous backups and rapid restores for database files stored using the Azure Blob storage service. This capability enables you to simplify your backup and restore policies. File-snapshot backup also supports point in time restore. For more information, see Snapshot Backups for Database Files in Azure.

For more information, see one of the following articles based on your version of SQL Server:
SQL Server 2016/2017 : SQL Server Backup to URL
SQL Server 2014 : SQL Server 2014 Backup to URL
SQL Server 2012 : SQL Server 2012 Backup to URL
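As a minimal sketch of a backup to URL (hypothetical storage account, container, and database names; it assumes a SQL Server credential for the container, such as one based on a shared access signature, already exists), issued here through the SqlServer PowerShell module:

# Hypothetical URL; a matching SQL Server credential for the container must already exist.
$backupUrl = "https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks.bak"

Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
BACKUP DATABASE [AdventureWorks]
TO URL = '$backupUrl'
WITH COMPRESSION, STATS = 10;
"@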
Managed Backup
Beginning with SQL Server 2014, Managed Backup automates the creation of backups to Azure storage. Behind
the scenes, Managed Backup makes use of the Backup to URL feature described in the previous section of this
article. Managed Backup is also the underlying feature that supports the SQL Server VM Automated Backup
service.
Beginning in SQL Server 2016, Managed Backup got additional options for scheduling, system database backup,
and full and log backup frequency.
For more information, see one of the following articles based on your version of SQL Server:
Managed Backup to Microsoft Azure for SQL Server 2016 and later
Managed Backup to Microsoft Azure for SQL Server 2014

Decision matrix
The decision matrix compares the capabilities of each backup and restore option for SQL Server virtual machines in Azure. Automated Backup, Azure Backup for SQL, and manual backup each provide a different subset of the following capabilities:

Requires an additional Azure service
Configure backup policy in the Azure portal
Restore databases in the Azure portal
Manage multiple servers in one dashboard
Point-in-time restore
15-minute Recovery Point Objective (RPO)
Short-term backup retention policy (days)
Long-term backup retention policy (months, years)
Built-in support for SQL Server Always On
Backup to Azure Storage account(s): automatic for Automated Backup and Azure Backup for SQL, customer managed for manual backup
Management of storage and backup files
Backup to attached disks on the VM
Central customizable backup reports
Consolidated email alerts for failures
Customized monitoring based on Azure Monitor logs
Monitor backup jobs with SSMS or Transact-SQL scripts
Restore databases with SSMS or Transact-SQL scripts

Next steps
If you are planning your deployment of SQL Server on Azure VM, you can find provisioning guidance in the
following guide: How to provision a Windows SQL Server virtual machine in the Azure portal.
Although backup and restore can be used to migrate your data, there are potentially easier data migration paths
to SQL Server on VM. For a full discussion of migration options and recommendations, see Migrating a
Database to SQL Server on Azure VM.
Use Azure Storage for SQL Server backup and
restore
7/12/2022 • 5 minutes to read

APPLIES TO: SQL Server on Azure VM


Starting with SQL Server 2012 SP1 CU2, you can write SQL Server backups directly to Azure Blob storage. Use this functionality to back up to and restore from Azure Blob storage. Backup to the cloud offers benefits of availability, limitless geo-replicated off-site storage, and ease of migration of data to and from the cloud. You can issue BACKUP or RESTORE statements by using Transact-SQL or SMO.

Overview
SQL Server 2016 introduces new capabilities; you can use file-snapshot backup to perform nearly instantaneous
backups and incredibly quick restores.
This topic explains why you might choose to use Azure Storage for SQL Server backups and then describes the
components involved. You can use the resources provided at the end of the article to access walk-throughs and
additional information to start using this service with your SQL Server backups.

Benefits of using Azure Blob storage for SQL Server backups


There are several challenges that you face when backing up SQL Server. These challenges include storage
management, risk of storage failure, access to off-site storage, and hardware configuration. Many of these
challenges are addressed by using Azure Blob storage for SQL Server backups. Consider the following benefits:
Ease of use : Storing your backups in Azure blobs can be a convenient, flexible, and easy to access off-site
option. Creating off-site storage for your SQL Server backups can be as easy as modifying your existing
scripts/jobs to use the BACKUP TO URL syntax. Off-site storage should typically be far enough from the
production database location to prevent a single disaster that might impact both the off-site and production
database locations. By choosing to geo-replicate your Azure blobs, you have an extra layer of protection in
the event of a disaster that could affect the whole region.
Backup archive : Azure Blob storage offers a better alternative to the often used tape option to archive
backups. Tape storage might require physical transportation to an off-site facility and measures to protect the
media. Storing your backups in Azure Blob storage provides an instant, highly available, and a durable
archiving option.
Managed hardware : There is no overhead of hardware management with Azure services. Azure services
manage the hardware and provide geo-replication for redundancy and protection against hardware failures.
Unlimited storage : By enabling a direct backup to Azure blobs, you have access to virtually unlimited
storage. Alternatively, backing up to an Azure virtual machine disk has limits based on machine size. There is
a limit to the number of disks you can attach to an Azure virtual machine for backups. This limit is 16 disks
for an extra large instance and fewer for smaller instances.
Backup availability : Backups stored in Azure blobs are available from anywhere and at any time and can
easily be accessed for restores to a SQL Server instance, without the need for database attach/detach or
downloading and attaching the VHD.
Cost : Pay only for the service that is used. Can be cost-effective as an off-site and backup archive option. See
the Azure pricing calculator, and the Azure Pricing article for more information.
Storage snapshots : When database files are stored in an Azure blob and you are using SQL Server 2016,
you can use file-snapshot backup to perform nearly instantaneous backups and incredibly quick restores.
For more details, see SQL Server Backup and Restore with Azure Blob storage.
The following two sections introduce Azure Blob storage, including the required SQL Server components. It is
important to understand the components and their interaction to successfully use backup and restore from
Azure Blob storage.
Azure Blob storage components
The following Azure components are used when backing up to Azure Blob storage.

Storage account : The storage account is the starting point for all storage services. To access Azure Blob storage, first create an Azure Storage account. SQL Server is agnostic to the type of storage redundancy used. Backup to page blobs and block blobs is supported for every storage redundancy option (LRS, ZRS, GRS, RA-GRS, RA-GZRS, and so on). For more information about Azure Blob storage, see How to use Azure Blob storage.

Container : A container provides a grouping of a set of blobs and can store an unlimited number of blobs. To write a SQL Server backup to Azure Blob storage, you must have at least the root container created.

Blob : A file of any type and size. Blobs are addressable using the following URL format: https://<storageaccount>.blob.core.windows.net/<container>/<blob>. For more information about page blobs, see Understanding Block and Page Blobs.

SQL Server components


The following SQL Server components are used when backing up to Azure Blob storage.

URL : A URL specifies a Uniform Resource Identifier (URI) to a unique backup file. The URL provides the location and name of the SQL Server backup file. The URL must point to an actual blob, not just a container. If the blob does not exist, Azure creates it. If an existing blob is specified, the backup command fails, unless the WITH FORMAT option is used.
