
Contents

Anomaly Detector Documentation


Overview
What is Anomaly Detector (multivariate)?
What is Anomaly Detector (univariate)?
Pricing
Quickstarts
Multivariate client libraries
Univariate client libraries
Samples
Interactive demo (univariate)
Jupyter Notebook (univariate)
Jupyter Notebook (multivariate)
Azure samples
How-to guides
Identify anomalies in your data
Deploy Anomaly Detector
Deploy to IoT Edge
Containers
Install and run containers
Configure containers
Use container instances
All Cognitive Services containers documentation
Enterprise readiness
Set up Virtual Networks
Use Azure AD authentication
Concepts
Anomaly Detector API best practices (multivariate)
Anomaly Detector API best practices (univariate)
Predictive maintenance architecture (multivariate)
Troubleshooting multivariate
Spectral Residual (SR) and Convolutional Neural Net (CNN) anomaly detection
Multivariate time-series anomaly detection via graph attention network
Tutorials
Visualize anomalies as a batch using Power BI
Reference
Anomaly Detector REST API
Anomaly Detector REST API preview (multivariate)
.NET
Java
Python
Node.js
Resources
Enterprise readiness
Region support
Compliance and certification
Support and help options
Microsoft Learn modules
User-generated content
Join the Anomaly Detector Advisors group on Microsoft Teams
Reference solution architecture
Azure updates
Technical blogs
Introducing Azure Anomaly Detector API
Overview of SR-CNN algorithm
Multivariate time series Anomaly Detection (public preview)

The first release of the Azure Cognitive Services Anomaly Detector let you build metrics monitoring
solutions using the easy-to-use univariate time series Anomaly Detector APIs. By analyzing each time series
individually, Anomaly Detector univariate provides simplicity and scalability.
The new multivariate anomaly detection APIs enable developers to easily integrate advanced AI
for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled
data. Dependencies and inter-correlations between up to 300 different signals are automatically accounted for as
key factors. This new capability helps you to proactively protect your complex systems, such as software
applications, servers, factory machines, spacecraft, or even your business, from failures.
Imagine 20 sensors from an auto engine generating 20 different signals, such as vibration, temperature, and fuel
pressure. The readings of those signals individually may not tell you much about system-level issues, but
together they can represent the health of the engine. When the interaction of those signals deviates outside the
usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The
underlying AI models are trained and customized on your data, so that they understand the unique needs of
your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time
series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for
complex enterprise software, or business intelligence tools.

When to use multivariate versus univariate


Use the univariate anomaly detection APIs if your goal is to detect anomalies out of a normal pattern on each
individual time series, purely based on its own historical data. Examples: you want to detect daily revenue
anomalies based on revenue data itself, or you want to detect a CPU spike purely based on CPU data.
POST /anomalydetector/v1.0/timeseries/last/detect
POST /anomalydetector/v1.0/timeseries/batch/detect
POST /anomalydetector/v1.0/timeseries/changepoint/detect

Use the multivariate anomaly detection APIs below if your goal is to detect system-level anomalies from a group of
time series data, particularly when no individual time series tells you much and you have to look at all
signals (a group of time series) holistically to determine a system-level issue. Example: you have an expensive
physical asset, such as an aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of
different types of sensors. You would have to look at all those time series signals from those sensors to decide
whether there is a system-level issue.
POST /anomalydetector/v1.1-preview/multivariate/models
GET /anomalydetector/v1.1-preview/multivariate/models[?$skip][&$top]
GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}
POST /anomalydetector/v1.1-preview/multivariate/models/{modelId}/detect
GET /anomalydetector/v1.1-preview/multivariate/results/{resultId}
DELETE /anomalydetector/v1.1-preview/multivariate/models/{modelId}
GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}/export

Region support
The public preview of Anomaly Detector multivariate is currently available in three regions: West US 2, East US 2,
and West Europe.

Algorithms
Multivariate time series Anomaly Detection via Graph Attention Network

Join the Anomaly Detector community


Join the Anomaly Detector Advisors group on Microsoft Teams

Next steps
Quickstarts.
Best Practices: This article is about recommended patterns to use with the multivariate APIs.
What is the Anomaly Detector API?

IMPORTANT
Transport Layer Security (TLS) 1.2 is now enforced for all HTTP requests to this service. For more information, see Azure
Cognitive Services security.

The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without
having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically identifying
and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your
time series data, the API determines boundaries for anomaly detection, expected values, and which data points
are anomalies.

Using the Anomaly Detector doesn't require any prior experience in machine learning, and the RESTful API
enables you to easily integrate the service into your applications and processes.
This documentation contains the following types of articles:
The quickstarts are step-by-step instructions that let you make calls to the service and get results in a short
period of time.
The how-to guides contain instructions for using the service in more specific or customized ways.
The conceptual articles provide in-depth explanations of the service's functionality and features.
The tutorials are longer guides that show you how to use this service as a component in broader business
solutions.

Features
With the Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they
occur in real time.

Anomaly detection in real-time. Detect anomalies in your streaming data by using previously seen data points to
determine if your latest one is an anomaly. This operation generates a model using the data points you send, and
determines if the target point is an anomaly. By calling the API with each new data point you generate, you can
monitor your data as it's created.

Detect anomalies throughout your data set as a batch. Use your time series to detect any anomalies that might
exist throughout your data. This operation generates a model using your entire time series data, with each point
analyzed with the same model.

Detect change points throughout your data set as a batch. Use your time series to detect any trend change points
that exist in your data. This operation generates a model using your entire time series data, with each point
analyzed with the same model.

Get additional information about your data. Get useful details about your data and any observed anomalies,
including expected values, anomaly boundaries, and positions.

Adjust anomaly detection boundaries. The Anomaly Detector API automatically creates boundaries for anomaly
detection. Adjust these boundaries to increase or decrease the API's sensitivity to data anomalies, and better fit
your data.

Demo
Check out this interactive demo to understand how Anomaly Detector works. To run the demo, you need to
create an Anomaly Detector resource and get the API key and endpoint.

Notebook
To learn how to call the Anomaly Detector API, try this Notebook. This Jupyter Notebook shows you how to send
an API request and visualize the result.
To run the Notebook, complete the following steps:
1. Get a valid Anomaly Detector API subscription key and an API endpoint. The section below has instructions
for signing up.
2. Sign in, and select Clone, in the upper right corner.
3. Uncheck the "public" option in the dialog box before completing the clone operation, otherwise your
notebook, including any subscription keys, will be public.
4. Select Run on free compute
5. Select one of the notebooks.
6. Add your valid Anomaly Detector API subscription key to the subscription_key variable.
7. Change the endpoint variable to your endpoint. For example:
https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect
8. On the top menu bar, select Cell, then Run All.

Workflow
The Anomaly Detector API is a RESTful web service, making it easy to call from any programming language that
can make HTTP requests and parse JSON.

NOTE
For best results when using the Anomaly Detector API, your JSON-formatted time series data should include:
data points separated by the same interval, with no more than 10% of the expected number of points missing.
at least 12 data points if your data doesn't have a clear seasonal pattern.
at least 4 pattern occurrences if your data does have a clear seasonal pattern.

You must have a Cognitive Services API account with access to the Anomaly Detector API. You can get your
subscription key from the Azure portal after creating your account.
After signing up:
1. Take your time series data and convert it into a valid JSON format. Use best practices when preparing your
data to get the best results.
2. Send a request to the Anomaly Detector API with your data.
3. Process the API response by parsing the returned JSON message.
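As a minimal sketch of those three steps (an illustration only, assuming .NET Core 3.0 or later; the custom-subdomain placeholder, key placeholder, and sample values, including the final spike, are made up), the following C# console program builds a JSON payload with 12 evenly spaced daily points and posts it to the univariate last-point detection endpoint:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class LastDetectSketch
{
    static async Task Main()
    {
        // Placeholders: substitute your own resource values.
        string endpoint = "https://<your-custom-subdomain>.api.cognitive.microsoft.com";
        string key = "YOUR_API_KEY";

        // Step 1: convert your time series to valid JSON. Here we fabricate 12 evenly
        // spaced daily points, the minimum for data without a clear seasonal pattern.
        var start = new DateTime(2019, 4, 1, 0, 0, 0, DateTimeKind.Utc);
        double[] values = { 5, 6, 5.5, 6.1, 5.8, 6.0, 5.9, 6.2, 5.7, 6.1, 5.9, 30 };
        var series = new List<object>();
        for (int i = 0; i < values.Length; i++)
            series.Add(new { timestamp = start.AddDays(i).ToString("yyyy-MM-ddTHH:mm:ssZ"), value = values[i] });
        string body = JsonSerializer.Serialize(new { granularity = "daily", series });

        // Step 2: send the request to the last-point detection endpoint.
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
        HttpResponseMessage response = await http.PostAsync(
            endpoint + "/anomalydetector/v1.0/timeseries/last/detect",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // Step 3: parse the returned JSON, which reports whether the latest point
        // is an anomaly, plus the expected value and boundaries.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}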

Algorithms
See the following technical blogs for information about the algorithms used:
Introducing Azure Anomaly Detector API
Overview of SR-CNN algorithm in Azure Anomaly Detector
You can read the paper Time-Series Anomaly Detection Service at Microsoft (accepted by KDD 2019) to learn
more about the SR-CNN algorithms developed by Microsoft.

Service availability and redundancy


Is the Anomaly Detector service zone resilient?
Yes. The Anomaly Detector service is zone-resilient by default.
How do I configure the Anomaly Detector service to be zone-resilient?
No customer configuration is necessary to enable zone-resiliency. Zone-resiliency for Anomaly Detector
resources is available by default and managed by the service itself.

Deploy on premises using Docker containers


Use Anomaly Detector containers to deploy API features on-premises. Docker containers enable you to bring the
service closer to your data for compliance, security, or other operational reasons.

Join the Anomaly Detector community


Join the Anomaly Detector Advisors group on Microsoft Teams
See selected user generated content

Next steps
Quickstart: Detect anomalies in your time series data using the Anomaly Detector
The Anomaly Detector API online demo
The Anomaly Detector REST API reference
Quickstart: Use the Anomaly Detector multivariate client library

Get started with the Anomaly Detector multivariate client library for C#. Follow these steps to install the package
and start using the algorithms provided by the service. The new multivariate anomaly detection APIs enable
developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the need
for machine learning knowledge or labeled data. Dependencies and inter-correlations between different signals
are automatically accounted for as key factors. This helps you to proactively protect your complex systems from
failures.
Use the Anomaly Detector multivariate client library for C# to:
Detect system-level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (NuGet)

Prerequisites
Azure subscription - Create one for free
The current version of .NET Core
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and select the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. Paste your key and endpoint into the code below later in the quickstart. You can
use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.

Setting up
Create a new .NET Core application
In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console
app with the name anomaly-detector-quickstart-multivariate . This command creates a simple "Hello World"
project with a single C# source file: Program.cs.

dotnet new console -n anomaly-detector-quickstart-multivariate

Change your directory to the newly created app folder. You can build the application with:

dotnet build

The build output should contain no warnings or errors.


...
Build succeeded.
0 Warning(s)
0 Error(s)
...

Install the client library


Within the application directory, install the Anomaly Detector client library for .NET with the following
command:

dotnet add package Azure.AI.AnomalyDetector --version 3.0.0-preview.3

From the project directory, open the program.cs file and add the following using directives:

using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Azure;
using Azure.AI.AnomalyDetector;
using Azure.AI.AnomalyDetector.Models;

In the application's main() method, create variables for your resource's Azure endpoint, your API key, and a
custom datasource.

string endpoint = "YOUR_ENDPOINT";
string apiKey = "YOUR_API_KEY";
string datasource = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS";

To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of
multiple time series that meet the following requirements:
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:

TIMESTAMP               VALUE
2019-04-01T00:00:00Z    5
2019-04-01T00:01:00Z    3.6
2019-04-01T00:02:00Z    4
...                     ...

Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.
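As a minimal sketch of preparing such a zip in C# (the two variables, temperature and humidity, and their values are hypothetical; a real training set needs far more points per variable):

using System.IO;
using System.IO.Compression;

class PrepareTrainingData
{
    static void Main()
    {
        // One CSV per variable; the file name (minus .csv) is the variable name.
        Directory.CreateDirectory("variables");
        File.WriteAllText("variables/temperature.csv",
            "timestamp,value\n2019-04-01T00:00:00Z,5\n2019-04-01T00:01:00Z,3.6\n2019-04-01T00:02:00Z,4\n");
        File.WriteAllText("variables/humidity.csv",
            "timestamp,value\n2019-04-01T00:00:00Z,0.41\n2019-04-01T00:01:00Z,0.42\n2019-04-01T00:02:00Z,0.43\n");

        // Zip the CSVs with no subfolders, then upload the zip to Azure Blob storage
        // and generate a SAS URL to use as the training datasource.
        if (File.Exists("training-data.zip")) File.Delete("training-data.zip");
        ZipFile.CreateFromDirectory("variables", "training-data.zip");
    }
}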

Code examples
These code snippets show you how to do the following with the Anomaly Detector multivariate client library for
.NET:
Authenticate the client
Train the model
Detect anomalies
Export model
Delete model

Authenticate the client


Instantiate an Anomaly Detector client with your endpoint and key.

var endpointUri = new Uri(endpoint);
var credential = new AzureKeyCredential(apiKey);
AnomalyDetectorClient client = new AnomalyDetectorClient(endpointUri, credential);

Train the model


Create a new private async task as below to handle training your model. You will use TrainMultivariateModel to
train the model and GetMultivariateModelAsync to check when training is complete.
private async Task<Guid> trainAsync(AnomalyDetectorClient client, string datasource, DateTimeOffset start_time, DateTimeOffset end_time, int max_tryout = 500)
{
    try
    {
        Console.WriteLine("Training new model...");

        int model_number = await getModelNumberAsync(client, false).ConfigureAwait(false);
        Console.WriteLine(String.Format("{0} available models before training.", model_number));

        ModelInfo data_feed = new ModelInfo(datasource, start_time, end_time);

        Response response_header = client.TrainMultivariateModel(data_feed);
        response_header.Headers.TryGetValue("Location", out string trained_model_id_path);
        Guid trained_model_id = Guid.Parse(trained_model_id_path.Split('/').LastOrDefault());
        Console.WriteLine(trained_model_id);

        // Wait until the model is ready. It usually takes several minutes.
        Response<Model> get_response = await client.GetMultivariateModelAsync(trained_model_id).ConfigureAwait(false);
        ModelStatus? model_status = null;
        int tryout_count = 0;
        while (tryout_count < max_tryout && model_status != ModelStatus.Ready)
        {
            System.Threading.Thread.Sleep(10000);
            get_response = await client.GetMultivariateModelAsync(trained_model_id).ConfigureAwait(false);
            ModelInfo model_info = get_response.Value.ModelInfo;
            Console.WriteLine(String.Format("model_id: {0}, createdTime: {1}, lastUpdateTime: {2}, status: {3}.",
                get_response.Value.ModelId, get_response.Value.CreatedTime, get_response.Value.LastUpdatedTime, model_info.Status));

            if (model_info != null)
            {
                model_status = model_info.Status;
            }
            tryout_count += 1;
        }
        get_response = await client.GetMultivariateModelAsync(trained_model_id).ConfigureAwait(false);

        if (model_status != ModelStatus.Ready)
        {
            Console.WriteLine(String.Format("Request timeout after {0} tryouts", max_tryout));
        }

        model_number = await getModelNumberAsync(client).ConfigureAwait(false);
        Console.WriteLine(String.Format("{0} available models after training.", model_number));
        return trained_model_id;
    }
    catch (Exception e)
    {
        Console.WriteLine(String.Format("Train error. {0}", e.Message));
        throw new Exception(e.Message);
    }
}

Detect anomalies
To detect anomalies using your newly trained model, create a private async Task named detectAsync . You will
create a new DetectionRequest and pass that as a parameter to DetectAnomalyAsync .
private async Task<DetectionResult> detectAsync(AnomalyDetectorClient client, string datasource, Guid model_id, DateTimeOffset start_time, DateTimeOffset end_time, int max_tryout = 500)
{
    try
    {
        Console.WriteLine("Start detect...");
        Response<Model> get_response = await client.GetMultivariateModelAsync(model_id).ConfigureAwait(false);

        DetectionRequest detectionRequest = new DetectionRequest(datasource, start_time, end_time);

        Response result_response = await client.DetectAnomalyAsync(model_id, detectionRequest).ConfigureAwait(false);
        var ok = result_response.Headers.TryGetValue("Location", out string result_id_path);
        Guid result_id = Guid.Parse(result_id_path.Split('/').LastOrDefault());

        // Get the detection result, polling until it's ready.
        Response<DetectionResult> result = await client.GetDetectionResultAsync(result_id).ConfigureAwait(false);
        int tryout_count = 0;
        while (result.Value.Summary.Status != DetectionStatus.Ready && tryout_count < max_tryout)
        {
            System.Threading.Thread.Sleep(2000);
            result = await client.GetDetectionResultAsync(result_id).ConfigureAwait(false);
            tryout_count += 1;
        }

        if (result.Value.Summary.Status != DetectionStatus.Ready)
        {
            Console.WriteLine(String.Format("Request timeout after {0} tryouts", max_tryout));
            return null;
        }

        return result.Value;
    }
    catch (Exception e)
    {
        Console.WriteLine(String.Format("Detection error. {0}", e.Message));
        throw new Exception(e.Message);
    }
}

Export model
To export the model you trained previously, create a private async task named exportAsync. You will use
ExportModelAsync and pass the model ID of the model you wish to export.
private async Task exportAsync(AnomalyDetectorClient client, Guid model_id, string model_path = "model.zip")
{
try
{
Stream model = await client.ExportModelAsync(model_id).ConfigureAwait(false);
if (model != null)
{
var fileStream = File.Create(model_path);
model.Seek(0, SeekOrigin.Begin);
model.CopyTo(fileStream);
fileStream.Close();
}
}
catch (Exception e)
{
Console.WriteLine(String.Format("Export error. {0}", e.Message));
throw new Exception(e.Message);
}
}

Delete model
To delete a model that you have created previously, use DeleteMultivariateModelAsync and pass the model ID of
the model you wish to delete. You can use getModelNumberAsync to list your models and their IDs:

private async Task deleteAsync(AnomalyDetectorClient client, Guid model_id)


{
await client.DeleteMultivariateModelAsync(model_id).ConfigureAwait(false);
int model_number = await getModelNumberAsync(client).ConfigureAwait(false);
Console.WriteLine(String.Format("{0} available models after deletion.", model_number));
}
private async Task<int> getModelNumberAsync(AnomalyDetectorClient client, bool delete = false)
{
    int count = 0;
    AsyncPageable<ModelSnapshot> model_list = client.ListMultivariateModelAsync(0, 10000);
    await foreach (ModelSnapshot x in model_list)
    {
        count += 1;
        Console.WriteLine(String.Format("model_id: {0}, createdTime: {1}, lastUpdateTime: {2}.",
            x.ModelId, x.CreatedTime, x.LastUpdatedTime));
        if (delete && count < 4)
        {
            await client.DeleteMultivariateModelAsync(x.ModelId).ConfigureAwait(false);
        }
    }
    return count;
}

Main method
Now that you have all the component parts, you need to add additional code to your main method to call your
newly created tasks.
{
//read endpoint and apiKey
string endpoint = "YOUR_ENDPOINT";
string apiKey = "YOUR_API_KEY";
string datasource = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS";
Console.WriteLine(endpoint);
var endpointUri = new Uri(endpoint);
var credential = new AzureKeyCredential(apiKey);

//create client
AnomalyDetectorClient client = new AnomalyDetectorClient(endpointUri, credential);

// train
TimeSpan offset = new TimeSpan(0);
DateTimeOffset start_time = new DateTimeOffset(2021, 1, 1, 0, 0, 0, offset);
DateTimeOffset end_time = new DateTimeOffset(2021, 1, 2, 12, 0, 0, offset);
Guid? model_id_raw = null;
try
{
model_id_raw = await trainAsync(client, datasource, start_time, end_time).ConfigureAwait(false);
Console.WriteLine(model_id_raw);
Guid model_id = model_id_raw.GetValueOrDefault();

// detect
start_time = end_time;
end_time = new DateTimeOffset(2021, 1, 3, 0, 0, 0, offset);
DetectionResult result = await detectAsync(client, datasource, model_id, start_time, end_time).ConfigureAwait(false);
if (result != null)
{
Console.WriteLine(String.Format("Result ID: {0}", result.ResultId));
Console.WriteLine(String.Format("Result summary: {0}", result.Summary));
Console.WriteLine(String.Format("Result length: {0}", result.Results.Count));
}

// export model
await exportAsync(client, model_id).ConfigureAwait(false);

// delete
await deleteAsync(client, model_id).ConfigureAwait(false);
}
catch (Exception e)
{
String msg = String.Format("Multivariate error. {0}", e.Message);
if (model_id_raw != null)
{
await deleteAsync(client, model_id_raw.GetValueOrDefault()).ConfigureAwait(false);
}
Console.WriteLine(msg);
throw new Exception(msg);
}
}

Run the application


Run the application with the dotnet run command from your application directory.

dotnet run

Next steps
Anomaly Detector multivariate best practices
Get started with the Anomaly Detector multivariate client library for JavaScript. Follow these steps to install the
package and start using the algorithms provided by the service. The new multivariate anomaly detection APIs
enable developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the
need for machine learning knowledge or labeled data. Dependencies and inter-correlations between different
signals are automatically accounted for as key factors. This helps you to proactively protect your complex systems
from failures.
Use the Anomaly Detector multivariate client library for JavaScript to:
Detect system-level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (npm) | Sample code

Prerequisites
Azure subscription - Create one for free
The current version of Node.js
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.

Setting up
Create a new Node.js application
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

mkdir myapp && cd myapp

Run the npm init command to create a node application with a package.json file.

npm init

Create a file named index.js and import the following libraries:

'use strict'

const fs = require('fs');
const parse = require("csv-parse/lib/sync");
const { AnomalyDetectorClient } = require('@azure/ai-anomaly-detector');
const { AzureKeyCredential } = require('@azure/core-auth');
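The status-polling loops later in this quickstart call a sleep helper that these imports don't provide. A minimal sketch you can add after the imports:

// Promise-based sleep used by the status-polling loops below.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));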

Create variables for your resource's Azure endpoint and key. Create another variable for the example data file.
const apiKey = "YOUR_API_KEY";
const endpoint = "YOUR_ENDPOINT";
const data_source = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS";

To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of
multiple time series that meet the following requirements:
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:

TIMESTAMP               VALUE
2019-04-01T00:00:00Z    5
2019-04-01T00:01:00Z    3.6
2019-04-01T00:02:00Z    4
...                     ...

Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.
Install the client library
Install the @azure/ai-anomaly-detector NPM package. The csv-parse library is also used in
this quickstart:

npm install @azure/ai-anomaly-detector csv-parse

Your app's package.json file will be updated with the dependencies.

Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Node.js:
Authenticate the client
Train a model
Detect anomalies
Export model
Delete model

Authenticate the client


Instantiate an AnomalyDetectorClient object with your endpoint and credentials.

const client = new AnomalyDetectorClient(endpoint, new AzureKeyCredential(apiKey));


Train a model
Construct a model result
First we need to construct a model request. Make sure that the start and end times align with your data source.

const Modelrequest = {
source: data_source,
startTime: new Date(2021,0,1,0,0,0),
endTime: new Date(2021,0,2,12,0,0),
slidingWindow:200
};

Train a new model


You will need to pass your model request to the Anomaly Detector client trainMultivariateModel method.

console.log("Training a new model...")


const train_response = await client.trainMultivariateModel(Modelrequest)
const model_id = train_response.location?.split("/").pop() ?? ""
console.log("New model ID: " + model_id)

To check if training of your model is complete you can track the model's status:

let model_response = await client.getMultivariateModel(model_id)
let model_status = model_response.modelInfo?.status

while (model_status != 'READY'){
    await sleep(10000).then(() => {});
    model_response = await client.getMultivariateModel(model_id)
    model_status = model_response.modelInfo?.status
}

console.log("TRAINING FINISHED.")

Detect anomalies
Use the detectAnomaly and getDetectionResult functions to determine whether there are any anomalies within your
data source.

console.log("Start detecting...")
const detect_request = {
source: data_source,
startTime: new Date(2021,0,2,12,0,0),
endTime: new Date(2021,0,3,0,0,0)
};
const result_header = await client.detectAnomaly(model_id, detect_request)
const result_id = result_header.location?.split("/").pop() ?? ""
let result = await client.getDetectionResult(result_id)
let result_status = result.summary.status

while (result_status != 'READY'){
    await sleep(2000).then(() => {});
    result = await client.getDetectionResult(result_id)
    result_status = result.summary.status
}

Export model
To export your trained model use the exportModel function.

const export_result = await client.exportModel(model_id)


const model_path = "model.zip"
const destination = fs.createWriteStream(model_path)
export_result.readableStreamBody?.pipe(destination)
console.log("New model has been exported to "+model_path+".")

Delete model
To delete an existing model that is available to the current resource, use the deleteMultivariateModel function.

await client.deleteMultivariateModel(model_id)
console.log("Model has been deleted.")

Run the application


Before running the application, it can be helpful to check your code against the full sample code.
Run the application with the node command on your quickstart file.

node index.js

Next steps
Anomaly Detector multivariate best practices
Get started with the Anomaly Detector multivariate client library for Python. Follow these steps to install the
package and start using the algorithms provided by the service. The new multivariate anomaly detection APIs enable
developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the need
for machine learning knowledge or labeled data. Dependencies and inter-correlations between different signals
are automatically accounted for as key factors. This helps you to proactively protect your complex systems from
failures.
Use the Anomaly Detector multivariate client library for Python to:
Detect system-level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (PyPi) | Sample code

Prerequisites
Python 3.x
The Pandas data analysis library
Azure subscription - Create one for free
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.

Setting up
Install the client library
After installing Python, you can install the client libraries with:

pip install pandas


pip install --upgrade azure-ai-anomalydetector

Create a new python application


Create a new Python file and import the following libraries.

import os
import time
from datetime import datetime

from azure.ai.anomalydetector import AnomalyDetectorClient


from azure.ai.anomalydetector.models import DetectionRequest, ModelInfo
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

Create variables for your Anomaly Detector resource key and endpoint.

subscription_key = "ANOMALY_DETECTOR_KEY"
anomaly_detector_endpoint = "ANOMALY_DETECTOR_ENDPOINT"

Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Python:
Authenticate the client
Train the model
Detect anomalies
Export model
Delete model

Authenticate the client


To instantiate a new Anomaly Detector client, you need to pass the Anomaly Detector subscription key and the
associated endpoint. We'll also establish a datasource. The following snippets are methods of a sample class,
which the main program at the end of this quickstart instantiates as MultivariateSample.
To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of
multiple time series that meet the following requirements:
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:

TIMESTAMP               VALUE
2019-04-01T00:00:00Z    5
2019-04-01T00:01:00Z    3.6
2019-04-01T00:02:00Z    4
...                     ...

Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.

def __init__(self, subscription_key, anomaly_detector_endpoint, data_source=None):
    self.sub_key = subscription_key
    self.end_point = anomaly_detector_endpoint

    # Create an Anomaly Detector client

    # <client>
    self.ad_client = AnomalyDetectorClient(AzureKeyCredential(self.sub_key), self.end_point)
    # </client>

    if not data_source:
        # Datafeed for test only
        self.data_source = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS"
    else:
        self.data_source = data_source

Train the model


We'll first train the model, check its status while training to determine when training is complete, and
then retrieve the latest model ID, which we will need for the detection phase.
def train(self, start_time, end_time, max_tryout=500):

    # Number of models available now
    model_list = list(self.ad_client.list_multivariate_model(skip=0, top=10000))
    print("{:d} available models before training.".format(len(model_list)))

    # Use sample data to train the model
    print("Training new model...")
    data_feed = ModelInfo(start_time=start_time, end_time=end_time, source=self.data_source)
    response_header = \
        self.ad_client.train_multivariate_model(data_feed,
                                                cls=lambda *args: [args[i] for i in range(len(args))])[-1]
    trained_model_id = response_header['Location'].split("/")[-1]

    # Model list after training
    new_model_list = list(self.ad_client.list_multivariate_model(skip=0, top=10000))

    # Wait until the model is ready. It usually takes several minutes
    model_status = None
    tryout_count = 0
    while (tryout_count < max_tryout and model_status != "READY"):
        model_status = self.ad_client.get_multivariate_model(trained_model_id).model_info.status
        tryout_count += 1
        time.sleep(2)

    assert model_status == "READY"

    print("Done.", "\n--------------------")
    print("{:d} available models after training.".format(len(new_model_list)))

    # Return the latest model id
    return trained_model_id

Detect anomalies
Use detect_anomaly and get_detection_result to determine whether there are any anomalies within your
datasource. You will need to pass the model ID for the model that you just trained.
def detect(self, model_id, start_time, end_time, max_tryout=500):

    # Detect anomalies in the same data source (but a different interval)
    try:
        detection_req = DetectionRequest(source=self.data_source, start_time=start_time, end_time=end_time)
        response_header = self.ad_client.detect_anomaly(model_id, detection_req,
                                                        cls=lambda *args: [args[i] for i in range(len(args))])[-1]
        result_id = response_header['Location'].split("/")[-1]

        # Get results (may need a few seconds)
        r = self.ad_client.get_detection_result(result_id)
        tryout_count = 0
        while r.summary.status != "READY" and tryout_count < max_tryout:
            time.sleep(1)
            r = self.ad_client.get_detection_result(result_id)
            tryout_count += 1

        if r.summary.status != "READY":
            print("Request timeout after {:d} tryouts.".format(max_tryout))
            return None

    except HttpResponseError as e:
        print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
        return None
    except Exception as e:
        raise e

    return r

Export model
If you want to export a model use export_model and pass the model ID of the model you want to export:

def export_model(self, model_id, model_path="model.zip"):

    # Export the model
    model_stream_generator = self.ad_client.export_model(model_id)
    with open(model_path, "wb") as f_obj:
        while True:
            try:
                f_obj.write(next(model_stream_generator))
            except StopIteration:
                break
            except Exception as e:
                raise e

Delete model
To delete a model use delete_multivariate_model and pass the model ID of the model you want to delete:

def delete_model(self, model_id):

    # Delete the model
    self.ad_client.delete_multivariate_model(model_id)
    model_list_after_delete = list(self.ad_client.list_multivariate_model(skip=0, top=10000))
    print("{:d} available models after deletion.".format(len(model_list_after_delete)))

Run the application


Before you run the application, we need to add some code to call our newly created functions.
if __name__ == '__main__':
    subscription_key = "ANOMALY_DETECTOR_KEY"
    anomaly_detector_endpoint = "ANOMALY_DETECTOR_ENDPOINT"

    # Create a new sample and client
    sample = MultivariateSample(subscription_key, anomaly_detector_endpoint, data_source=None)

    # Train a new model
    model_id = sample.train(datetime(2021, 1, 1, 0, 0, 0), datetime(2021, 1, 2, 12, 0, 0))

    # Detect anomalies
    result = sample.detect(model_id, datetime(2021, 1, 2, 12, 0, 0), datetime(2021, 1, 3, 0, 0, 0))
    print("Result ID:\t", result.result_id)
    print("Result summary:\t", result.summary)
    print("Result length:\t", len(result.results))

    # Export model
    sample.export_model(model_id, "model.zip")

    # Delete model
    sample.delete_model(model_id)

Before running it can be helpful to check your project against the full sample code that this quickstart is derived
from.
We also have an in-depth Jupyter Notebook to help you get started.
Run the application with the python command and your file name.

Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.
You can delete the resource from the Azure portal, or with the Azure CLI.
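For example, with the Azure CLI, deleting the resource group removes the resource and everything else in the group (the group name is a placeholder):

az group delete --name <your-resource-group>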

Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
Get started with the Anomaly Detector multivariate client library for Java. Follow these steps to install the
package and start using the algorithms provided by the service. The new multivariate anomaly detection APIs enable
developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the need
for machine learning knowledge or labeled data. Dependencies and inter-correlations between different signals
are automatically accounted for as key factors. This helps you to proactively protect your complex systems from
failures.
Use the Anomaly Detector multivariate client library for Java to:
Detect system-level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (Maven) | Sample code

Prerequisites
Azure subscription - Create one for free
The current version of the Java Development Kit (JDK)
The Gradle build tool, or another dependency manager.
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.

Setting up
Create a new Gradle project
This quickstart uses the Gradle dependency manager. You can find more client library information on the Maven
Central Repository.
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

mkdir myapp && cd myapp

Run the gradle init command from your working directory. This command will create essential build files for
Gradle, including build.gradle.kts which is used at runtime to create and configure your application.

gradle init --type basic

When prompted to choose a DSL, select Kotlin.


Install the client library
Locate build.gradle.kts and open it with your preferred IDE or text editor. Then copy in this build configuration.
Be sure to include the project dependencies.

dependencies {
compile("com.azure:azure-ai-anomalydetector")
}
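Note that gradle init --type basic doesn't apply the application plugin, which the gradle run step at the end of this quickstart relies on. A fuller build.gradle.kts sketch (the artifact version and main-class name are assumptions; check Maven Central and match your own package layout):

plugins {
    java
    application
}

repositories {
    mavenCentral()
}

dependencies {
    // Version is an assumption; use the latest published on Maven Central.
    implementation("com.azure:azure-ai-anomalydetector:3.0.0-beta.2")
}

application {
    // Hypothetical main class; match your own package and file name.
    mainClass.set("com.azure.ai.anomalydetector.AnomalyDetectorQuickstarts")
}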

Create a Java file


Create a folder for your sample app. From your working directory, run the following command:

mkdir -p src/main/java

Navigate to the new folder and create a file called AnomalyDetectorQuickstarts.java. Open it in your preferred
editor or IDE and add the following import statements:
package com.azure.ai.anomalydetector;

import com.azure.ai.anomalydetector.models.*;
import com.azure.core.credential.AzureKeyCredential;
import com.azure.core.http.*;
import com.azure.core.http.policy.*;
import com.azure.core.http.rest.PagedIterable;
import com.azure.core.http.rest.PagedResponse;
import com.azure.core.http.rest.Response;
import com.azure.core.http.rest.StreamResponse;
import com.azure.core.util.Context;
import reactor.core.publisher.Flux;

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.*;
import java.time.format.DateTimeFormatter;
import java.util.Iterator;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

Create variables for your resource's Azure endpoint and key. A variable for the sample data file is created later, in the training section.

String key = "YOUR_API_KEY";


String endpoint = "YOUR_ENDPOINT";

To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of
multiple time series that meet the following requirements:
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:

TIMESTAMP               VALUE
2019-04-01T00:00:00Z    5
2019-04-01T00:01:00Z    3.6
2019-04-01T00:02:00Z    4
...                     ...

Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.

Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Java:
Authenticate the client
Train a model
Detect anomalies
Export model
Delete model

Authenticate the client


Instantiate an AnomalyDetectorClient object with your endpoint and credentials.

HttpHeaders headers = new HttpHeaders()
    .put("Accept", ContentType.APPLICATION_JSON);

// Cognitive Services expects the key in the Ocp-Apim-Subscription-Key header.
HttpPipelinePolicy authPolicy = new AzureKeyCredentialPolicy("Ocp-Apim-Subscription-Key",
    new AzureKeyCredential(key));
AddHeadersPolicy addHeadersPolicy = new AddHeadersPolicy(headers);

HttpPipeline httpPipeline = new HttpPipelineBuilder().httpClient(HttpClient.createDefault())
    .policies(authPolicy, addHeadersPolicy).build();
// Instantiate a client that will be used to call the service.
HttpLogOptions httpLogOptions = new HttpLogOptions();
httpLogOptions.setLogLevel(HttpLogDetailLevel.BODY_AND_HEADERS);

AnomalyDetectorClient anomalyDetectorClient = new AnomalyDetectorClientBuilder()
    .pipeline(httpPipeline)
    .endpoint(endpoint)
    .httpLogOptions(httpLogOptions)
    .buildClient();

Train a model
Construct a model result and train model
First we need to construct a model request. Make sure that start and end time align with your data source.
To use the Anomaly Detector multivariate APIs, we need to train our own model before using detection. Data
used for training is a batch of time series, each time series should be in a CSV file with only two columns,
"timestamp" and "value" (the column names should be exactly the same). Each CSV file should be named after
each variable for the time series. All of the time series should be zipped into one zip file and be uploaded to
Azure Blob storage, and there is no requirement for the zip file name. Alternatively, an extra meta.json file can be
included in the zip file if you wish the name of the variable to be different from the .zip file name. Once we
generate blob SAS (Shared access signatures) URL, we can use the url to the zip file for training.
Path path = Paths.get("test-data.csv");
List<String> requestData = Files.readAllLines(path);
List<TimeSeriesPoint> series = requestData.stream()
.map(line -> line.trim())
.filter(line -> line.length() > 0)
.map(line -> line.split(",", 2))
.filter(splits -> splits.length == 2)
.map(splits -> {
TimeSeriesPoint timeSeriesPoint = new TimeSeriesPoint();
timeSeriesPoint.setTimestamp(OffsetDateTime.parse(splits[0]));
timeSeriesPoint.setValue(Float.parseFloat(splits[1]));
return timeSeriesPoint;
})
.collect(Collectors.toList());

Integer window = 28;
AlignMode alignMode = AlignMode.OUTER;
FillNAMethod fillNAMethod = FillNAMethod.LINEAR;
Integer paddingValue = 0;
AlignPolicy alignPolicy = new AlignPolicy()
    .setAlignMode(alignMode)
    .setFillNAMethod(fillNAMethod)
    .setPaddingValue(paddingValue);
String source = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS";
OffsetDateTime startTime = OffsetDateTime.of(2021, 1, 2, 0, 0, 0, 0, ZoneOffset.UTC);
OffsetDateTime endTime = OffsetDateTime.of(2021, 1, 3, 0, 0, 0, 0, ZoneOffset.UTC);
String displayName = "Devops-MultiAD";
String displayName = "Devops-MultiAD";

ModelInfo request = new ModelInfo()
    .setSlidingWindow(window)
    .setAlignPolicy(alignPolicy)
    .setSource(source)
    .setStartTime(startTime)
    .setEndTime(endTime)
    .setDisplayName(displayName);
TrainMultivariateModelResponse trainMultivariateModelResponse =
    anomalyDetectorClient.trainMultivariateModelWithResponse(request, Context.NONE);
String header = trainMultivariateModelResponse.getDeserializedHeaders().getLocation();
String[] model_ids = header.split("/");
UUID model_id = UUID.fromString(model_ids[model_ids.length - 1]);
System.out.println(model_id);

Integer skip = 0;
Integer top = 5;
PagedIterable<ModelSnapshot> response = anomalyDetectorClient.listMultivariateModel(skip, top);
Iterator<PagedResponse<ModelSnapshot>> ite = response.iterableByPage().iterator();

while (true) {
    Response<Model> response_model = anomalyDetectorClient.getMultivariateModelWithResponse(model_id, Context.NONE);
    UUID model = response_model.getValue().getModelId();
    System.out.println(response_model.getStatusCode());
    System.out.println(response_model.getValue().getModelInfo().getStatus());
    System.out.println(model);
    if (response_model.getValue().getModelInfo().getStatus() == ModelStatus.READY) {
        break;
    }
    // Wait before polling again so the loop doesn't hammer the service.
    try {
        Thread.sleep(10000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}

Detect anomalies
To detect anomalies with the trained model, construct a DetectionRequest over your data source and pass it to detectAnomalyWithResponse:

DetectionRequest detectionRequest = new DetectionRequest().setSource(source).setStartTime(startTime).setEndTime(endTime);
DetectAnomalyResponse detectAnomalyResponse = anomalyDetectorClient.detectAnomalyWithResponse(model_id, detectionRequest, Context.NONE);
String result = detectAnomalyResponse.getDeserializedHeaders().getLocation();

String[] result_list = result.split("/");
UUID result_id = UUID.fromString(result_list[result_list.length - 1]);

while (true) {
    DetectionResult response_result = anomalyDetectorClient.getDetectionResult(result_id);
    if (response_result.getSummary().getStatus() == DetectionStatus.READY) {
        break;
    } else if (response_result.getSummary().getStatus() == DetectionStatus.FAILED) {
        // Surface the failure instead of looping forever.
        System.out.println("Detection failed.");
        break;
    }
    try {
        Thread.sleep(2000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}

Export model
To export your trained model, use exportModelWithResponse.

StreamResponse response_export = anomalyDetectorClient.exportModelWithResponse(model_id, Context.NONE);
Flux<ByteBuffer> value = response_export.getValue();
FileOutputStream bw = new FileOutputStream("result.zip");
value.subscribe(s -> write(bw, s), (e) -> close(bw), () -> close(bw));
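The snippet above relies on two small write and close helpers that aren't defined elsewhere in this quickstart; a minimal sketch:

private static void write(FileOutputStream fos, ByteBuffer byteBuffer) {
    try {
        // Copy the buffer contents to the output file.
        fos.write(byteBuffer.array());
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}

private static void close(FileOutputStream fos) {
    try {
        fos.close();
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}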

Delete model
To delete an existing model that is available to the current resource use the deleteMultivariateModelWithResponse
function.

Response<Void> deleteMultivariateModelWithResponse =
anomalyDetectorClient.deleteMultivariateModelWithResponse(model_id, Context.NONE);

Run the application


You can build the app with:

gradle build

Run the application


Before running it can be helpful to check your code against the full sample code.
Run the application with the run goal:

gradle run

Next steps
Anomaly Detector multivariate best practices
Quickstart: Use the Anomaly Detector client library

Get started with the Anomaly Detector client library for C#. Follow these steps to install the package and start using
the algorithms provided by the service. The Anomaly Detector service enables you to find abnormalities in your
time series data by automatically using the best-fitting models on it, regardless of industry, scenario, or data
volume.
Use the Anomaly Detector client library for C# to:
Detect anomalies throughout your time series data set, as a batch request
Detect the anomaly status of the latest data point in your time series
Detect trend change points in your data set.
Library reference documentation | Library source code | Package (NuGet) | Find the code on GitHub

Prerequisites
Azure subscription - Create one for free
The current version of .NET Core
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.

Setting up
Create an environment variable

NOTE
The endpoints for non-trial resources created after July 1, 2019 use the custom subdomain format shown below. For
more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.

Using your key and endpoint from the resource you created, create two environment variables for
authentication:
ANOMALY_DETECTOR_KEY - The resource key for authenticating your requests.
ANOMALY_DETECTOR_ENDPOINT - The resource endpoint for sending API requests. It will look like this:
https://<your-custom-subdomain>.api.cognitive.microsoft.com

Use the instructions for your operating system. On Windows, run the following at a command prompt:
setx ANOMALY_DETECTOR_KEY <replace-with-your-anomaly-detector-key>
setx ANOMALY_DETECTOR_ENDPOINT <replace-with-your-anomaly-detector-endpoint>
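On Linux or macOS, the equivalent commands set the variables for the current shell session (add them to your shell profile to persist them):

export ANOMALY_DETECTOR_KEY=<replace-with-your-anomaly-detector-key>
export ANOMALY_DETECTOR_ENDPOINT=<replace-with-your-anomaly-detector-endpoint>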

After you add the environment variable, restart the console window.
Create a new .NET Core application
In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console
app with the name anomaly-detector-quickstart . This command creates a simple "Hello World" project with a
single C# source file: Program.cs.

dotnet new console -n anomaly-detector-quickstart

Change your directory to the newly created app folder. You can build the application with:

dotnet build

The build output should contain no warnings or errors.

...
Build succeeded.
0 Warning(s)
0 Error(s)
...

Install the client library


Within the application directory, install the Anomaly Detector client library for .NET with the following
command:

dotnet add package Microsoft.Azure.CognitiveServices.AnomalyDetector

From the project directory, open the program.cs file and add the following using directives:

using System;
using System.IO;
using System.Text;
using System.Linq;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.AnomalyDetector;
using Microsoft.Azure.CognitiveServices.AnomalyDetector.Models;

In the application's main() method, create variables for your resource's endpoint and key, read from the
environment variables you created earlier. If you created the environment variables after the application was
launched, close and reload the editor, IDE, or shell running it to access the variables.
static void Main(string[] args){
//This sample assumes you have created an environment variable for your key and endpoint
string endpoint = Environment.GetEnvironmentVariable("ANOMALY_DETECTOR_ENDPOINT");
string key = Environment.GetEnvironmentVariable("ANOMALY_DETECTOR_KEY");
string datapath = "request-data.csv";

IAnomalyDetectorClient client = createClient(endpoint, key); //Anomaly Detector client

Request request = GetSeriesFromFile(datapath); // The request payload with points from the data file

EntireDetectSampleAsync(client, request).Wait(); // Async method for batch anomaly detection


LastDetectSampleAsync(client, request).Wait(); // Async method for analyzing the latest data point in the set
DetectChangePoint(client, request).Wait(); // Async method for change point detection

Console.WriteLine("\nPress ENTER to exit.");


Console.ReadLine();
}

Object model
The Anomaly Detector client is an AnomalyDetectorClient object that authenticates to Azure using
ApiKeyServiceClientCredentials, which contains your key. The client can do anomaly detection on an entire
dataset using EntireDetectAsync(), or on the latest data point using LastDetectAsync(). The
DetectChangePointAsync method detects points that mark changes in a trend.
Time series data is sent as a series of Points in a Request object. The Request object contains properties that
describe the data (Granularity, for example), and parameters for the anomaly detection.
The Anomaly Detector response is either an EntireDetectResponse, LastDetectResponse, or
ChangePointDetectResponse object, depending on the method used.

Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for .NET:
Authenticate the client
Load a time series data set from a file
Detect anomalies in the entire data set
Detect the anomaly status of the latest data point
Detect the change points in the data set

Authenticate the client


In a new method, instantiate a client with your endpoint and key. Create an ApiKeyServiceClientCredentials
object with your key, and use it with your endpoint to create an AnomalyDetectorClient object.

static IAnomalyDetectorClient createClient(string endpoint, string key)


{
IAnomalyDetectorClient client = new AnomalyDetectorClient(new ApiKeyServiceClientCredentials(key))
{
Endpoint = endpoint
};
return client;
}
Load time series data from a file
Download the example data for this quickstart from GitHub:
1. In your browser, right-click Raw .
2. Click Save link as .
3. Save the file to your application directory, as a .csv file.
This time series data is formatted as a .csv file, and will be sent to the Anomaly Detector API.
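Each row of the file pairs a timestamp with a numeric value, one data point per line. For illustration only (these exact values are placeholders, not the contents of the sample file), the rows look like this:

2018-03-01T00:00:00Z,32858923
2018-03-02T00:00:00Z,29615278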
Create a new method to read in the time series data and add it to a Request object. Call File.ReadAllLines()
with the file path and create a list of Point objects, stripping any new-line characters. Extract the values,
separate the timestamp from its numeric value, and add them to a new Point object.
Make a Request object with the series of points, and Granularity.Daily for the Granularity (or periodicity) of
the data points.

static Request GetSeriesFromFile(string path)
{
    List<Point> list = File.ReadAllLines(path, Encoding.UTF8)
        .Where(e => e.Trim().Length != 0)
        .Select(e => e.Split(','))
        .Where(e => e.Length == 2)
        .Select(e => new Point(DateTime.Parse(e[0]), Double.Parse(e[1]))).ToList();

    return new Request(list, Granularity.Daily);
}

Detect anomalies in the entire data set


Create a method to call the client's EntireDetectAsync() method with the Request object, and await the response
as an EntireDetectResponse object. If the time series contains any anomalies, iterate through the response's
IsAnomaly values and print any that are true. These values correspond to the index of anomalous data points,
if any were found.

static async Task EntireDetectSampleAsync(IAnomalyDetectorClient client, Request request)
{
    Console.WriteLine("Detecting anomalies in the entire time series.");

    EntireDetectResponse result = await client.EntireDetectAsync(request).ConfigureAwait(false);

    if (result.IsAnomaly.Contains(true))
    {
        Console.WriteLine("An anomaly was detected at index:");
        for (int i = 0; i < request.Series.Count; ++i)
        {
            if (result.IsAnomaly[i])
            {
                Console.Write(i);
                Console.Write(" ");
            }
        }
        Console.WriteLine();
    }
    else
    {
        Console.WriteLine("No anomalies detected in the series.");
    }
}
Detect the anomaly status of the latest data point
Create a method to call the client's LastDetectAsync() method with the Request object and await the response
as a LastDetectResponse object. Check the response's IsAnomaly attribute to determine if the latest data point
sent was an anomaly or not.

static async Task LastDetectSampleAsync(IAnomalyDetectorClient client, Request request)
{
    Console.WriteLine("Detecting the anomaly status of the latest point in the series.");

    LastDetectResponse result = await client.LastDetectAsync(request).ConfigureAwait(false);

    if (result.IsAnomaly)
    {
        Console.WriteLine("The latest point was detected as an anomaly.");
    }
    else
    {
        Console.WriteLine("The latest point was not detected as an anomaly.");
    }
}

Detect change points in the data set


Create a method to call the client's DetectChangePointAsync() method with the Request object, and await the
response as a ChangePointDetectResponse object. Check the response's IsChangePoint values and print any that
are true. These values correspond to trend change points, if any were found.

static async Task DetectChangePoint(IAnomalyDetectorClient client, Request request)
{
    Console.WriteLine("Detecting the change points in the series.");

    ChangePointDetectResponse result = await client.DetectChangePointAsync(request).ConfigureAwait(false);

    if (result.IsChangePoint.Contains(true))
    {
        Console.WriteLine("A change point was detected at index:");
        for (int i = 0; i < request.Series.Count; ++i)
        {
            if (result.IsChangePoint[i])
            {
                Console.Write(i);
                Console.Write(" ");
            }
        }
        Console.WriteLine();
    }
    else
    {
        Console.WriteLine("No change point detected in the series.");
    }
}

Run the application


Run the application with the dotnet run command from your application directory.

dotnet run
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.

Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
Get started with the Anomaly Detector client library for JavaScript. Follow these steps to install the package and
start using the algorithms provided by the service. The Anomaly Detector service enables you to find abnormalities in
your time series data by automatically using the best-fitting models on it, regardless of industry, scenario, or
data volume.
Use the Anomaly Detector client library for JavaScript to:
Detect anomalies throughout your time series data set, as a batch request
Detect the anomaly status of the latest data point in your time series
Detect trend change points in your data set.
Library reference documentation | Library source code | Package (npm) | Find the code on GitHub

Prerequisites
Azure subscription - Create one for free
The current version of Node.js
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up
Create an environment variable

NOTE
The endpoints for non-trial resources created after July 1, 2019 use the custom subdomain format shown below. For
more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.

Using your key and endpoint from the resource you created, create two environment variables for
authentication:
ANOMALY_DETECTOR_KEY - The resource key for authenticating your requests.
ANOMALY_DETECTOR_ENDPOINT - The resource endpoint for sending API requests. It will look like this:
https://<your-custom-subdomain>.api.cognitive.microsoft.com

Use the instructions for your operating system.


Windows
Linux
macOS

setx ANOMALY_DETECTOR_KEY <replace-with-your-anomaly-detector-key>


setx ANOMALY_DETECTOR_ENDPOINT <replace-with-your-anomaly-detector-endpoint>

After you add the environment variable, restart the console window.
Create a new Node.js application
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.

mkdir myapp && cd myapp

Run the npm init command to create a node application with a package.json file.

npm init

Create a file named index.js and import the following libraries:

'use strict'

const fs = require('fs');
const parse = require("csv-parse/lib/sync");
const { AnomalyDetectorClient } = require('@azure/ai-anomaly-detector');
const { AzureKeyCredential } = require('@azure/core-auth');

Create variables for your resource's Azure endpoint and key. If you created the environment variables after you
launched the application, you will need to close and reopen the editor, IDE, or shell running it to access the
variables. Create another variable for the example data file you will download in a later step, and an empty list for
the data points. The key will be wrapped in an AzureKeyCredential object when you authenticate the client.

// Spreadsheet with 2 columns and n rows.
let CSV_FILE = './request-data.csv';

// Authentication variables
// Add your Anomaly Detector subscription key and endpoint to your environment variables.
let key = process.env['ANOMALY_DETECTOR_KEY'];
let endpoint = process.env['ANOMALY_DETECTOR_ENDPOINT'];

// Points array for the request body
let points = [];

Install the client library


Install the @azure/ai-anomaly-detector and @azure/ms-rest-js npm packages. The csv-parse library is
also used in this quickstart:
npm install @azure/ai-anomaly-detector @azure/ms-rest-js csv-parse

Your app's package.json file will be updated with the dependencies.

Object model
The Anomaly Detector client is an AnomalyDetectorClient object that authenticates to Azure using your key. The
client can do anomaly detection on an entire dataset using detectEntireSeries(), or on the latest data point using
detectLastPoint(). The detectChangePoint() method detects points that mark changes in a trend.
Time series data is sent as a series of points in a request body object, which contains properties to
describe the data (granularity, for example) and parameters for the anomaly detection.
The Anomaly Detector response is a LastDetectResponse, EntireDetectResponse, or ChangePointDetectResponse
object depending on the method used.

Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Node.js:
Authenticate the client
Load a time series data set from a file
Detect anomalies in the entire data set
Detect the anomaly status of the latest data point
Detect the change points in the data set

Authenticate the client


Instantiate an AnomalyDetectorClient object with your endpoint and credentials.

let anomalyDetectorClient = new AnomalyDetectorClient(endpoint, new AzureKeyCredential(key));

Load time series data from a file


Download the example data for this quickstart from GitHub:
1. In your browser, right-click Raw.
2. Click Save link as.
3. Save the file to your application directory, as a .csv file.
This time series data is formatted as a .csv file, and will be sent to the Anomaly Detector API.
Read your data file with the fs library's readFileSync() method, and parse the file with the csv-parse library's
parse() function. For each line, push a Point object containing the timestamp and the numeric value.

function readFile() {
    let input = fs.readFileSync(CSV_FILE).toString();
    let parsed = parse(input, { skip_empty_lines: true });
    parsed.forEach(function (e) {
        points.push({ timestamp: new Date(e[0]), value: parseFloat(e[1]) });
    });
}
readFile()
Detect anomalies in the entire data set
Call the API to detect anomalies through the entire time series as a batch with the client's detectEntireSeries() method.
Store the returned EntireDetectResponse object. Iterate through the response's isAnomaly list, and print the
index of any true values. These values correspond to the index of anomalous data points, if any were found.

async function batchCall() {
    // Create request body for API call
    let body = { series: points, granularity: 'daily' }
    // Make the call to detect anomalies in whole series of points
    await anomalyDetectorClient.detectEntireSeries(body)
        .then((response) => {
            console.log("Batch (entire) anomaly detection:")
            for (let item = 0; item < response.isAnomaly.length; item++) {
                if (response.isAnomaly[item]) {
                    console.log("An anomaly was detected from the series, at row " + item)
                }
            }
        }).catch((error) => {
            console.log(error)
        })
}
batchCall()

Detect the anomaly status of the latest data point


Call the Anomaly Detector API to determine if your latest data point is an anomaly using the client's detectLastPoint()
method, and store the returned LastDetectResponse object. The response's isAnomaly value is a boolean that
specifies that point's anomaly status.

async function lastDetection() {
    let body = { series: points, granularity: 'daily' }
    // Make the call to detect anomalies in the latest point of a series
    await anomalyDetectorClient.detectLastPoint(body)
        .then((response) => {
            console.log("Latest point anomaly detection:")
            if (response.isAnomaly) {
                console.log("The latest point, in row " + points.length + ", is detected as an anomaly.")
            } else {
                console.log("The latest point, in row " + points.length + ", is not detected as an anomaly.")
            }
        }).catch((error) => {
            console.log(error)
        })
}
lastDetection()

Detect change points in the data set


Call the API to detect change points in the time series with the client's detectChangePoint() method. Store the
returned ChangePointDetectResponse object. Iterate through the response's isChangePoint list, and print the
index of any true values. These values correspond to the indices of trend change points, if any were found.
async function changePointDetection() {
    let body = { series: points, granularity: 'daily' }
    // get change point detect results
    await anomalyDetectorClient.detectChangePoint(body)
        .then((response) => {
            if (response.isChangePoint.some(function (changePoint) {
                return changePoint === true;
            })) {
                console.log("Change points were detected from the series at index:");
                response.isChangePoint.forEach(function (changePoint, index) {
                    if (changePoint === true) {
                        console.log(index);
                    }
                });
            } else {
                console.log("There is no change point detected from the series.");
            }
        }).catch((error) => {
            console.log(error)
        })
}
changePointDetection()

Run the application


Run the application with the node command on your quickstart file.

node index.js

Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.

Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
Get started with the Anomaly Detector client library for Python. Follow these steps to install the package and
start using the algorithms provided by the service. The Anomaly Detector service enables you to find abnormalities in
your time series data by automatically using the best-fitting models on it, regardless of industry, scenario, or
data volume.
Use the Anomaly Detector client library for Python to:
Detect anomalies throughout your time series data set, as a batch request
Detect the anomaly status of the latest data point in your time series
Detect trend change points in your data set.
Library reference documentation | Library source code | Package (PyPi) | Find the sample code on GitHub

Prerequisites
Python 3.x
The Pandas data analysis library
Azure subscription - Create one for free
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Setting up
Create an environment variable

NOTE
The endpoints for non-trial resources created after July 1, 2019 use the custom subdomain format shown below. For
more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.

Using your key and endpoint from the resource you created, create two environment variables for
authentication:
ANOMALY_DETECTOR_KEY - The resource key for authenticating your requests.
ANOMALY_DETECTOR_ENDPOINT - The resource endpoint for sending API requests. It will look like this:
https://<your-custom-subdomain>.api.cognitive.microsoft.com

Use the instructions for your operating system.


Windows
Linux
macOS

setx ANOMALY_DETECTOR_KEY <replace-with-your-anomaly-detector-key>


setx ANOMALY_DETECTOR_ENDPOINT <replace-with-your-anomaly-detector-endpoint>

After you add the environment variable, restart the console window.
Create a new Python application
Create a new Python file and import the following libraries.
import os
from azure.ai.anomalydetector import AnomalyDetectorClient
from azure.ai.anomalydetector.models import DetectRequest, TimeSeriesPoint, TimeGranularity, \
AnomalyDetectorError
from azure.core.credentials import AzureKeyCredential
import pandas as pd

Create variables for your key and endpoint, read from environment variables, and for the path to a time series
data file.

SUBSCRIPTION_KEY = os.environ["ANOMALY_DETECTOR_KEY"]
ANOMALY_DETECTOR_ENDPOINT = os.environ["ANOMALY_DETECTOR_ENDPOINT"]
TIME_SERIES_DATA_PATH = os.path.join("./sample_data", "request-data.csv")

Install the client library


After installing Python, you can install the client library with:

pip install --upgrade azure-ai-anomalydetector

Object model
The Anomaly Detector client is an AnomalyDetectorClient object that authenticates to Azure using your key. The
client can do anomaly detection on an entire dataset using detect_entire_series, or on the latest data point using
detect_last_point. The detect_change_point function detects points that mark changes in a trend.
Time series data is sent as a series of TimeSeriesPoint objects. The DetectRequest object contains properties to
describe the data (TimeGranularity, for example) and parameters for the anomaly detection.
The Anomaly Detector response is a LastDetectResponse, EntireDetectResponse, or ChangePointDetectResponse
object depending on the method used.

Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Python:
Authenticate the client
Load a time series data set from a file
Detect anomalies in the entire data set
Detect the anomaly status of the latest data point
Detect the change points in the data set

Authenticate the client


Authenticate the client with your endpoint and key.

client = AnomalyDetectorClient(AzureKeyCredential(SUBSCRIPTION_KEY), ANOMALY_DETECTOR_ENDPOINT)

Load time series data from a file


Download the example data for this quickstart from GitHub:
1. In your browser, right-click Raw.
2. Click Save link as.
3. Save the file to your application directory, as a .csv file.
This time series data is formatted as a .csv file, and will be sent to the Anomaly Detector API.
Load your data file with the Pandas library's read_csv() method, and make an empty list variable to store your
data series. Iterate through the file, and append the data as a TimeSeriesPoint object. This object will contain the
timestamp and numerical value from the rows of your .csv data file.

series = []
data_file = pd.read_csv(TIME_SERIES_DATA_PATH, header=None, encoding='utf-8', parse_dates=[0])
for index, row in data_file.iterrows():
    series.append(TimeSeriesPoint(timestamp=row[0], value=row[1]))

Create a DetectRequest object with your time series, and the TimeGranularity (or periodicity) of its data points.
For example, TimeGranularity.daily .

request = DetectRequest(series=series, granularity=TimeGranularity.daily)

Detect anomalies in the entire data set


Call the API to detect anomalies through the entire time series data using the client's detect_entire_series
method. Store the returned EntireDetectResponse object. Iterate through the response's is_anomaly list, and
print the index of any true values. These values correspond to the index of anomalous data points, if any were
found.

print('Detecting anomalies in the entire time series.')

try:
    response = client.detect_entire_series(request)
except AnomalyDetectorError as e:
    print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
except Exception as e:
    print(e)

if any(response.is_anomaly):
    print('An anomaly was detected at index:')
    for i, value in enumerate(response.is_anomaly):
        if value:
            print(i)
else:
    print('No anomalies were detected in the time series.')

Detect the anomaly status of the latest data point


Call the Anomaly Detector API to determine if your latest data point is an anomaly using the client's
detect_last_point method, and store the returned LastDetectResponse object. The response's is_anomaly value
is a boolean that specifies that point's anomaly status.
print('Detecting the anomaly status of the latest data point.')

try:
    response = client.detect_last_point(request)
except AnomalyDetectorError as e:
    print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
except Exception as e:
    print(e)

if response.is_anomaly:
    print('The latest point is detected as an anomaly.')
else:
    print('The latest point is not detected as an anomaly.')

Detect change points in the data set


Call the API to detect change points in the time series data using the client's detect_change_point method. Store
the returned ChangePointDetectResponse object. Iterate through the response's is_change_point list, and print
the index of any true values. These values correspond to the indices of trend change points, if any were found.

print('Detecting change points in the entire time series.')

try:
    response = client.detect_change_point(request)
except AnomalyDetectorError as e:
    print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
except Exception as e:
    print(e)

if any(response.is_change_point):
    print('A change point was detected at index:')
    for i, value in enumerate(response.is_change_point):
        if value:
            print(i)
else:
    print('No change points were detected in the time series.')

Run the application


Run the application with the python command and your file name.
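For example, if you saved the quickstart code as quickstart.py (the file name here is just a placeholder):

python quickstart.py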

Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.

Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
In this quickstart, you learn how to detect anomalies in a batch of time series data using the Anomaly Detector
service and cURL.
For a high-level look at Anomaly Detector concepts, see the overview article.

Prerequisites
Azure subscription - Create one for free
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and select the Go to resource button.
You will need the key and endpoint address from the resource you create to use the REST API. You can
use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Detect anomalies for an entire series


At a command prompt, run the following command. You will need to insert the following values into the
command.
Your Anomaly detector service subscription key.
Your Anomaly detector endpoint address.
A valid JSON file of time series data to test for anomalies. If you don't have your own file, you can create a
sample.json file from the Request body sample.

curl -v -X POST "https://{endpoint}/anomalydetector/v1.0/timeseries/entire/detect"
-H "Content-Type: application/json"
-H "Ocp-Apim-Subscription-Key: {subscription key}"
-d "@{path_to_file.json}"

For an example with all values populated:

curl -v -X POST "https://my-resource-


name.cognitiveservices.azure.com/anomalydetector/v1.0/timeseries/entire/detect" -H "Content-Type:
application/json" -H "Ocp-Apim-Subscription-Key:1111112222222ed333333ab333333333" -d "@test.json"

If you used the sample data from the prerequisites, you should receive an HTTP 200 response with the following
results:

{
"expectedValues": [
827.7940908243968,
798.9133774671927,
888.6058431807189,
900.5606407986661,
962.8389426378304,
933.2591606306954,
891.0784104799666,
856.1781601363697,
809.8987227908941,
807.375129007505,
764.3196682448518,
803.933498594564,
823.5900620883058,
794.0905641334288,
883.164245249282,
883.164245249282,
894.8419000690953,
956.8430591101258,
927.6285055190114,
885.812983784303,
851.6424797402517,
806.0927886943216,
804.6826815312029,
762.74070738882,
804.0251702513732,
825.3523662579559,
798.0404188724976,
889.3016505577698,
902.4226124345937,
965.867078532635,
937.3200495736695,
896.1720524711102,
862.0087368413656,
816.4662342097423,
814.4297745524709,
771.8614479159354,
811.859271346729,
831.8998279215521,
802.947544797165,
892.5684407435083,
904.5488214533809,
966.8527063844707,
937.3168391003043,
895.180003672544,
860.3649596356635,
814.1707285969043,
811.9054862686213,
769.1083769610742,
809.2328084659704
],
"upperMargins": [
41.389704541219835,
39.94566887335964,
44.43029215903594,
45.02803203993331,
48.14194713189152,
46.66295803153477,
44.55392052399833,
42.808908006818484,
40.494936139544706,
40.36875645037525,
38.215983412242586,
40.196674929728196,
41.17950310441529,
39.70452820667144,
44.1582122624641,
44.74209500345477,
47.84215295550629,
46.38142527595057,
44.290649189215145,
42.58212398701258,
40.30463943471608,
40.234134076560146,
38.137035369441,
40.201258512568664,
41.267618312897795,
39.90202094362488,
44.46508252788849,
45.121130621729684,
48.29335392663175,
46.86600247868348,
44.80860262355551,
43.100436842068284,
40.82331171048711,
40.721488727623544,
40.721488727623544,
38.593072395796774,
40.59296356733645,
41.5949913960776,
40.14737723985825,
44.62842203717541,
45.227441072669045,
48.34263531922354,
46.86584195501521,
44.759000183627194,
43.01824798178317,
40.70853642984521,
40.59527431343106,
38.45541884805371,
40.46164042329852
],
"lowerMargins": [
41.389704541219835,
39.94566887335964,
44.43029215903594,
45.02803203993331,
48.14194713189152,
46.66295803153477,
44.55392052399833,
42.808908006818484,
40.494936139544706,
40.36875645037525,
38.215983412242586,
40.196674929728196,
41.17950310441529,
39.70452820667144,
44.1582122624641,
44.74209500345477,
47.84215295550629,
46.38142527595057,
44.290649189215145,
42.58212398701258,
40.30463943471608,
40.234134076560146,
38.137035369441,
40.201258512568664,
41.267618312897795,
39.90202094362488,
44.46508252788849,
45.121130621729684,
48.29335392663175,
46.86600247868348,
44.80860262355551,
43.100436842068284,
40.82331171048711,
40.721488727623544,
38.593072395796774,
40.59296356733645,
41.5949913960776,
40.14737723985825,
44.62842203717541,
45.227441072669045,
48.34263531922354,
46.86584195501521,
44.759000183627194,
43.01824798178317,
40.70853642984521,
40.59527431343106,
38.45541884805371,
40.46164042329852
],
"isAnomaly": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
],
"isPositiveAnomaly": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
],
"isNegativeAnomaly": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
],
"period": 12
}

For more information, see the Anomaly Detection REST reference.

Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.

Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
How to: Use the Anomaly Detector API on your
time series data
3/5/2021 • 2 minutes to read

The Anomaly Detector API provides two methods of anomaly detection. You can either detect anomalies as a
batch throughout your time series, or detect the anomaly status of the latest data point as your data is
generated. The detection model returns anomaly results along with each data point's expected value, and the
upper and lower anomaly detection boundaries. You can use these values to visualize the range of normal
values, and anomalies in the data.

Anomaly detection modes


The Anomaly Detector API provides two detection modes: batch and streaming.

NOTE
The following request URLs must be combined with the appropriate endpoint for your subscription. For example:
https://<your-custom-
subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/entire/detect

Batch detection
To detect anomalies throughout a batch of data points over a given time range, use the following request URI
with your time series data:
/timeseries/entire/detect.
By sending your time series data at once, the API will generate a model using the entire series, and analyze each
data point with it.
Streaming detection
To continuously detect anomalies on streaming data, use the following request URI with your latest data point:
/timeseries/last/detect.
By sending new data points as you generate them, you can monitor your data in real time. A model will be
generated with the data points you send, and the API will determine if the latest point in the time series is an
anomaly.
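For example, a sketch of a streaming call with cURL, assuming the same request body shape as the batch endpoint (a granularity plus a series of timestamp/value pairs, ending with your newest point; latest-points.json is a hypothetical file name):

curl -v -X POST "https://<your-custom-subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect" \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: <your-key>" \
-d "@latest-points.json"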

Adjusting lower and upper anomaly detection boundaries


By default, the upper and lower boundaries for anomaly detection are calculated using expectedValue ,
upperMargin , and lowerMargin . If you require different boundaries, we recommend applying a marginScale to
upperMargin or lowerMargin . The boundaries would be calculated as follows:

BOUNDARY         CALCULATION

upperBoundary    expectedValue + (100 - marginScale) * upperMargin

lowerBoundary    expectedValue - (100 - marginScale) * lowerMargin
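As a worked example of the calculation above (illustrative numbers only; marginScale here plays the role of the sensitivity values in the examples below): with expectedValue = 800, upperMargin = 40, and marginScale = 99, the upper boundary is 800 + (100 - 99) * 40 = 840, so a data point above 840 would fall outside the expected range. Lowering marginScale widens the boundaries and makes detection less sensitive.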


The following examples show an Anomaly Detector API result at different sensitivities:
Example with sensitivity at 99
Example with sensitivity at 95
Example with sensitivity at 85

Next Steps
What is the Anomaly Detector API?
Quickstart: Detect anomalies in your time series data using the Anomaly Detector
Deploy an Anomaly Detector module to IoT Edge
3/5/2021 • 3 minutes to read

Learn how to deploy the Cognitive Services Anomaly Detector module to an IoT Edge device. Once it's deployed
into IoT Edge, the module runs in IoT Edge together with other modules as container instances. It exposes the
exact same APIs as an Anomaly Detector container instance running in a standard docker container
environment.

Prerequisites
Use an Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
Install the Azure CLI.
An IoT Hub and an IoT Edge device.

Create an Anomaly Detector resource


1. Sign into the Azure portal.
2. Select Create Anomaly Detector resource.
3. Enter all required settings:

SETTING           VALUE

Name              Desired name (2-64 characters)

Subscription      Select appropriate subscription

Location          Select any nearby and available location

Pricing Tier      F0 - 10 Calls per second, 20K Transactions per month. Or: S0 - 80 Calls per second

Resource Group    Select an available resource group

4. Click Create and wait for the resource to be created. After it is created, navigate to the resource page.
5. Collect the configured endpoint and an API key:

KEYS AND ENDPOINT TAB IN THE PORTAL    SETTING     VALUE

Overview                               Endpoint    Copy the endpoint. It looks similar to https://<your-resource-name>.cognitiveservices.azure.com/

Keys                                   API Key     Copy 1 of the two keys. It is a 32-character alphanumeric string with no spaces or dashes, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.

Deploy the Anomaly Detection module to the edge


1. In the Azure portal, enter Anomaly Detector on IoT Edge into the search and open the Azure
Marketplace result.
2. It will take you to the Azure portal's Target Devices for IoT Edge Module page. Provide the following
required information.
a. Select your subscription.
b. Select your IoT Hub.
c. Select Find device and find an IoT Edge device.
3. Select the Create button.
4. Select the AnomalyDetectoronIoTEdge module.

5. Navigate to Environment Variables and provide the following information.


a. Keep the value accept for Eula.
b. Fill out Billing with your Cognitive Services endpoint.
c. Fill out ApiKey with your Cognitive Services API key.
6. Select Update.
7. Select Next: Routes to define your route. You define all messages from all modules to go to Azure IoT
Hub.
8. Select Next: Review + create . You can preview the JSON file that defines all the modules that get
deployed to your IoT Edge device.
9. Select Create to start the module deployment.
10. After you complete module deployment, you'll go back to the IoT Edge page of your IoT hub. Select your
device from the list of IoT Edge devices to see its details.
11. Scroll down and see the modules listed. Check that the runtime status is running for your new module.
To troubleshoot the runtime status of your IoT Edge device, consult the troubleshooting guide.

Test Anomaly Detector on an IoT Edge device


You'll make an HTTP call to the Azure IoT Edge device that has the Azure Cognitive Services container running.
The container provides REST-based endpoint APIs. Use the host, http://<your-edge-device-ipaddress>:5000 , for
module APIs.
If your edge device does not already allow inbound communication on port 5000, you will need to create a new
inbound port rule.
For an Azure VM, this can be set under Virtual Machine > Settings > Networking > Inbound port rules >
Add inbound port rule.
There are several ways to validate that the module is running. Locate the External IP address and exposed port
of the edge device in question, and open your favorite web browser. Use the various request URLs below to
validate the container is running. The example request URLs listed below are
http://<your-edge-device-ipaddress>:5000, but your specific container may vary. Keep in mind that you need to
use your edge device's External IP address.

REQUEST URL                                         PURPOSE

http://<your-edge-device-ipaddress>:5000/           The container provides a home page.

http://<your-edge-device-ipaddress>:5000/status     Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.

http://<your-edge-device-ipaddress>:5000/swagger    The container provides a full set of documentation for the endpoints and a Try it out feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required.
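As a functional test beyond the browser checks above, you can POST a detection request directly to the module. This is a sketch only: it assumes the module accepts the same request body as the cloud API (see the cURL quickstart), that sample.json is a data file you've prepared, and that no subscription key header is needed because the key was supplied when the module started:

curl -v -X POST "http://<your-edge-device-ipaddress>:5000/anomalydetector/v1.0/timeseries/entire/detect" \
-H "Content-Type: application/json" \
-d "@sample.json"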

Next steps
Review Install and run containers for pulling the container image and running the container
Review Configure containers for configuration settings
Learn more about Anomaly Detector API service
Install and run Docker containers for the Anomaly
Detector API
3/5/2021 • 9 minutes to read

NOTE
The container image location has recently changed. Read this article to see the updated location for this container.

Containers enable you to use the Anomaly Detector API in your own environment. Containers are great for specific
security and data governance requirements. In this article you'll learn how to download, install, and run an
Anomaly Detector container.
Anomaly Detector offers a single Docker container for using the API on-premises. Use the container to:
Use the Anomaly Detector's algorithms on your data
Monitor streaming data, and detect anomalies as they occur in real-time.
Detect anomalies throughout your data set as a batch.
Detect trend change points in your data set as a batch.
Adjust the anomaly detection algorithm's sensitivity to better fit your data.
For detailed information about the API, please see:
Learn more about Anomaly Detector API service
If you don't have an Azure subscription, create a free account before you begin.

Prerequisites
You must meet the following prerequisites before using Anomaly Detector containers:

REQUIRED                     PURPOSE

Docker Engine                You need the Docker Engine installed on a host computer. Docker provides packages that configure the Docker environment on macOS, Windows, and Linux. For a primer on Docker and container basics, see the Docker overview. Docker must be configured to allow the containers to connect with and send billing data to Azure. On Windows, Docker must also be configured to support Linux containers.

Familiarity with Docker      You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic docker commands.

Anomaly Detector resource    In order to use these containers, you must have an Azure Anomaly Detector resource to get the associated API key and endpoint URI. Both values are available on the Azure portal's Anomaly Detector Overview and Keys pages and are required to start the container: {API_KEY}, one of the two available resource keys on the Keys page; {ENDPOINT_URI}, the endpoint as provided on the Overview page.

Gathering required parameters


All Cognitive Services containers require three primary parameters. The end-user license
agreement (EULA) must be present with a value of accept. Additionally, both an endpoint URI and API key are
needed.
Endpoint URI {ENDPOINT_URI}

The Endpoint URI value is available on the Azure portal Overview page of the corresponding Cognitive Service
resource. Navigate to the Overview page, hover over the Endpoint, and a Copy to clipboard icon will appear.
Copy and use where needed.

Keys {API_KEY}

This key is used to start the container, and is available on the Azure portal's Keys page of the corresponding
Cognitive Service resource. Navigate to the Keys page, and click on the Copy to clipboard icon.
IMPORTANT
These subscription keys are used to access your Cognitive Service API. Do not share your keys. Store them securely, for
example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to
make an API call. When regenerating the first key, you can use the second key for continued access to the service.

The host computer


The host is a x64-based computer that runs the Docker container. It can be a computer on your premises or a
Docker hosting service in Azure, such as:
Azure Kubernetes Service.
Azure Container Instances.
A Kubernetes cluster deployed to Azure Stack. For more information, see Deploy Kubernetes to Azure Stack.
Container requirements and recommendations
The following table describes the minimum and recommended CPU cores and memory to allocate for Anomaly
Detector container.

QPS (QUERIES PER SECOND)    MINIMUM                RECOMMENDED

10 QPS                      4 core, 1-GB memory    8 core, 2-GB memory

20 QPS                      8 core, 2-GB memory    16 core, 4-GB memory

Each core must be at least 2.6 gigahertz (GHz) or faster.


Core and memory correspond to the --cpus and --memory settings, which are used as part of the docker run
command.

Get the container image with docker pull


Use the docker pull command to download a container image.

CONTAINER                               REPOSITORY

cognitive-services-anomaly-detector    mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest

TIP
You can use the docker images command to list your downloaded container images. For example, the following command
lists the ID, repository, and tag of each downloaded container image, formatted as a table:

docker images --format "table {{.ID}}\t{{.Repository}}\t{{.Tag}}"

IMAGE ID REPOSITORY TAG


<image-id> <repository-path/name> <tag-name>

Docker pull for the Anomaly Detector container

docker pull mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest


How to use the container
Once the container is on the host computer, use the following process to work with the container.
1. Run the container, with the required billing settings. More examples of the docker run command are
available.
2. Query the container's prediction endpoint.

Run the container with docker run


Use the docker run command to run the container. Refer to gathering required parameters for details on how to
get the {ENDPOINT_URI} and {API_KEY} values.
Examples of the docker run command are available.

docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}

This command:
Runs an Anomaly Detector container from the container image
Allocates one CPU core and 4 gigabytes (GB) of memory
Exposes TCP port 5000 and allocates a pseudo-TTY for the container
Automatically removes the container after it exits. The container image is still available on the host computer.

IMPORTANT
The Eula , Billing , and ApiKey options must be specified to run the container; otherwise, the container won't start.
For more information, see Billing.

Running multiple containers on the same host


If you intend to run multiple containers with exposed ports, make sure to run each container with a different
port. For example, run the first container on port 5000 and the second container on port 5001.
Replace the <container-registry> and <container-name> with the values of the containers you use. These do not
have to be the same container. You can have the Anomaly Detector container and the LUIS container running on
the HOST together or you can have multiple Anomaly Detector containers running.
Run the first container on port 5000.

docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
<container-registry>/microsoft/<container-name> \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}

Run the second container on port 5001.


docker run --rm -it -p 5001:5000 --memory 4g --cpus 1 \
<container-registry>/microsoft/<container-name> \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}

Each subsequent container should be on a different port.

Query the container's prediction endpoint


The container provides REST-based query prediction endpoint APIs.
Use the host, http://localhost:5000, for container APIs.
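For example, a sketch of checking the latest data point against a local container; it assumes the container exposes the same routes as the cloud API, that sample.json is a hypothetical request body file, and that no subscription key header is needed because the key was supplied at startup:

curl -v -X POST "http://localhost:5000/anomalydetector/v1.0/timeseries/last/detect" \
-H "Content-Type: application/json" \
-d "@sample.json"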

Validate that a container is running


There are several ways to validate that the container is running. Locate the External IP address and exposed port
of the container in question, and open your favorite web browser. Use the various request URLs below to
validate the container is running. The example request URLs listed below are http://localhost:5000, but your
specific container may vary. Keep in mind that you need to use your container's External IP address and exposed
port.

REQUEST URL                     PURPOSE

http://localhost:5000/          The container provides a home page.

http://localhost:5000/ready     Requested with GET, this provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes liveness and readiness probes.

http://localhost:5000/status    Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.

http://localhost:5000/swagger   The container provides a full set of documentation for the endpoints and a Try it out feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required.
Stop the container
To shut down the container, in the command-line environment where the container is running, select Ctrl+C.

Troubleshooting
If you run the container with an output mount and logging enabled, the container generates log files that are
helpful to troubleshoot issues that happen while starting or running the container.

TIP
For more troubleshooting information and guidance, see Cognitive Services containers frequently asked questions (FAQ).

Billing
The Anomaly Detector containers send billing information to Azure, using an Anomaly Detector resource on
your Azure account.
Queries to the container are billed at the pricing tier of the Azure resource that's used for the ApiKey .
Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing
endpoint. You must enable the containers to communicate billing information with the billing endpoint at all
times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed,
to Microsoft.
Connect to Azure
The container needs the billing argument values to run. These values allow the container to connect to the
billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to
Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing
endpoint is restored. The connection is attempted 10 times at the same time interval of 10 to 15 minutes. If it
can't connect to the billing endpoint within the 10 tries, the container stops serving requests. See the Cognitive
Services container FAQ for an example of the information sent to Microsoft for billing.
Billing arguments
The docker run command will start the container when all three of the following options are provided with
valid values:
OPTION    DESCRIPTION

ApiKey    The API key of the Cognitive Services resource that's used to track billing information. The value of this option must be set to an API key for the provisioned resource that's specified in Billing.

Billing   The endpoint of the Cognitive Services resource that's used to track billing information. The value of this option must be set to the endpoint URI of a provisioned Azure resource.

Eula      Indicates that you accepted the license for the container. The value of this option must be set to accept.

For more information about these options, see Configure containers.

Summary
In this article, you learned concepts and workflow for downloading, installing, and running Anomaly Detector
containers. In summary:
Anomaly Detector provides one Linux container for Docker, encapsulating anomaly detection with batch vs
streaming, expected range inference, and sensitivity tuning.
Container images are downloaded from the Microsoft Container Registry.
Container images run in Docker.
You can use either the REST API or SDK to call operations in Anomaly Detector containers by specifying the
host URI of the container.
You must specify billing information when instantiating a container.

IMPORTANT
Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to
enable the containers to communicate billing information with the metering service at all times. Cognitive Services
containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft.

Next steps
Review Configure containers for configuration settings
Deploy an Anomaly Detector container to Azure Container Instances
Learn more about Anomaly Detector API service
Configure Anomaly Detector containers
3/5/2021 • 8 minutes to read

The Anomaly Detector container runtime environment is configured using the docker run command
arguments. This container has several required settings, along with a few optional settings. Several examples of
the command are available. The container-specific settings are the billing settings.

Configuration settings
This container has the following configuration settings:

REQUIRED    SETTING               PURPOSE

Yes         ApiKey                Used to track billing information.

No          ApplicationInsights   Allows you to add Azure Application Insights telemetry support to your container.

Yes         Billing               Specifies the endpoint URI of the service resource on Azure.

Yes         Eula                  Indicates that you've accepted the license for the container.

No          Fluentd               Write log and, optionally, metric data to a Fluentd server.

No          Http Proxy            Configure an HTTP proxy for making outbound requests.

No          Logging               Provides ASP.NET Core logging support for your container.

No          Mounts                Read and write data from host computer to container and from container back to host computer.

IMPORTANT
The ApiKey , Billing , and Eula settings are used together, and you must provide valid values for all three of them;
otherwise your container won't start. For more information about using these configuration settings to instantiate a
container, see Billing.

ApiKey configuration setting


The ApiKey setting specifies the Azure resource key used to track billing information for the container. You must
specify a value for the ApiKey and the value must be a valid key for the Anomaly Detector resource specified for
the Billing configuration setting.
This setting can be found in the following place:
Azure portal: Anomaly Detector's Resource Management, under Keys
ApplicationInsights setting
The ApplicationInsights setting allows you to add Azure Application Insights telemetry support to your
container. Application Insights provides in-depth monitoring of your container. You can easily monitor your
container for availability, performance, and usage. You can also quickly identify and diagnose errors in your
container.
The following table describes the configuration settings supported under the ApplicationInsights section.

REQUIRED    NAME                 DATA TYPE    DESCRIPTION

No          InstrumentationKey   String       The instrumentation key of the Application Insights instance to which telemetry data for the container is sent. For more information, see Application Insights for ASP.NET Core.

Example:
InstrumentationKey=123456789
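For example, a sketch of passing the setting on the docker run command line, assuming the ASP.NET Core-style ApplicationInsights:InstrumentationKey key path and a placeholder key value:

docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY} \
ApplicationInsights:InstrumentationKey={INSTRUMENTATION_KEY}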

Billing configuration setting


The Billing setting specifies the endpoint URI of the Anomaly Detector resource on Azure used to meter billing
information for the container. You must specify a value for this configuration setting, and the value must be a
valid endpoint URI for an Anomaly Detector resource on Azure.
This setting can be found in the following place:
Azure portal: Anomaly Detector's Overview, labeled Endpoint

REQUIRED    NAME       DATA TYPE    DESCRIPTION

Yes         Billing    String       Billing endpoint URI. For more information on obtaining the billing URI, see gathering required parameters. For more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.

Eula setting
The Eula setting indicates that you've accepted the license for the container. You must specify a value for this
configuration setting, and the value must be set to accept .

REQUIRED    NAME    DATA TYPE    DESCRIPTION

Yes         Eula    String       License acceptance

Example:
Eula=accept

Cognitive Services containers are licensed under your agreement governing your use of Azure. If you do not
have an existing agreement governing your use of Azure, you agree that your agreement governing use of
Azure is the Microsoft Online Subscription Agreement, which incorporates the Online Services Terms. For
previews, you also agree to the Supplemental Terms of Use for Microsoft Azure Previews. By using the container
you agree to these terms.

Fluentd settings
Fluentd is an open-source data collector for unified logging. The Fluentd settings manage the container's
connection to a Fluentd server. The container includes a Fluentd logging provider, which allows your container to
write logs and, optionally, metric data to a Fluentd server.
The following table describes the configuration settings supported under the Fluentd section.

NAME                                   DATA TYPE    DESCRIPTION

Host                                   String       The IP address or DNS host name of the Fluentd server.

Port                                   Integer      The port of the Fluentd server. The default value is 24224.

HeartbeatMs                            Integer      The heartbeat interval, in milliseconds. If no event traffic has been sent before this interval expires, a heartbeat is sent to the Fluentd server. The default value is 60000 milliseconds (1 minute).

SendBufferSize                         Integer      The network buffer space, in bytes, allocated for send operations. The default value is 32768 bytes (32 kilobytes).

TlsConnectionEstablishmentTimeoutMs    Integer      The timeout, in milliseconds, to establish an SSL/TLS connection with the Fluentd server. The default value is 10000 milliseconds (10 seconds). If UseTLS is set to false, this value is ignored.

UseTLS                                 Boolean      Indicates whether the container should use SSL/TLS for communicating with the Fluentd server. The default value is false.
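For example, a sketch of wiring the container to a Fluentd server, assuming the Fluentd settings follow the same Section:Name argument convention used in the Logging examples below (the host and port values are placeholders):

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Fluentd:Host=<fluentd-host> \
Fluentd:Port=24224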

Http proxy credentials settings


If you need to configure an HTTP proxy for making outbound requests, use these two arguments:

NAME                DATA TYPE    DESCRIPTION

HTTP_PROXY          string       The proxy to use, for example, http://proxy:8888 (<proxy-url>).

HTTP_PROXY_CREDS    string       Any credentials needed to authenticate against the proxy, for example, username:password. This value must be in lower-case.

<proxy-user>        string       The user for the proxy.

<proxy-password>    string       The password associated with <proxy-user> for the proxy.

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
--mount type=bind,src=/home/azureuser/output,target=/output \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
HTTP_PROXY=<proxy-url> \
HTTP_PROXY_CREDS=<proxy-user>:<proxy-password>

Logging settings
The Logging settings manage ASP.NET Core logging support for your container. You can use the same
configuration settings and values for your container that you use for an ASP.NET Core application.
The following logging providers are supported by the container:

PROVIDER    PURPOSE

Console     The ASP.NET Core Console logging provider. All of the ASP.NET Core configuration settings and default values for this logging provider are supported.

Debug       The ASP.NET Core Debug logging provider. All of the ASP.NET Core configuration settings and default values for this logging provider are supported.

Disk        The JSON logging provider. This logging provider writes log data to the output mount.

This container command stores logging information in the JSON format to the output mount:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
--mount type=bind,src=/home/azureuser/output,target=/output \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Logging:Disk:Format=json

This container command shows debugging information, prefixed with dbug , while the container is running:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Logging:Console:LogLevel:Default=Debug

Disk logging
The Disk logging provider supports the following configuration settings:

NAME           DATA TYPE    DESCRIPTION

Format         String       The output format for log files. Note: This value must be set to json to enable the logging provider. If this value is specified without also specifying an output mount while instantiating a container, an error occurs.

MaxFileSize    Integer      The maximum size, in megabytes (MB), of a log file. When the size of the current log file meets or exceeds this value, a new log file is started by the logging provider. If -1 is specified, the size of the log file is limited only by the maximum file size, if any, for the output mount. The default value is 1.
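For example, building on the Disk logging command shown earlier, a sketch that rolls log files at 10 MB instead of the 1-MB default:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
--mount type=bind,src=/home/azureuser/output,target=/output \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Logging:Disk:Format=json \
Logging:Disk:MaxFileSize=10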

For more information about configuring ASP.NET Core logging support, see Settings file configuration.

Mount settings
Use bind mounts to read and write data to and from the container. You can specify an input mount or output
mount by specifying the --mount option in the docker run command.
The Anomaly Detector containers don't use input or output mounts to store training or service data.
The exact syntax of the host mount location varies depending on the host operating system. Additionally, the
host computer's mount location may not be accessible due to a conflict between permissions used by the
Docker service account and the host mount location permissions.

OPTIONAL       NAME      DATA TYPE    DESCRIPTION

Not allowed    Input     String       Anomaly Detector containers do not use this.

Optional       Output    String       The target of the output mount. The default value is /output. This is the location of the logs. This includes container logs.

Example:
--mount
type=bind,src=c:\output,target=/output

Example docker run commands


The following examples use the configuration settings to illustrate how to write and use docker run commands.
Once running, the container continues to run until you stop it.
Line-continuation character: The Docker commands in the following sections use the backslash, \, as a
line continuation character for a bash shell. Replace or remove this based on your host operating system's
requirements. For example, the line continuation character for Windows is a caret, ^. Replace the backslash
with the caret.
Argument order : Do not change the order of the arguments unless you are very familiar with Docker
containers.
Replace value in brackets, {} , with your own values:

PLACEHOLDER      VALUE                                                                                          FORMAT OR EXAMPLE
{API_KEY}        The endpoint key of the Anomaly Detector resource on the Azure Anomaly Detector Keys page.     xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
{ENDPOINT_URI}   The billing endpoint value is available on the Azure Anomaly Detector Overview page.           See gathering required parameters for explicit examples.

NOTE
New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of
regional endpoints, see Custom subdomain names for Cognitive Services.

IMPORTANT
The Eula , Billing , and ApiKey options must be specified to run the container; otherwise, the container won't start.
For more information, see Billing. The ApiKey value is the Key from the Azure Anomaly Detector Resource keys page.

Anomaly Detector container Docker examples
The following Docker examples are for the Anomaly Detector container.
Basic example

docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
Eula=accept \
Billing={ENDPOINT_URI} \
ApiKey={API_KEY}

Logging example with command-line arguments

docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector \
Eula=accept \
Billing={ENDPOINT_URI} ApiKey={API_KEY} \
Logging:Console:LogLevel:Default=Information

Next steps
Deploy an Anomaly Detector container to Azure Container Instances
Learn more about Anomaly Detector API service
Deploy an Anomaly Detector container to Azure
Container Instances
3/5/2021 • 4 minutes to read • Edit Online

Learn how to deploy the Cognitive Services Anomaly Detector container to Azure Container Instances. This
procedure demonstrates how to create an Anomaly Detector resource, pull the associated container image, and
exercise the orchestration of the two from a browser. Using containers shifts developers' attention away from
managing infrastructure and toward application development.

Prerequisites
Use an Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
Install the Azure CLI (az).
Install the Docker engine and validate that the Docker CLI works in a console window.

Create an Anomaly Detector resource
1. Sign into the Azure portal.
2. Select Create Anomaly Detector resource.
3. Enter all required settings:

SETTING          VALUE
Name             Desired name (2-64 characters)
Subscription     Select appropriate subscription
Location         Select any nearby and available location
Pricing Tier     F0 - 10 calls per second, 20K transactions per month. Or: S0 - 80 calls per second
Resource Group   Select an available resource group

4. Click Create and wait for the resource to be created. After it is created, navigate to the resource page.
5. Collect the configured endpoint and an API key:

KEYS AND ENDPOINT TAB IN THE PORTAL   SETTING    VALUE
Over view                             Endpoint   Copy the endpoint. It looks similar to https://<your-resource-name>.cognitiveservices.azure.com/
Keys                                  API Key    Copy one of the two keys. It is a 32-character alphanumeric string with no spaces or dashes, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx .

Create an Azure Container Instance resource from the Azure CLI
The YAML below defines the Azure Container Instance resource. Copy and paste the contents into a new file,
named my-aci.yaml, and replace the commented values with your own. Refer to the template format for valid
YAML. Refer to the container repositories and images for the available image names and their corresponding
repository. For more information on the YAML reference for Container Instances, see YAML reference: Azure
Container Instances.

apiVersion: 2018-10-01
location: # < Valid location >
name: # < Container Group name >
properties:
imageRegistryCredentials: # This is only required if you are pulling a non-public image that requires authentication to access, for example Text Analytics for health.
- server: containerpreview.azurecr.io
username: # < The username for the preview container registry >
password: # < The password for the preview container registry >
containers:
- name: # < Container name >
properties:
image: # < Repository/Image name >
environmentVariables: # These env vars are required
- name: eula
value: accept
- name: billing
value: # < Service specific Endpoint URL >
- name: apikey
value: # < Service specific API key >
resources:
requests:
cpu: 4 # Always refer to recommended minimal resources
memoryInGb: 8 # Always refer to recommended minimal resources
ports:
- port: 5000
osType: Linux
volumes: # This node is only required for container instances that pull their model in at runtime, such as LUIS.
- name: aci-file-share
azureFile:
shareName: # < File share name >
storageAccountName: # < Storage account name>
storageAccountKey: # < Storage account key >
restartPolicy: OnFailure
ipAddress:
type: Public
ports:
- protocol: tcp
port: 5000
tags: null
type: Microsoft.ContainerInstance/containerGroups
NOTE
Not all locations have the same CPU and Memory availability. Refer to the location and resources table for the listing of
available resources for containers per location and OS.

We'll rely on the YAML file we created for the az container create command. From the Azure CLI, execute the
az container create command replacing the <resource-group> with your own. Additionally, for securing values
within a YAML deployment refer to secure values.

az container create -g <resource-group> -f my-aci.yaml

The output of the command is Running... if valid. After some time, the output changes to a JSON string
representing the newly created ACI resource. The container image will more than likely not be available for a
while, but the resource is now deployed.

TIP
Pay close attention to the locations of public preview Azure Cognitive Service offerings, as the YAML will need to be
adjusted to match the location.

Validate that a container is running
There are several ways to validate that the container is running. Locate the external IP address and exposed port
of the container in question, and open your favorite web browser. Use the various request URLs below to
validate the container is running. The example request URLs listed below are http://localhost:5000 , but your
specific container may vary. Keep in mind that you need to use your container's external IP address and exposed
port.

REQUEST URL                     PURPOSE
http://localhost:5000/          The container provides a home page.
http://localhost:5000/ready    Requested with GET, this provides a verification that the container is ready to accept a query against the model. This request can be used for Kubernetes liveness and readiness probes.
http://localhost:5000/status   Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.
http://localhost:5000/swagger   The container provides a full set of documentation for the endpoints and a Try it out feature. With this feature, you can enter your settings into a web-based HTML form and make the query without having to write any code. After the query returns, an example CURL command is provided to demonstrate the HTTP headers and body format that's required.
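If you prefer to script this check rather than use a browser, the same endpoints can be probed programmatically. Below is a minimal Python sketch, assuming the container is reachable at localhost:5000; substitute your container's external IP address and exposed port.

import requests

# Assumed address; replace with your container's external IP address and exposed port.
base_url = "http://localhost:5000"

# /ready verifies the container is ready to accept a query against the model.
ready = requests.get(f"{base_url}/ready", timeout=10)
print("ready:", ready.status_code)

# /status verifies the api-key used to start the container, without causing an endpoint query.
status = requests.get(f"{base_url}/status", timeout=10)
print("status:", status.status_code)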
Next steps
Review Install and run containers for pulling the container image and run the container
Review Configure containers for configuration settings
Learn more about Anomaly Detector API service
Configure Azure Cognitive Services virtual networks
3/5/2021 • 16 minutes to read • Edit Online

Azure Cognitive Services provides a layered security model. This model enables you to secure your Cognitive
Services accounts to a specific subset of networks. When network rules are configured, only applications
requesting data over the specified set of networks can access the account. You can limit access to your resources
with request filtering, allowing only requests originating from specified IP addresses, IP ranges, or from a list of
subnets in Azure Virtual Networks.
An application that accesses a Cognitive Services resource when network rules are in effect requires
authorization. Authorization is supported with Azure Active Directory (Azure AD) credentials or with a valid API
key.

IMPORTANT
Turning on firewall rules for your Cognitive Services account blocks incoming requests for data by default. In order to
allow requests through, one of the following conditions needs to be met:

The request should originate from a service operating within an Azure Virtual Network (VNet) on the
allowed subnet list of the target Cognitive Services account. The endpoint in requests originating from the
VNet needs to be set as the custom subdomain of your Cognitive Services account.
Or the request should originate from an allowed list of IP addresses.
Requests that are blocked include those from other Azure services, from the Azure portal, from logging and
metrics services, and so on.

NOTE
This article has been updated to use the Azure Az PowerShell module. The Az PowerShell module is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see Install Azure PowerShell.
To learn how to migrate to the Az PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.

Scenarios
To secure your Cognitive Services resource, you should first configure a rule to deny access to traffic from all
networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from
specific VNets. This configuration enables you to build a secure network boundary for your applications. You can
also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections
from specific internet or on-premises clients.
Network rules are enforced on all network protocols to Azure Cognitive Services, including REST and
WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be
configured. You can apply network rules to existing Cognitive Services resources, or when you create new
Cognitive Services resources. Once network rules are applied, they're enforced for all requests.

Supported regions and service offerings


Virtual networks (VNETs) are supported in regions where Cognitive Services are available. Cognitive Services
supports service tags for network rules configuration. The services listed below are included in the
CognitiveServicesManagement service tag.
Anomaly Detector
Computer Vision
Content Moderator
Custom Vision
Face
Form Recognizer
Immersive Reader
Language Understanding (LUIS)
Personalizer
Speech Services
Text Analytics
QnA Maker
Translator Text

NOTE
If you're using LUIS or Speech Services, the CognitiveServicesManagement tag only enables you to use the service
using the SDK or REST API. To access and use the LUIS portal and/or Speech Studio from a virtual network, you will need
to use the following tags:
AzureActiveDirectory
AzureFrontDoor.Frontend
AzureResourceManager
CognitiveServicesManagement

Change the default network access rule


By default, Cognitive Services resources accept connections from clients on any network. To limit access to
selected networks, you must first change the default action.

WARNING
Making changes to network rules can impact your applications' ability to connect to Azure Cognitive Services. Setting the
default network rule to deny blocks all access to the data unless specific network rules that grant access are also applied.
Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If
you are allowlisting IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses
from your on-premises network.

Managing default network access rules


You can manage default network access rules for Cognitive Services resources through the Azure portal,
PowerShell, or the Azure CLI.
Azure portal
PowerShell
Azure CLI

1. Go to the Cognitive Services resource you want to secure.


2. Select the RESOURCE MANAGEMENT menu called Virtual network .
3. To deny access by default, choose to allow access from Selected networks . With the Selected
networks setting alone, unaccompanied by configured Virtual networks or Address ranges , all
access is effectively denied. When all access is denied, requests attempting to consume the Cognitive
Services resource aren't permitted. The Azure portal, Azure PowerShell, or Azure CLI can still be used to
configure the Cognitive Services resource.
4. To allow traffic from all networks, choose to allow access from All networks .

5. Select Save to apply your changes.

Grant access from a virtual network


You can configure Cognitive Services resources to allow access only from specific subnets. The allowed subnets
may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging
to a different Azure Active Directory tenant.
Enable a service endpoint for Azure Cognitive Services within the VNet. The service endpoint routes traffic from
the VNet through an optimal path to the Azure Cognitive Services service. The identities of the subnet and the
virtual network are also transmitted with each request. Administrators can then configure network rules for the
Cognitive Services resource that allow requests to be received from specific subnets in a VNet. Clients granted
access via these network rules must continue to meet the authorization requirements of the Cognitive Services
resource to access the data.
Each Cognitive Services resource supports up to 100 virtual network rules, which may be combined with IP
network rules.
Required permissions
To apply a virtual network rule to a Cognitive Services resource, the user must have the appropriate permissions
for the subnets being added. The required permission is the default Contributor role, or the Cognitive Services
Contributor role. Required permissions can also be added to custom role definitions.
The Cognitive Services resource and the virtual networks granted access may be in different subscriptions, including
subscriptions that are a part of a different Azure AD tenant.

NOTE
Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory
tenant is currently only supported through PowerShell, CLI, and REST APIs. Such rules cannot be configured through the
Azure portal, though they may be viewed in the portal.

Managing virtual network rules


You can manage virtual network rules for Cognitive Services resources through the Azure portal, PowerShell, or
the Azure CLI.

Azure portal
PowerShell
Azure CLI

1. Go to the Cognitive Services resource you want to secure.


2. Select the RESOURCE MANAGEMENT menu called Virtual network .
3. Check that you've selected to allow access from Selected networks .
4. To grant access to a virtual network with an existing network rule, under Virtual networks , select Add
existing virtual network .
5. Select the Virtual networks and Subnets options, and then select Enable .
6. To create a new virtual network and grant it access, select Add new virtual network .
7. Provide the information necessary to create the new virtual network, and then select Create .
NOTE
If a service endpoint for Azure Cognitive Services wasn't previously configured for the selected virtual network and
subnets, you can configure it as part of this operation.
Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection
during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use
PowerShell, CLI, or REST APIs.

8. To remove a virtual network or subnet rule, select ... to open the context menu for the virtual network or
subnet, and select Remove .
9. Select Save to apply your changes.

IMPORTANT
Be sure to set the default rule to deny , or network rules have no effect.

Grant access from an internet IP range


You can configure Cognitive Services resources to allow access from specific public internet IP address ranges.
This configuration grants access to specific services and on-premises networks, effectively blocking general
internet traffic.
Provide allowed internet address ranges using CIDR notation in the form 16.17.18.0/24 or as individual IP
addresses like 16.17.18.19 .

TIP
Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using
individual IP address rules.

IP network rules are only allowed for public internet IP addresses. IP address ranges reserved for private
networks (as defined in RFC 1918) aren't allowed in IP rules. Private networks include addresses that start with
10.* , 172.16.* - 172.31.* , and 192.168.* .

Only IPV4 addresses are supported at this time. Each Cognitive Services resource supports up to 100 IP network
rules, which may be combined with Virtual network rules.
Configuring access from on-premises networks
To grant access from your on-premises networks to your Cognitive Services resource with an IP network rule,
you must identify the internet facing IP addresses used by your network. Contact your network administrator
for help.
If you're using ExpressRoute on-premises for public peering or Microsoft peering, you'll need to identify the NAT
IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied
to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the
NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow
access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To
find your public peering ExpressRoute circuit IP addresses, open a support ticket with ExpressRoute via the
Azure portal. Learn more about NAT for ExpressRoute public and Microsoft peering.
Managing IP network rules
You can manage IP network rules for Cognitive Services resources through the Azure portal, PowerShell, or the
Azure CLI.

Azure portal
PowerShell
Azure CLI

1. Go to the Cognitive Services resource you want to secure.


2. Select the RESOURCE MANAGEMENT menu called Virtual network .
3. Check that you've selected to allow access from Selected networks .
4. To grant access to an internet IP range, enter the IP address or address range (in CIDR format) under
Firewall > Address Range . Only valid public IP (non-reserved) addresses are accepted.

5. To remove an IP network rule, select the trash can icon next to the address range.
6. Select Save to apply your changes.

IMPORTANT
Be sure to set the default rule to deny , or network rules have no effect.

Use private endpoints


You can use private endpoints for your Cognitive Services resources to allow clients on a virtual network (VNet)
to securely access data over a Private Link. The private endpoint uses an IP address from the VNet address space
for your Cognitive Services resource. Network traffic between the clients on the VNet and the resource traverses
the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet.
Private endpoints for Cognitive Services resources let you:
Secure your Cognitive Services resource by configuring the firewall to block all connections on the public
endpoint for the Cognitive Services service.
Increase security for the VNet, by enabling you to block exfiltration of data from the VNet.
Securely connect to Cognitive Services resources from on-premises networks that connect to the VNet using
VPN or ExpressRoutes with private-peering.
Conceptual overview
A private endpoint is a special network interface for an Azure resource in your VNet. Creating a private endpoint
for your Cognitive Services resource provides secure connectivity between clients in your VNet and your
resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection
between the private endpoint and the Cognitive Services service uses a secure private link.
Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same
connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech
Services, which require a separate endpoint. See the section on Private endpoints with the Speech Services.
Private endpoints can be used with all protocols supported by the Cognitive Services resource, including REST.
Private endpoints can be created in subnets that use Service Endpoints. Clients in a subnet can connect to one
Cognitive Services resource using private endpoint, while using service endpoints to access others.
When you create a private endpoint for a Cognitive Services resource in your VNet, a consent request is sent for
approval to the Cognitive Services resource owner. If the user requesting the creation of the private endpoint is
also an owner of the resource, this consent request is automatically approved.
Cognitive Services resource owners can manage consent requests and the private endpoints, through the
'Private endpoints' tab for the Cognitive Services resource in the Azure portal.
Private endpoints
When creating the private endpoint, you must specify the Cognitive Services resource it connects to. For more
information on creating a private endpoint, see:
Create a private endpoint using the Private Link Center in the Azure portal
Create a private endpoint using Azure CLI
Create a private endpoint using Azure PowerShell
Connecting to private endpoints
Clients on a VNet using the private endpoint should use the same connection string for the Cognitive Services
resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a
separate endpoint. See the section on Private endpoints with the Speech Services. We rely upon DNS resolution
to automatically route the connections from the VNet to the Cognitive Services resource over a private link.
We create a private DNS zone attached to the VNet with the necessary updates for the private endpoints, by
default. However, if you're using your own DNS server, you may need to make additional changes to your DNS
configuration. The section on DNS changes below describes the updates required for private endpoints.
Private endpoints with the Speech Services
See Using Speech Services with private endpoints provided by Azure Private Link.
DNS changes for private endpoints
When you create a private endpoint, the DNS CNAME resource record for the Cognitive Services resource is
updated to an alias in a subdomain with the prefix 'privatelink'. By default, we also create a private DNS zone,
corresponding to the 'privatelink' subdomain, with the DNS A resource records for the private endpoints.
When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public
endpoint of the Cognitive Services resource. When resolved from the VNet hosting the private endpoint, the
endpoint URL resolves to the private endpoint's IP address.
This approach enables access to the Cognitive Services resource using the same connection string for clients in
the VNet hosting the private endpoints and clients outside the VNet.
If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain
name (FQDN) for the Cognitive Services resource endpoint to the private endpoint IP address. Configure your
DNS server to delegate your private link subdomain to the private DNS zone for the VNet.

TIP
When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Cognitive
Services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating
the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and
adding the DNS A records.

For more information on configuring your own DNS server to support private endpoints, refer to the following
articles:
Name resolution for resources in Azure virtual networks
DNS configuration for private endpoints
Pricing
For pricing details, see Azure Private Link pricing.

Next steps
Explore the various Azure Cognitive Services
Learn more about Azure Virtual Network Service Endpoints
Authenticate requests to Azure Cognitive Services
3/5/2021 • 8 minutes to read • Edit Online

Each request to an Azure Cognitive Service must include an authentication header. This header passes along a
subscription key or access token, which is used to validate your subscription for a service or group of services.
In this article, you'll learn about three ways to authenticate a request and the requirements for each.
Authenticate with a single-service or multi-service subscription key
Authenticate with a token
Authenticate with Azure Active Directory (AAD)

Prerequisites
Before you make a request, you need an Azure account and an Azure Cognitive Services subscription. If you
already have an account, go ahead and skip to the next section. If you don't have an account, we have a guide to
get you set up in minutes: Create a Cognitive Services account for Azure.
You can get your subscription key from the Azure portal after creating your account.

Authentication headers
Let's quickly review the authentication headers available for use with Azure Cognitive Services.

HEADER                         DESCRIPTION
Ocp-Apim-Subscription-Key      Use this header to authenticate with a subscription key for a specific service or a multi-service subscription key.
Ocp-Apim-Subscription-Region   This header is only required when using a multi-service subscription key with the Translator service. Use this header to specify the subscription region.
Authorization                  Use this header if you are using an authentication token. The steps to perform a token exchange are detailed in the following sections. The value provided follows this format: Bearer <TOKEN> .

Authenticate with a single-service subscription key


The first option is to authenticate a request with a subscription key for a specific service, like Translator. The keys
are available in the Azure portal for each resource that you've created. To use a subscription key to authenticate
a request, it must be passed along as the Ocp-Apim-Subscription-Key header.
These sample requests demonstrate how to use the Ocp-Apim-Subscription-Key header. Keep in mind, when
using this sample you'll need to include a valid subscription key.
This is a sample call to the Bing Web Search API:

curl -X GET 'https://api.cognitive.microsoft.com/bing/v7.0/search?q=Welsch%20Pembroke%20Corgis' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' | json_pp
This is a sample call to the Translator service:

curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
-H 'Content-Type: application/json' \
--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp

The following video demonstrates using a Cognitive Services key.

Authenticate with a multi-service subscription key


WARNING
At this time, these services don't support multi-service keys: QnA Maker, Speech Services, Custom Vision, and Anomaly
Detector.

This option also uses a subscription key to authenticate requests. The main difference is that a subscription key
is not tied to a specific service, rather, a single key can be used to authenticate requests for multiple Cognitive
Services. See Cognitive Services pricing for information about regional availability, supported features, and
pricing.
The subscription key is provided in each request as the Ocp-Apim-Subscription-Key header.

Supported regions
When using the multi-service subscription key to make a request to api.cognitive.microsoft.com , you must
include the region in the URL. For example: westus.api.cognitive.microsoft.com .
When using multi-service subscription key with the Translator service, you must specify the subscription region
with the Ocp-Apim-Subscription-Region header.
Multi-service authentication is supported in these regions:
australiaeast
brazilsouth
canadacentral
centralindia
eastasia
eastus
japaneast
northeurope
southcentralus
southeastasia
uksouth
westcentralus
westeurope
westus
westus2

Sample requests
This is a sample call to the Bing Web Search API:

curl -X GET 'https://YOUR-REGION.api.cognitive.microsoft.com/bing/v7.0/search?q=Welsch%20Pembroke%20Corgis' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' | json_pp

This is a sample call to the Translator service:

curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
-H 'Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY' \
-H 'Ocp-Apim-Subscription-Region: YOUR_SUBSCRIPTION_REGION' \
-H 'Content-Type: application/json' \
--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp

Authenticate with an authentication token


Some Azure Cognitive Services accept, and in some cases require, an authentication token. Currently, these
services support authentication tokens:
Text Translation API
Speech Services: Speech-to-text REST API
Speech Services: Text-to-speech REST API

NOTE
QnA Maker also uses the Authorization header, but requires an endpoint key. For more information, see QnA Maker: Get
answer from knowledge base.

WARNING
The services that support authentication tokens may change over time, please check the API reference for a service
before using this authentication method.

Both single-service and multi-service subscription keys can be exchanged for authentication tokens.
Authentication tokens are valid for 10 minutes.
Authentication tokens are included in a request as the Authorization header. The token value provided must be
preceded by Bearer , for example: Bearer YOUR_AUTH_TOKEN .
Sample requests
Use this URL to exchange a subscription key for an authentication token:
https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken .

curl -v -X POST \
"https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
-H "Content-type: application/x-www-form-urlencoded" \
-H "Content-length: 0" \
-H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY"

These multi-service regions support token exchange:


australiaeast
brazilsouth
canadacentral
centralindia
eastasia
eastus
japaneast
northeurope
southcentralus
southeastasia
uksouth
westcentralus
westeurope
westus
westus2

After you get an authentication token, you'll need to pass it in each request as the Authorization header. This is
a sample call to the Translator service:

curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de' \
-H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
-H 'Content-Type: application/json' \
--data-raw '[{ "text": "How much for the cup of coffee?" }]' | json_pp

Authenticate with Azure Active Directory


IMPORTANT
1. Currently, only the Computer Vision API, Face API, Text Analytics API, Immersive Reader, Form Recognizer, Anomaly
Detector, QnA Maker, and all Bing services except Bing Custom Search support authentication using Azure Active
Directory (AAD).
2. AAD authentication always needs to be used together with the custom subdomain name of your Azure resource. Regional
endpoints do not support AAD authentication.

In the previous sections, we showed you how to authenticate against Azure Cognitive Services using either a
single-service or multi-service subscription key. While these keys provide a quick and easy path to start
development, they fall short in more complex scenarios that require Azure role-based access control (Azure
RBAC). Let's take a look at what's required to authenticate using Azure Active Directory (AAD).
In the following sections, you'll use either the Azure Cloud Shell environment or the Azure CLI to create a
subdomain, assign roles, and obtain a bearer token to call the Azure Cognitive Services. If you get stuck, links
are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI.
Create a resource with a custom subdomain
The first step is to create a custom subdomain. If you want to use an existing Cognitive Services resource which
does not have a custom subdomain name, follow the instructions in Cognitive Services Custom Subdomains to
enable a custom subdomain for your resource.
1. Start by opening the Azure Cloud Shell. Then select a subscription:

Set-AzContext -SubscriptionName <SubscriptionName>

2. Next, create a Cognitive Services resource with a custom subdomain. The subdomain name needs to be
globally unique and cannot include special characters, such as: ".", "!", ",".

$account = New-AzCognitiveServicesAccount -ResourceGroupName <RESOURCE_GROUP_NAME> -Name <ACCOUNT_NAME> -Type <ACCOUNT_TYPE> -SkuName <SUBSCRIPTION_TYPE> -Location <REGION> -CustomSubdomainName <UNIQUE_SUBDOMAIN>

3. If successful, the Endpoint should show the subdomain name unique to your resource.
Assign a role to a service principal
Now that you have a custom subdomain associated with your resource, you're going to need to assign a role to
a service principal.

NOTE
Keep in mind that Azure role assignments may take up to five minutes to propagate.

1. First, let's register an AAD application.

$SecureStringPassword = ConvertTo-SecureString -String <YOUR_PASSWORD> -AsPlainText -Force

$app = New-AzADApplication -DisplayName <APP_DISPLAY_NAME> -IdentifierUris <APP_URIS> -Password $SecureStringPassword

You're going to need the ApplicationId in the next step.


2. Next, you need to create a service principal for the AAD application.

New-AzADServicePrincipal -ApplicationId <APPLICATION_ID>

NOTE
If you register an application in the Azure portal, this step is completed for you.

3. The last step is to assign the "Cognitive Services User" role to the service principal (scoped to the
resource). By assigning a role, you're granting service principal access to this resource. You can grant the
same service principal access to multiple resources in your subscription.
NOTE
The ObjectId of the service principal is used, not the ObjectId for the application. The ACCOUNT_ID will be the
Azure resource Id of the Cognitive Services account you created. You can find Azure resource Id from "properties"
of the resource in Azure portal.

New-AzRoleAssignment -ObjectId <SERVICE_PRINCIPAL_OBJECTID> -Scope <ACCOUNT_ID> -RoleDefinitionName "Cognitive Services User"

Sample request
In this sample, a password is used to authenticate the service principal. The token provided is then used to call
the Computer Vision API.
1. Get your TenantId :

$context=Get-AzContext
$context.Tenant.Id

2. Get a token:

NOTE
If you're using Azure Cloud Shell, the SecureClientSecret class isn't available.

PowerShell
Azure Cloud Shell

$authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>"
$secureSecretObject = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.SecureClientSecret" -ArgumentList $SecureStringPassword
$clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, $secureSecretObject
$token = $authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result
$token

3. Call the Computer Vision API:

$url = $account.Endpoint + "vision/v1.0/models"
$result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"=$token.CreateAuthorizationHeader()} -Verbose
$result | ConvertTo-Json

Alternatively, the service principal can be authenticated with a certificate. Besides a service principal, user principal
authentication is also supported by having permissions delegated through another AAD application. In this case, instead of
passwords or certificates, users would be prompted for two-factor authentication when acquiring a token.

Authorize access to managed identities


Cognitive Services support Azure Active Directory (Azure AD) authentication with managed identities for Azure
resources. Managed identities for Azure resources can authorize access to Cognitive Services resources using
Azure AD credentials from applications running in Azure virtual machines (VMs), function apps, virtual machine
scale sets, and other services. By using managed identities for Azure resources together with Azure AD
authentication, you can avoid storing credentials with your applications that run in the cloud.
Enable managed identities on a VM
Before you can use managed identities for Azure resources to authorize access to Cognitive Services resources
from your VM, you must enable managed identities for Azure resources on the VM. To learn how to enable
managed identities for Azure Resources, see:
Azure portal
Azure PowerShell
Azure CLI
Azure Resource Manager template
Azure Resource Manager client libraries
For more information about managed identities, see Managed identities for Azure resources.

See also
What is Cognitive Services?
Cognitive Services pricing
Custom subdomains
Multivariate time series Anomaly Detector best
practices
4/12/2021 • 6 minutes to read • Edit Online

This article will provide guidance around recommended practices to follow when using the multivariate
Anomaly Detector APIs.

How to prepare data for training


To use the Anomaly Detector multivariate APIs, we need to train our own model before using detection. Data
used for training is a batch of time series, each time series should be in CSV format with two columns,
timestamp and value. All of the time series should be zipped into one zip file and uploaded to Azure Blob
storage. By default the file name will be used to represent the variable for the time series. Alternatively, an extra
meta.json file can be included in the zip file if you wish the name of the variable to be different from the .zip file
name. Once we generate a blob SAS (Shared access signatures) URL, we can use it for training.
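As an illustration of this layout, the following is a minimal Python sketch that writes one CSV per variable and zips them for upload. The variable names and values here are placeholders, not part of the service; timestamps are formatted as ISO 8601 UTC.

import os
import zipfile

import pandas as pd

# Placeholder variables; each becomes one CSV whose file name is the variable name.
timestamps = pd.date_range("2021-01-01", periods=15000, freq="T")
series = {
    "vibration": range(15000),
    "temperature": range(15000),
}

os.makedirs("variables", exist_ok=True)
with zipfile.ZipFile("training-data.zip", "w") as zf:
    for name, values in series.items():
        df = pd.DataFrame({
            # ISO 8601 UTC timestamps, e.g. 2021-01-01T00:00:00Z
            "timestamp": timestamps.strftime("%Y-%m-%dT%H:%M:%SZ"),
            "value": values,
        })
        csv_path = os.path.join("variables", f"{name}.csv")
        df.to_csv(csv_path, index=False)
        zf.write(csv_path, arcname=f"{name}.csv")

# Upload training-data.zip to Azure Blob storage, then generate a SAS URL for training.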

Data quality and quantity


The Anomaly Detector multivariate API uses state-of-the-art deep neural networks to learn normal patterns
from historical data and predicts whether future values are anomalies. The quality and quantity of training data
is important to train an optimal model. As the model learns normal patterns from historical data, the training
data should represent the overall normal state of the system. It is hard for the model to learn these types of
patterns if the training data is full of anomalies. Also, the model has millions of parameters and it needs a
minimum number of data points to learn an optimal set of parameters. The general rule is that you need to
provide at least 15,000 data points per variable to properly train the model. The more data, the better the model.
It is common that many time series have missing values, which may affect the performance of trained models.
The missing ratio of each time series should be controlled under a reasonable value. A time series having 90%
values missing provides little information about normal patterns of the system. Even worse, the model may
consider filled values as normal patterns, which are usually straight segments or constant values. When new
data flows in, the data might be detected as anomalies.
A recommended max missing value threshold is 20%, but a higher threshold might be acceptable under some
circumstances. For example, suppose you have a time series with one-minute granularity and another time series with
hourly granularity. Each hour there are 60 data points for the minute-level data and 1 data point for the hourly data,
which means that the missing ratio for the hourly data is 98.33%. However, it is fine to fill the hourly data with its
single available value per hour if the hourly time series does not typically fluctuate too much.

Parameters
Sliding window
Multivariate anomaly detection takes a segment of data points of length slidingWindow as input and decides if
the next data point is an anomaly. The larger the sample length, the more data will be considered for a decision.
You should keep two things in mind when choosing a proper value for slidingWindow : properties of input data,
and the trade-off between training/inference time and potential performance improvement. slidingWindow
is an integer between 28 and 2880. You may decide how many data points are used as inputs based on
whether your data is periodic, and the sampling rate for your data.
When your data is periodic, you may include 1 - 3 cycles as an input, and when your data is sampled at a high
frequency (small granularity) like minute-level or second-level data, you may select more data as an input.
Another consideration is that longer inputs may cause longer training/inference time, and there is no guarantee that
more input points will lead to performance gains. Too few data points, on the other hand, may make it difficult for
the model to converge to an optimal solution. For example, it is hard to detect anomalies when the input data only has
two points.
Align mode
The parameter alignMode is used to indicate how you want to align multiple time series on timestamps. This is
because many time series have missing values and we need to align them on the same timestamps before
further processing. There are two options for this parameter, inner join and outer join . inner join means
we will report detection results on timestamps on which every time series has a value, while outer join
means we will report detection results on timestamps for any time series that has a value. The alignMode
will also affect the input sequence of the model , so choose a suitable alignMode for your scenario
because the results might be significantly different.
Here we show an example to explain the different alignMode values.
Series1

TIMESTAMP     VALUE
2020-11-01    1
2020-11-02    2
2020-11-04    4
2020-11-05    5

Series2

TIMESTAMP     VALUE
2020-11-01    1
2020-11-02    2
2020-11-03    3
2020-11-04    4

Inner join two series

TIMESTAMP     SERIES1    SERIES2
2020-11-01    1          1
2020-11-02    2          2
2020-11-04    4          4

Outer join two series

TIMESTAMP     SERIES1    SERIES2
2020-11-01    1          1
2020-11-02    2          2
2020-11-03    NA         3
2020-11-04    4          4
2020-11-05    5          NA
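To make the alignment behavior concrete, here is a small illustrative Python sketch that reproduces the two join results above with pandas. The service performs this alignment internally; the sketch is purely for intuition.

import pandas as pd

# Rebuild the two example series above, indexed by timestamp.
s1 = pd.DataFrame({"series1": [1, 2, 4, 5]},
                  index=pd.to_datetime(["2020-11-01", "2020-11-02", "2020-11-04", "2020-11-05"]))
s2 = pd.DataFrame({"series2": [1, 2, 3, 4]},
                  index=pd.to_datetime(["2020-11-01", "2020-11-02", "2020-11-03", "2020-11-04"]))

print(s1.join(s2, how="inner"))  # timestamps where every series has a value
print(s1.join(s2, how="outer"))  # timestamps where any series has a value; gaps appear as NaN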

Fill not available (NA )


After variables are aligned on timestamps by outer join, there might be some Not Available ( NA ) values in some
of the variables. You can specify the method used to fill these NA values. The options for fillNAMethod are Linear ,
Previous , Subsequent , Zero , and Fixed .

OPTION       METHOD
Linear       Fill NA values by linear interpolation.
Previous     Propagate the last valid value to fill gaps. Example: [1, 2, nan, 3, nan, 4] -> [1, 2, 2, 3, 3, 4]
Subsequent   Use the next valid value to fill gaps. Example: [1, 2, nan, 3, nan, 4] -> [1, 2, 3, 3, 4, 4]
Zero         Fill NA values with 0.
Fixed        Fill NA values with a specified valid value that should be provided in paddingValue .
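The following short Python sketch illustrates the behavior of each option using pandas equivalents. It is for intuition only and is not the service's implementation; note that Linear produces interpolated values (2.5, 3.5) rather than copies.

import numpy as np
import pandas as pd

s = pd.Series([1, 2, np.nan, 3, np.nan, 4])

print(s.interpolate(method="linear").tolist())  # Linear -> [1.0, 2.0, 2.5, 3.0, 3.5, 4.0]
print(s.ffill().tolist())                       # Previous -> [1.0, 2.0, 2.0, 3.0, 3.0, 4.0]
print(s.bfill().tolist())                       # Subsequent -> [1.0, 2.0, 3.0, 3.0, 4.0, 4.0]
print(s.fillna(0).tolist())                     # Zero -> [1.0, 2.0, 0.0, 3.0, 0.0, 4.0]
print(s.fillna(9).tolist())                     # Fixed, with paddingValue = 9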

Model analysis
Training latency
Multivariate Anomaly Detection training can be time-consuming, especially when you have a large quantity of
timestamps used for training. Therefore, we allow part of the training process to be asynchronous. Typically,
users submit a training task through the Train Model API, then get the model status through the Get Multivariate
Model API. Here we demonstrate how to extract the remaining time before training completes. In the Get
Multivariate Model API response, there is an item named diagnosticsInfo . In this item, there is a modelState
element. To calculate the remaining time, we need to use epochIds and latenciesInSeconds . An epoch represents
one complete cycle through the training data. Every 10 epochs, we will output status information. In total, we will
train for 100 epochs; the latency indicates how long an epoch takes. With this information, we know the
time remaining to train the model.
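As a minimal sketch of that calculation, assuming a modelState dictionary containing the epochIds and latenciesInSeconds fields described above (adjust the field access if your API version's response shape differs):

def estimate_remaining_seconds(model_state, total_epochs=100):
    # model_state is the modelState element from the Get Multivariate Model response.
    epoch_ids = model_state["epochIds"]            # e.g. [10, 20, 30] -- reported every 10 epochs
    latencies = model_state["latenciesInSeconds"]  # seconds taken by each reported epoch
    avg_epoch_seconds = sum(latencies) / len(latencies)
    return (total_epochs - epoch_ids[-1]) * avg_epoch_seconds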
Model performance
Multivariate Anomaly Detection is an unsupervised model, so the best way to evaluate it is to check the anomaly
results manually. In the Get Multivariate Model response, we provide some basic information for analyzing model
performance. In the modelState element returned by the Get Multivariate Model API, we can use trainLosses
and validationLosses to evaluate whether the model has been trained as expected. In most cases, the two
losses will decrease gradually. Another piece of information for analyzing model performance is in
variableStates . The variable state list is ranked by filledNARatio in descending order. The larger this ratio, the
worse the performance, so we usually need to reduce the NA ratio as much as possible. NA values could be caused by
missing values or unaligned variables from a timestamp perspective.
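A hedged sketch of these checks in Python, using the field names described above (trainLosses, validationLosses, variableStates, filledNARatio); exact response shapes may vary by API version:

def check_model_quality(model_state, variable_states):
    # Both losses should generally decrease as training progresses.
    train_losses = model_state.get("trainLosses", [])
    validation_losses = model_state.get("validationLosses", [])
    if train_losses and validation_losses:
        print("train loss:", train_losses[0], "->", train_losses[-1])
        print("validation loss:", validation_losses[0], "->", validation_losses[-1])
    # variableStates is ranked by filledNARatio in descending order; flag variables
    # above the 20% missing-value guidance discussed earlier.
    for state in variable_states:
        if state.get("filledNARatio", 0) > 0.2:
            print("high NA ratio:", state.get("variable"), state["filledNARatio"])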

Next steps
Quickstarts.
Learn about the underlying algorithms that power Anomaly Detector Multivariate
Best practices for using the Anomaly Detector API
3/5/2021 • 4 minutes to read • Edit Online

The Anomaly Detector API is a stateless anomaly detection service. The accuracy and performance of its results
can be impacted by:
How your time series data is prepared.
The Anomaly Detector API parameters that were used.
The number of data points in your API request.
Use this article to learn about best practices for using the API to get the best results for your data.

When to use batch (entire) or latest (last) point anomaly detection


The Anomaly Detector API's batch detection endpoint lets you detect anomalies through your entire times series
data. In this detection mode, a single statistical model is created and applied to each point in the data set. If your
time series has the below characteristics, we recommend using batch detection to preview your data in one API
call.
A seasonal time series, with occasional anomalies.
A flat trend time series, with occasional spikes/dips.
We don't recommend using batch anomaly detection for real-time data monitoring, or using it on time series
data that doesn't have the above characteristics.
Batch detection creates and applies only one model, the detection for each point is done in the context of
the whole series. If the time series data trends up and down without seasonality, some points of change
(dips and spikes in the data) may be missed by the model. Similarly, some points of change that are less
significant than ones later in the data set may not be counted as significant enough to be incorporated
into the model.
Batch detection is slower than detecting the anomaly status of the latest point when doing real-time data
monitoring, because of the number of points being analyzed.
For real-time data monitoring, we recommend detecting the anomaly status of your latest data point only. By
continuously applying latest point detection, streaming data monitoring can be done more efficiently and
accurately.
The example below describes the impact these detection modes can have on performance. The first picture
shows the result of continuously detecting the anomaly status latest point along 28 previously seen data points.
The red points are anomalies.

Below is the same data set using batch anomaly detection. The model built for the operation has ignored several
anomalies, marked by rectangles.
Data preparation
The Anomaly Detector API accepts time series data formatted into a JSON request object. A time series can be
any numerical data recorded over time in sequential order. You can send windows of your time series data to the
Anomaly Detector API endpoint to improve the API's performance. The minimum number of data points you can
send is 12, and the maximum is 8640 points. Granularity is defined as the rate that your data is sampled at.
Data points sent to the Anomaly Detector API must have a valid Coordinated Universal Time (UTC) timestamp,
and a numerical value.

{
    "granularity": "daily",
    "series": [
        {
            "timestamp": "2018-03-01T00:00:00Z",
            "value": 32858923
        },
        {
            "timestamp": "2018-03-02T00:00:00Z",
            "value": 29615278
        }
    ]
}

If your data is sampled at a non-standard time interval, you can specify it by adding the customInterval
attribute in your request. For example, if your series is sampled every 5 minutes, you can add the following to
your JSON request:

{
"granularity" : "minutely",
"customInterval" : 5
}
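Putting these pieces together, here is a minimal Python sketch of a batch detection request. The resource name and key are placeholders, and the v1.0 request path shown here may differ in later API versions.

import requests

# Placeholder endpoint and key; replace with your own resource's values.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
url = f"{endpoint}/anomalydetector/v1.0/timeseries/entire/detect"

body = {
    "granularity": "minutely",
    "customInterval": 5,  # data sampled every 5 minutes
    "series": [
        {"timestamp": "2018-03-01T00:00:00Z", "value": 32858923},
        {"timestamp": "2018-03-01T00:05:00Z", "value": 29615278},
        # ... a request needs at least 12 and at most 8640 points
    ],
}

response = requests.post(url, headers={"Ocp-Apim-Subscription-Key": "<your-api-key>"}, json=body)
print(response.json())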

Missing data points


Missing data points are common in evenly distributed time series data sets, especially ones with a fine
granularity (A small sampling interval. For example, data sampled every few minutes). Missing less than 10% of
the expected number of points in your data shouldn't have a negative impact on your detection results. Consider
filling gaps in your data based on its characteristics like substituting data points from an earlier period, linear
interpolation, or a moving average.
Aggregate distributed data
The Anomaly Detector API works best on an evenly distributed time series. If your data is randomly distributed,
you should aggregate it by a unit of time, such as per-minute, hourly, or daily.

Anomaly detection on data with seasonal patterns


If you know that your time series data has a seasonal pattern (one that occurs at regular intervals), you can
improve the accuracy and API response time.
Specifying a period when you construct your JSON request can reduce anomaly detection latency by up to
50%. The period is an integer that specifies roughly how many data points the time series takes to repeat a
pattern. For example, a time series with one data point per day would have a period of 7 , and a time series
with one point per hour (with the same weekly pattern) would have a period of 7*24 . If you're unsure of your
data's patterns, you don't have to specify this parameter.
For best results, provide four periods' worth of data points, plus an additional one. For example, hourly data
with a weekly pattern as described above should provide 673 data points in the request body ( 7 * 24 * 4 + 1 ).
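That arithmetic as a tiny sketch:

# Worked example of the rule above: four periods' worth of points, plus one.
points_per_day = 24            # hourly granularity
period = 7 * points_per_day    # weekly pattern -> 168
recommended_points = 4 * period + 1
print(recommended_points)      # 673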
Sampling data for real-time monitoring
If your streaming data is sampled at a short interval (for example seconds or minutes), sending the
recommended number of data points may exceed the Anomaly Detector API's maximum number allowed (8640
data points). If your data shows a stable seasonal pattern, consider sending a sample of your time series data at
a larger time interval, like hours. Sampling your data in this way can also noticeably improve the API response
time.

Next steps
What is the Anomaly Detector API?
Quickstart: Detect anomalies in your time series data using the Anomaly Detector
Predictive maintenance solution with Anomaly
Detector multivariate
4/12/2021 • 2 minutes to read • Edit Online

Many different industries need predictive maintenance solutions to reduce risks and gain actionable insights
through processing data from their equipment. Predictive maintenance evaluates the condition of equipment by
performing online monitoring. The goal is to perform maintenance before the equipment degrades or breaks
down.
Monitoring the health status of equipment can be challenging, as each component inside the equipment can
generate dozens of signals, for example vibration, orientation, and rotation. This can be even more complex
when those signals have an implicit relationship, and need to be monitored and analyzed together. Defining
different rules for those signals and correlating them with each other manually can be costly. Anomaly
Detector's multivariate feature allows:
Multiple correlated signals to be monitored together, and the inter-correlations between them are accounted
for in the model.
In each captured anomaly, the contribution rank of different signals can help with anomaly explanation, and
incident root cause analysis.
The multivariate anomaly detection model is built in an unsupervised manner. Models can be trained
specifically for different types of equipment.
Here, we provide a reference architecture for a predictive maintenance solution based on Anomaly Detector
multivariate.

Reference architecture

In the above architecture, streaming events coming from sensor data will be stored in Azure Data Lake and then
processed by a data transforming module to be converted into a time-series format. Meanwhile, the streaming
event will trigger real-time detection with the trained model. In general, there will be a module to manage the
multivariate model life cycle, like Bridge Service in this architecture.
Model training : Before using Anomaly Detector multivariate to detect anomalies for a component or piece of
equipment, we need to train a model on the specific signals (time series) generated by that entity. The Bridge
Service will fetch historical data and submit a training job to the Anomaly Detector, and then keep the Model ID
in the Model Meta storage.
Model validation : Training time of a certain model can vary based on the training data volume. The
Bridge Service could query model status and diagnostic info on a regular basis. Validating model quality could
be necessary before putting it online. If there are labels in the scenario, those labels can be used to verify the
model quality. Otherwise, the diagnostic info can be used to evaluate the model quality, and you can also
perform detection on historical data with the trained model and evaluate the result to backtest the validity of the
model.
Model inference : Online detection will be performed with the valid model, and the result ID can be stored in
the Inference table. Both the training process and the inference process are done in an asynchronous manner. In
general, a detection task can be completed within seconds. Signals used for detection should be the same ones
that have been used for training. For example, if we use vibration, orientation, and rotation for training, in
detection the three signals should be included as an input.
Incident alerting : The detection results can be queried with result IDs. Each result contains the severity of each
anomaly, and contribution rank. Contribution rank can be used to understand why this anomaly happened, and
which signal caused this incident. Different thresholds can be set on the severity to generate alerts and
notifications to be sent to field engineers to conduct maintenance work.

Next steps
Quickstarts.
Best Practices: This article is about recommended patterns to use with the multivariate APIs.
Troubleshooting the multivariate API
4/12/2021 • 4 minutes to read • Edit Online

This article provides guidance on how to troubleshoot and remediate common HTTP error messages when
using the multivariate API.
Multivariate error codes
| Method | HTTP error code | Error message | Action to take |
| --- | --- | --- | --- |
| Train a Multivariate Anomaly Detection Model | 400 | The 'source' field is required in the request. | The "source" keyword has not been specified correctly. The format should be {"source": "<SAS URL>"}. |
| Train a Multivariate Anomaly Detection Model | 400 | The source field must be a valid sas blob url | The source field must be a valid blob container SAS URL. |
| Train a Multivariate Anomaly Detection Model | 400 | The 'startTime' field is required in the request. | Add startTime to the request. |
| Train a Multivariate Anomaly Detection Model | 400 | The 'endTime' field is required in the request. | Add endTime to the request. |
| Train a Multivariate Anomaly Detection Model | 400 | Invalid Timestamp format. | The timestamps in the CSV files zipped at the source URL, or the "startTime" or "endTime" parameter, are in an invalid format. |
| Train a Multivariate Anomaly Detection Model | 400 | The displayName length exceeds maximum allowed length 24. | displayName is an optional parameter used to distinguish different models. A valid displayName must be shorter than 24 characters. |
| Train a Multivariate Anomaly Detection Model | 400 | The 'slidingWindow' field must be an integer between 28 and 2880. | The sliding window must be within the valid range. |
| Train a Multivariate Anomaly Detection Model | 401 | Unable to download blobs on the Azure Blob storage account. | The URL does not have the right permissions; the list flag is not set. Re-create the SAS URL and make sure the read and list flags are checked (for example, by using Storage Explorer). |
| Train a Multivariate Anomaly Detection Model | 413 | Unable to process the dataset. Number of variables exceed the limit (300). | The data in the blob container exceeds the current limit of 300 variables. Reduce the number of variables. |
| Train a Multivariate Anomaly Detection Model | 413 | Valid Timestamps in the dataset exceeds the limit (1 million points), please change startTime or endTime parameters. | At most 1 million points can be used for training. Reduce the number of variables, or change the startTime or endTime parameters. |
| Train a Multivariate Anomaly Detection Model | 413 | Unable to process dataset. Size of dataset exceeds size limit (2GB). | The data in the blob container exceeds the size limit. Point to a blob with less data. |
| Detect Multivariate Anomaly | 404 | The model does not exist. | The model ID is invalid. Train a model before using it for detection. |
| Detect Multivariate Anomaly | 400 | The model is not ready yet. | The model is not ready yet. Call the Get Multivariate Model API to check the model status. |
| Detect Multivariate Anomaly | 400 | The 'source' field is required in the request. | The "source" keyword has not been specified correctly. The format should be {"source": "<SAS URL>"}. |
| Detect Multivariate Anomaly | 400 | The source field must be a valid sas blob url | The source field must be a valid blob container SAS URL. |
| Detect Multivariate Anomaly | 400 | The 'startTime' field is required in the request. | Add startTime to the request. |
| Detect Multivariate Anomaly | 400 | The 'endTime' field is required in the request. | Add endTime to the request. |
| Detect Multivariate Anomaly | 400 | Invalid Timestamp format. | The timestamps in the CSV files zipped at the source URL, or the "startTime" or "endTime" parameter, are in an invalid format. |
| Detect Multivariate Anomaly | 400 | The corresponding file of the variable does not exist. | A variable that was used in training cannot be found in the detection data. Add the variable, then resubmit the detection request. |
| Detect Multivariate Anomaly | 413 | Unable to process the dataset. Number of variables exceed the limit (300). | The data in the blob container exceeds the current limit of 300 variables. Reduce the number of variables. |
| Detect Multivariate Anomaly | 413 | The limit timestamps of one detection request is 2880, please change startTime or endTime parameters. | At most 2,880 timestamps can be detected in one request. Change startTime or endTime, then resubmit the detection request. |
| Detect Multivariate Anomaly | 413 | Unable to process dataset. Size of dataset exceeds size limit (2GB). | The data in the blob container exceeds the size limit. Point to a blob with less data. |
| Get Multivariate Model | 404 | Model with 'id=<input model ID>' not found. | The ID is not a valid model ID. Use GET models to find all valid model IDs. |
| Get Multivariate Anomaly Detection Result | 404 | Result with 'id=<input result ID>' not found. | The ID is not a valid result ID. Resubmit your detection request. |
| Delete Multivariate Model | 404 | Location for model with 'id=<input model ID>' not found. | The ID is not a valid model ID. Use GET models to find all valid model IDs. |
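Many of the 400-level errors above come from a malformed request body. As a reference point, here is a minimal, hedged sketch of a well-formed training request in Python; the endpoint path and field set are assumptions based on the preview REST reference, so check them against your API version:

    import requests

    endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",  # placeholder
        "Content-Type": "application/json",
    }
    body = {
        # A container-level SAS URL with both "read" and "list" permissions.
        "source": "<SAS URL>",
        "startTime": "2021-01-01T00:00:00Z",  # ISO 8601, matching your CSV timestamps
        "endTime": "2021-01-02T12:00:00Z",
        "slidingWindow": 200,                 # must be an integer between 28 and 2880
        "displayName": "engine-sensors",      # optional; shorter than 24 characters
    }
    resp = requests.post(
        f"{endpoint}/anomalydetector/v1.1-preview/multivariate/models",
        headers=headers, json=body,
    )
    print(resp.status_code, resp.text)  # a 4xx body here maps to a row in the table above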
Tutorial: Visualize anomalies using batch detection
and Power BI
3/26/2021 • 5 minutes to read • Edit Online
Use this tutorial to find anomalies within a time series data set as a batch. Using Power BI Desktop, you will take an Excel file, prepare the data for the Anomaly Detector API, and visualize statistical anomalies throughout it.
In this tutorial, you'll learn how to:
Use Power BI Desktop to import and transform a time series data set
Integrate Power BI Desktop with the Anomaly Detector API for batch anomaly detection
Visualize anomalies found within your data, including expected and observed values, and anomaly detection boundaries.
Prerequisites
An Azure subscription
Microsoft Power BI Desktop, available for free.
An Excel file (.xlsx) containing time series data points. The example data for this tutorial can be found on GitHub.
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your key and endpoint.
You will need the key and endpoint from the resource you create to connect your application to the Anomaly Detector API. You'll do this later in the tutorial.
NOTE
For best results when using the Anomaly Detector API, your JSON-formatted time series data should include:
data points separated by the same interval, with no more than 10% of the expected number of points missing.
at least 12 data points if your data doesn't have a clear seasonal pattern.
at least 4 pattern occurrences if your data does have a clear seasonal pattern.
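Before loading the file into Power BI, you can sanity-check these requirements with a short script. The following Python sketch (using pandas, with a hypothetical file name) estimates the dominant interval and the fraction of missing points:

    import pandas as pd

    # "request-data.xlsx" is a hypothetical file name; use your own export.
    df = pd.read_excel("request-data.xlsx")

    ts = pd.to_datetime(df["Timestamp"]).sort_values()
    interval = ts.diff().mode()[0]                      # dominant spacing between points
    expected = int((ts.iloc[-1] - ts.iloc[0]) / interval) + 1
    missing_ratio = 1 - len(ts) / expected

    print(f"interval={interval}, points={len(ts)}, missing={missing_ratio:.1%}")
    assert missing_ratio <= 0.10, "more than 10% of the expected points are missing"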
Load and format the time series data
To get started, open Power BI Desktop and load the time series data you downloaded from the prerequisites. This Excel file contains a series of Coordinated Universal Time (UTC) timestamp and value pairs.
NOTE
Power BI can use data from a wide variety of sources, such as .csv files, SQL databases, Azure blob storage, and more.
In the main Power BI Desktop window, click the Home ribbon. In the External data group of the ribbon, open
the Get Data drop-down menu and click Excel .
After the dialog appears, navigate to the folder where you downloaded the example .xlsx file and select it. After the Navigator dialog appears, click Sheet1 , and then Edit .

Power BI will convert the timestamps in the first column to a Date/Time data type. These timestamps must be converted to text in order to be sent to the Anomaly Detector API. If the Power Query editor doesn't open automatically, click Edit Queries on the Home tab.
Click the Transform ribbon in the Power Query Editor. In the Any Column group, open the Data Type: drop-
down menu, and select Text .
When you get a notice about changing the column type, click Replace Current . Afterwards, click Close &
Apply or Apply in the Home ribbon.
Create a function to send the data and format the response
To format and send the data file to the Anomaly Detector API, you can invoke a query on the table created
above. In the Power Query Editor, from the Home ribbon, open the New Source drop-down menu and click
Blank Query .
Make sure your new query is selected, then click Advanced Editor .
Within the Advanced Editor, use the following Power Query M snippet to extract the columns from the table and send them to the API. The query then builds a table from the JSON response and returns it. Replace the apikey variable with your valid Anomaly Detector API key, and endpoint with your endpoint. After you've entered the query into the Advanced Editor, click Done .
(table as table) => let

    // Send the prepared time series to the Anomaly Detector API.
    apikey = "[Placeholder: Your Anomaly Detector resource access key]",
    endpoint = "[Placeholder: Your Anomaly Detector resource endpoint]/anomalydetector/v1.0/timeseries/entire/detect",
    inputTable = Table.TransformColumnTypes(table,{{"Timestamp", type text},{"Value", type number}}),
    jsontext = Text.FromBinary(Json.FromValue(inputTable)),
    jsonbody = "{ ""Granularity"": ""daily"", ""Sensitivity"": 95, ""Series"": "& jsontext &" }",
    bytesbody = Text.ToBinary(jsonbody),
    headers = [#"Content-Type" = "application/json", #"Ocp-Apim-Subscription-Key" = apikey],
    bytesresp = Web.Contents(endpoint, [Headers=headers, Content=bytesbody, ManualStatusHandling={400}]),
    jsonresp = Json.Document(bytesresp),

    // Build a table from the JSON response, one column per response field.
    respTable = Table.FromColumns({
        Table.Column(inputTable, "Timestamp"),
        Table.Column(inputTable, "Value"),
        Record.Field(jsonresp, "IsAnomaly") as list,
        Record.Field(jsonresp, "ExpectedValues") as list,
        Record.Field(jsonresp, "UpperMargins") as list,
        Record.Field(jsonresp, "LowerMargins") as list,
        Record.Field(jsonresp, "IsPositiveAnomaly") as list,
        Record.Field(jsonresp, "IsNegativeAnomaly") as list
    }, {"Timestamp", "Value", "IsAnomaly", "ExpectedValues", "UpperMargin", "LowerMargin", "IsPositiveAnomaly", "IsNegativeAnomaly"}),

    // Convert the margins into upper and lower anomaly detection boundaries.
    respTable1 = Table.AddColumn(respTable, "UpperMargins", (row) => row[ExpectedValues] + row[UpperMargin]),
    respTable2 = Table.AddColumn(respTable1, "LowerMargins", (row) => row[ExpectedValues] - row[LowerMargin]),
    respTable3 = Table.RemoveColumns(respTable2, "UpperMargin"),
    respTable4 = Table.RemoveColumns(respTable3, "LowerMargin"),

    results = Table.TransformColumnTypes(respTable4,
        {{"Timestamp", type datetime}, {"Value", type number}, {"IsAnomaly", type logical},
         {"IsPositiveAnomaly", type logical}, {"IsNegativeAnomaly", type logical},
         {"ExpectedValues", type number}, {"UpperMargins", type number}, {"LowerMargins", type number}})

in results
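If the query fails, it can help to test the same call outside Power BI first. This Python sketch posts a small series to the same endpoint with the same body keys as the M snippet above; the sample timestamps and values are placeholders:

    import requests

    endpoint = "[Placeholder: Your Anomaly Detector resource endpoint]"
    url = f"{endpoint}/anomalydetector/v1.0/timeseries/entire/detect"
    headers = {
        "Ocp-Apim-Subscription-Key": "[Placeholder: Your Anomaly Detector resource access key]",
        "Content-Type": "application/json",
    }
    body = {
        "Granularity": "daily",
        "Sensitivity": 95,
        "Series": [
            {"Timestamp": "2021-03-01T00:00:00Z", "Value": 32858923},
            {"Timestamp": "2021-03-02T00:00:00Z", "Value": 29615278},
            # ... the API expects at least 12 points for non-seasonal data
        ],
    }

    resp = requests.post(url, headers=headers, json=body)
    print(resp.status_code)
    print(resp.json())  # inspect the returned field names consumed by the query above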

Invoke the query on your data sheet by selecting Sheet1 under Enter Parameter , and then click Invoke .
Data source privacy and authentication
NOTE
Be aware of your organization's policies for data privacy and access. See Power BI Desktop privacy levels for more
information.
You may get a warning message when you attempt to run the query, since it uses an external data source.

To fix this, click File , then Options and settings , and then Options . Under Current File , select Privacy , and then select Ignore the Privacy Levels and potentially improve performance .
Additionally, you may get a message asking you to specify how you want to connect to the API.

To fix this, click Edit Credentials in the message. After the dialog box appears, select Anonymous to connect to the API anonymously. Then click Connect .
Afterwards, click Close & Apply in the Home ribbon to apply the changes.
Visualize the Anomaly Detector API response
In the main Power BI screen, begin using the queries created above to visualize the data. First select Line Chart in Visualizations . Then add the timestamp from the invoked function to the line chart's Axis . Right-click on it, and select Timestamp .
Add the following fields from the Invoked Function to the chart's Values field. Use the below screenshot to
help build your chart.
Value
UpperMargins
LowerMargins
ExpectedValues
After adding the fields, click on the chart and resize it to show all of the data points. Your chart will look similar
to the below screenshot:
Display anomaly data points
On the right side of the Power BI window, below the FIELDS pane, right-click on Value under the Invoked Function query , and click New quick measure .
On the screen that appears, select Filtered value as the calculation. Set Base value to Sum of Value . Then
drag IsAnomaly from the Invoked Function fields to Filter . Select True from the Filter drop-down menu.
After clicking OK , you will have a Value for True field at the bottom of the list of your fields. Right-click it and rename it to Anomaly . Add it to the chart's Values . Then select the Format tool, and set the X-axis type to Categorical .
Categorical .
Apply colors to your chart by clicking on the Format tool and Data colors . Your chart should look something
like the following:
Next steps
Streaming anomaly detection with Azure Databricks
Azure Cognitive Services support and help options
3/20/2021 • 2 minutes to read • Edit Online
Are you just starting to explore the functionality of Azure Cognitive Services? Perhaps you are implementing a
new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here
are options for where you can get support, stay up-to-date, give feedback, and report bugs for Cognitive
Services.
Create an Azure support request
Explore the range of Azure support options and choose the plan that best fits, whether you're a developer just
starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure
customers can create and manage support requests in the Azure portal.
Azure portal
Azure portal for the United States government
Post a question on Microsoft Q&A
For quick and reliable answers to your technical product questions from Microsoft engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on Microsoft Q&A, Azure's preferred destination for community support.
If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of
the following tags when you ask your question:
Cognitive Services
Vision
Computer Vision
Custom Vision
Face
Form Recognizer
Video Indexer
Language
Immersive Reader
Language Understanding (LUIS)
QnA Maker
Text Analytics
Translator
Speech
Speech service
Decision
Anomaly Detector
Content Moderator
Metrics Advisor (preview)
Personalizer
Post a question to Stack Overflow
For answers to your developer questions from the largest community developer ecosystem, ask your question on Stack Overflow.
If you do submit a new question to Stack Overflow, please use one or more of the following tags when you
create the question:
Cognitive Services
Vision
Computer Vision
Custom Vision
Face
Form Recognizer
Video Indexer
Language
Immersive Reader
Language Understanding (LUIS)
QnA Maker
Text Analytics
Translator
Speech
Speech service
Decision
Anomaly Detector
Content Moderator
Metrics Advisor (preview)
Personalizer
Submit feedback on User Voice
To request new features, post them on UserVoice. Share your ideas for making Cognitive Services and its APIs
work better for the applications you develop.
Cognitive Services
Vision
Computer Vision
Custom Vision
Face
Form Recognizer
Video Indexer
Language
Immersive Reader
Language Understanding (LUIS)
QnA Maker
Text Analytics
Translator
Speech
Speech service
Decision
Anomaly Detector
Content Moderator
Metrics Advisor (preview)
Personalizer
Stay informed
Staying informed about features in a new release or news on the Azure blog can help you determine whether you're seeing a programming error, a service bug, or a feature that's not yet available in Cognitive Services.
Learn more about product updates, roadmap, and announcements in Azure Updates.
See what Cognitive Services articles have recently been added or updated in What's new in docs?
News about Cognitive Services is shared in the Azure blog.
Join the conversation on Reddit about Cognitive Services.
Next steps
What are Azure Cognitive Services?
Featured User-generated content for the Anomaly
Detector API
3/5/2021 • 2 minutes to read • Edit Online
Use this article to discover how other customers are thinking about and using the Anomaly Detector API. The
following resources were created by the community of Anomaly Detector users. They include open-source
projects, and other contributions created by both Microsoft and third-party users. Some of the following links
are hosted on websites that are external to Microsoft and Microsoft is not responsible for the content there. Use
discretion when you refer to these resources.
Technical blogs
Trying the Cognitive Service: Anomaly Detector API (in Japanese)
Open-source projects
Jupyter notebook demonstrating Anomaly Detection and streaming to Power BI
If you'd like to nominate a resource, fill out a short form. Contact AnomalyDetector@microsoft.com or raise an issue on GitHub if you'd like us to remove the content.
You might also like