The first release of the Azure Cognitive Services Anomaly Detector allowed you to build metrics monitoring
solutions using the easy-to-use univariate time series Anomaly Detector APIs. By allowing analysis of time series
individually, Anomaly Detector univariate provides simplicity and scalability.
The new multivariate anomaly detection APIs further enable developers to easily integrate advanced AI for
detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled
data. Dependencies and inter-correlations between up to 300 different signals are now automatically accounted
for as key factors. This new capability helps you to proactively protect your complex systems, such as software
applications, servers, factory machines, spacecraft, or even your business, from failures.
Imagine 20 sensors from an auto engine generating 20 different signals like vibration, temperature, fuel
pressure, etc. The readings of those signals individually may not tell you much about system level issues, but
together they can represent the health of the engine. When the interaction of those signals deviates outside the
usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The
underlying AI models are trained and customized using your data such that it understands the unique needs of
your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time
series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for
complex enterprise software, or business intelligence tools.
Use the multivariate anomaly detection APIs below if your goal is to detect system-level anomalies from a group
of time series data, particularly when no individual time series tells you much on its own and you have to look at
all signals (a group of time series) holistically to determine a system-level issue. For example: you have an
expensive physical asset like an aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or
hundreds of different types of sensors. You would have to look at all those time series signals from those sensors
to decide whether there is a system-level issue.
POST /anomalydetector/v1.1-preview/multivariate/models
GET /anomalydetector/v1.1-preview/multivariate/models[?$skip][&$top]
GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}
POST /anomalydetector/v1.1-preview/multivariate/models/{modelId}/detect
GET /anomalydetector/v1.1-preview/multivariate/results/{resultId}
DELETE /anomalydetector/v1.1-preview/multivariate/models/{modelId}
GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}/export
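To make the request shapes concrete, here is a sketch of the JSON body the training call (POST
.../multivariate/models) carries. The field names (source, startTime, endTime, slidingWindow) mirror the model
request shown in the quickstarts later in this article; treat the exact schema and the placeholder URL as
illustrative, and verify against the REST API reference:

```python
import json

# Illustrative training request body for POST /multivariate/models.
# "source" is a blob SAS URL pointing at the zipped CSV training data;
# "slidingWindow" controls how many recent points the model considers.
train_body = {
    "source": "https://<storage-account>.blob.core.windows.net/data/train.zip?<sas-token>",
    "startTime": "2021-01-01T00:00:00Z",
    "endTime": "2021-01-02T12:00:00Z",
    "slidingWindow": 200,
}

payload = json.dumps(train_body)
print(payload)
```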
Region support
The public preview of Anomaly Detector multivariate is currently available in three regions: West US 2, East US 2,
and West Europe.
Algorithms
Multivariate time series Anomaly Detection via Graph Attention Network
Next steps
Quickstarts.
Best Practices: This article is about recommended patterns to use with the multivariate APIs.
What is the Anomaly Detector API?
4/18/2021 • 4 minutes to read
IMPORTANT
Transport Layer Security (TLS) 1.2 is now enforced for all HTTP requests to this service. For more information, see Azure
Cognitive Services security.
The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data without
having to know machine learning. The Anomaly Detector API's algorithms adapt by automatically identifying
and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your
time series data, the API determines boundaries for anomaly detection, expected values, and which data points
are anomalies.
Using the Anomaly Detector doesn't require any prior experience in machine learning, and the RESTful API
enables you to easily integrate the service into your applications and processes.
This documentation contains the following types of articles:
The quickstarts are step-by-step instructions that let you make calls to the service and get results in a short
period of time.
The how-to guides contain instructions for using the service in more specific or customized ways.
The conceptual articles provide in-depth explanations of the service's functionality and features.
The tutorials are longer guides that show you how to use this service as a component in broader business
solutions.
Features
With the Anomaly Detector, you can automatically detect anomalies throughout your time series data, or as they
occur in real-time.
Anomaly detection in real-time: Detect anomalies in your streaming data by using previously seen data points
to determine if your latest one is an anomaly. This operation generates a model using the data points you send,
and determines if the target point is an anomaly. By calling the API with each new data point you generate, you
can monitor your data as it's created.
Detect anomalies throughout your data set as a batch: Use your time series to detect any anomalies that might
exist throughout your data. This operation generates a model using your entire time series data, with each point
analyzed with the same model.
Detect change points throughout your data set as a batch: Use your time series to detect any trend change
points that exist in your data. This operation generates a model using your entire time series data, with each
point analyzed with the same model.
Get additional information about your data: Get useful details about your data and any observed anomalies,
including expected values, anomaly boundaries, and positions.
Adjust anomaly detection boundaries: The Anomaly Detector API automatically creates boundaries for anomaly
detection. Adjust these boundaries to increase or decrease the API's sensitivity to data anomalies, and better fit
your data.
Demo
Check out this interactive demo to understand how Anomaly Detector works. To run the demo, you need to
create an Anomaly Detector resource and get the API key and endpoint.
Notebook
To learn how to call the Anomaly Detector API, try this Notebook. This Jupyter Notebook shows you how to send
an API request and visualize the result.
To run the Notebook, complete the following steps:
1. Get a valid Anomaly Detector API subscription key and an API endpoint. The section below has instructions
for signing up.
2. Sign in, and select Clone, in the upper right corner.
3. Uncheck the "public" option in the dialog box before completing the clone operation, otherwise your
notebook, including any subscription keys, will be public.
4. Select Run on free compute
5. Select one of the notebooks.
6. Add your valid Anomaly Detector API subscription key to the subscription_key variable.
7. Change the endpoint variable to your endpoint. For example:
https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect
8. On the top menu bar, select Cell , then Run All .
Workflow
The Anomaly Detector API is a RESTful web service, making it easy to call from any programming language that
can make HTTP requests and parse JSON.
NOTE
For best results when using the Anomaly Detector API, your JSON-formatted time series data should include:
data points separated by the same interval, with no more than 10% of the expected number of points missing.
at least 12 data points if your data doesn't have a clear seasonal pattern.
at least 4 pattern occurrences if your data does have a clear seasonal pattern.
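As a sketch, these requirements can be checked locally before calling the API. The thresholds below are the ones
listed in the note above; the helper itself is ours, not part of the service:

```python
from datetime import datetime

def check_series(points, has_seasonality=False, period_length=None):
    """Rough pre-flight check of the data requirements above.

    points: list of (ISO 8601 timestamp string, value) tuples, sorted by time.
    """
    times = [datetime.fromisoformat(t.replace("Z", "+00:00")) for t, _ in points]
    gaps = [b - a for a, b in zip(times, times[1:])]
    interval = min(gaps)
    # Points should be evenly spaced; tolerate up to 10% missing points.
    if any(g != interval for g in gaps):
        expected = (times[-1] - times[0]) / interval + 1
        if len(points) < 0.9 * expected:
            return False
    if not has_seasonality:
        # No clear seasonal pattern: require at least 12 data points.
        return len(points) >= 12
    # Clear seasonal pattern: require at least 4 pattern occurrences.
    return len(points) >= 4 * period_length
```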
You must have a Cognitive Services API account with access to the Anomaly Detector API. You can get your
subscription key from the Azure portal after creating your account.
After signing up:
1. Take your time series data and convert it into a valid JSON format. Use best practices when preparing your
data to get the best results.
2. Send a request to the Anomaly Detector API with your data.
3. Process the API response by parsing the returned JSON message.
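For step 1, the request body is a JSON object carrying a granularity and a series array. A minimal sketch of the
conversion (field names follow the public request format, but verify the granularity values against the REST
reference):

```python
import json

def to_request_body(points, granularity="hourly"):
    # points: list of (timestamp, value) pairs -> Anomaly Detector request body.
    return json.dumps({
        "granularity": granularity,
        "series": [{"timestamp": t, "value": v} for t, v in points],
    })

body = to_request_body([("2019-04-01T00:00:00Z", 5), ("2019-04-01T01:00:00Z", 3.6)])
```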
Algorithms
See the following technical blogs for information about the algorithms used:
Introducing Azure Anomaly Detector API
Overview of SR-CNN algorithm in Azure Anomaly Detector
You can read the paper Time-Series Anomaly Detection Service at Microsoft (accepted by KDD 2019) to learn
more about the SR-CNN algorithms developed by Microsoft.
Next steps
Quickstart: Detect anomalies in your time series data using the Anomaly Detector
The Anomaly Detector API online demo
The Anomaly Detector REST API reference
Quickstart: Use the Anomaly Detector multivariate
client library
4/22/2021 • 23 minutes to read
Get started with the Anomaly Detector multivariate client library for C#. Follow these steps to install the package
and start using the algorithms provided by the service. The new multivariate anomaly detection APIs enable
developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the need
for machine learning knowledge or labeled data. Dependencies and inter-correlations between different signals
are automatically accounted for as key factors. This helps you to proactively protect your complex systems from
failures.
Use the Anomaly Detector multivariate client library for C# to:
Detect system level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (NuGet)
Prerequisites
Azure subscription - Create one for free
The current version of .NET Core
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and select the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. Paste your key and endpoint into the code below later in the quickstart. You can
use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
Setting up
Create a new .NET Core application
In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console
app with the name anomaly-detector-quickstart-multivariate . This command creates a simple "Hello World"
project with a single C# source file: Program.cs.
dotnet new console -n anomaly-detector-quickstart-multivariate
Change your directory to the newly created app folder. You can build the application with:
dotnet build
From the project directory, open the program.cs file and add the following using directives:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Azure;
using Azure.AI.AnomalyDetector;
using Azure.AI.AnomalyDetector.Models;
In the application's main() method, create variables for your resource's Azure endpoint, your API key, and a
custom datasource.
To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of
multiple time series that meet the following requirements:
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:
TIMESTAMP              VALUE
2019-04-01T00:00:00Z   5
2019-04-01T00:01:00Z   3.6
2019-04-01T00:02:00Z   4
...                    ...
Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.
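The packaging steps above can be sketched with the standard library. The file names and variable names here are
the examples from the text; generating the SAS URL still happens in Azure Blob storage afterwards:

```python
import csv
import zipfile

def pack_training_data(variables, zip_path):
    """variables: dict mapping variable name -> list of (timestamp, value) rows.

    Writes one <name>.csv per variable, then zips them flat (no subfolders).
    """
    with zipfile.ZipFile(zip_path, "w") as zf:
        for name, rows in variables.items():
            csv_name = "{}.csv".format(name)
            with open(csv_name, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["timestamp", "value"])  # required header row
                writer.writerows(rows)
            zf.write(csv_name)  # store at the zip root, no subfolders

pack_training_data(
    {"temperature": [("2019-04-01T00:00:00Z", 5)],
     "humidity": [("2019-04-01T00:00:00Z", 0.4)]},
    "training_data.zip",
)
```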
Code examples
These code snippets show you how to do the following with the Anomaly Detector multivariate client library for
.NET:
Authenticate the client
Train the model
Detect anomalies
Export model
Delete model
Train the model
This snippet is the tail of a trainAsync task (called later from the main method), which submits a training request
and then polls the model status until training completes or the tryout budget runs out:
while (model_status != ModelStatus.Ready && tryout_count < max_tryout)
{
    await Task.Delay(2000).ConfigureAwait(false);
    get_response = await client.GetMultivariateModelAsync(trained_model_id).ConfigureAwait(false);
    ModelInfo model_info = get_response.Value.ModelInfo;
    if (model_info != null)
    {
        model_status = model_info.Status;
    }
    tryout_count += 1;
}
get_response = await client.GetMultivariateModelAsync(trained_model_id).ConfigureAwait(false);
if (model_status != ModelStatus.Ready)
{
    Console.WriteLine(String.Format("Request timeout after {0} tryouts", max_tryout));
}
Detect anomalies
To detect anomalies using your newly trained model, create a private async Task named detectAsync . You will
create a new DetectionRequest and pass that as a parameter to DetectAnomalyAsync .
private async Task<DetectionResult> detectAsync(AnomalyDetectorClient client, string datasource, Guid
model_id, DateTimeOffset start_time, DateTimeOffset end_time, int max_tryout = 500)
{
    try
    {
        Console.WriteLine("Start detect...");
        Response<Model> get_response = await
client.GetMultivariateModelAsync(model_id).ConfigureAwait(false);
        // Submit the detection request; the result ID comes back in the
        // Location response header.
        DetectionRequest detect_request = new DetectionRequest(datasource, start_time, end_time);
        var detect_response = await client.DetectAnomalyAsync(model_id, detect_request).ConfigureAwait(false);
        Guid result_id = Guid.Parse(detect_response.Headers.Location.Split('/').Last());
        // Poll until the result is ready, or the tryout budget is exhausted.
        Response<DetectionResult> result = await client.GetDetectionResultAsync(result_id).ConfigureAwait(false);
        int tryout_count = 0;
        while (result.Value.Summary.Status != DetectionStatus.Ready && tryout_count < max_tryout)
        {
            await Task.Delay(1000).ConfigureAwait(false);
            result = await client.GetDetectionResultAsync(result_id).ConfigureAwait(false);
            tryout_count += 1;
        }
        if (result.Value.Summary.Status != DetectionStatus.Ready)
        {
            Console.WriteLine(String.Format("Request timeout after {0} tryouts", max_tryout));
            return null;
        }
        return result.Value;
    }
    catch (Exception e)
    {
        Console.WriteLine(String.Format("Detection error. {0}", e.Message));
        throw new Exception(e.Message);
    }
}
Export model
To export the model you trained previously, create a private async Task named exportAysnc . You will use
ExportModelAsync and pass the model ID of the model you wish to export.
private async Task exportAsync(AnomalyDetectorClient client, Guid model_id, string model_path = "model.zip")
{
try
{
Stream model = await client.ExportModelAsync(model_id).ConfigureAwait(false);
if (model != null)
{
var fileStream = File.Create(model_path);
model.Seek(0, SeekOrigin.Begin);
model.CopyTo(fileStream);
fileStream.Close();
}
}
catch (Exception e)
{
Console.WriteLine(String.Format("Export error. {0}", e.Message));
throw new Exception(e.Message);
}
}
Delete model
To delete a model that you have created previously, use DeleteMultivariateModelAsync and pass the model ID of
the model you wish to delete. To retrieve a model ID, you can use getModelNumberAsync.
Main method
Now that you have all the component parts, you need to add additional code to your main method to call your
newly created tasks.
static async Task Main(string[] args)
{
//read endpoint and apiKey
string endpoint = "YOUR_ENDPOINT";
string apiKey = "YOUR_API_KEY";
string datasource = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS";
Console.WriteLine(endpoint);
var endpointUri = new Uri(endpoint);
var credential = new AzureKeyCredential(apiKey);
//create client
AnomalyDetectorClient client = new AnomalyDetectorClient(endpointUri, credential);
// train
TimeSpan offset = new TimeSpan(0);
DateTimeOffset start_time = new DateTimeOffset(2021, 1, 1, 0, 0, 0, offset);
DateTimeOffset end_time = new DateTimeOffset(2021, 1, 2, 12, 0, 0, offset);
Guid? model_id_raw = null;
try
{
model_id_raw = await trainAsync(client, datasource, start_time, end_time).ConfigureAwait(false);
Console.WriteLine(model_id_raw);
Guid model_id = model_id_raw.GetValueOrDefault();
// detect
start_time = end_time;
end_time = new DateTimeOffset(2021, 1, 3, 0, 0, 0, offset);
DetectionResult result = await detectAsync(client, datasource, model_id, start_time,
end_time).ConfigureAwait(false);
if (result != null)
{
Console.WriteLine(String.Format("Result ID: {0}", result.ResultId));
Console.WriteLine(String.Format("Result summary: {0}", result.Summary));
Console.WriteLine(String.Format("Result length: {0}", result.Results.Count));
}
// export model
await exportAsync(client, model_id).ConfigureAwait(false);
// delete
await deleteAsync(client, model_id).ConfigureAwait(false);
}
catch (Exception e)
{
String msg = String.Format("Multivariate error. {0}", e.Message);
if (model_id_raw != null)
{
await deleteAsync(client, model_id_raw.GetValueOrDefault()).ConfigureAwait(false);
}
Console.WriteLine(msg);
throw new Exception(msg);
}
}
Run the application with the dotnet run command from your application directory:
dotnet run
Next steps
Anomaly Detector multivariate best practices
Get started with the Anomaly Detector multivariate client library for JavaScript. Follow these steps to install the
package and start using the algorithms provided by the service. The new multivariate anomaly detection APIs
enable developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the
need for machine learning knowledge or labeled data. Dependencies and inter-correlations between different
signals are automatically accounted for as key factors. This helps you to proactively protect your complex systems
from failures.
Use the Anomaly Detector multivariate client library for JavaScript to:
Detect system level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (npm) | Sample code
Prerequisites
Azure subscription - Create one for free
The current version of Node.js
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
Setting up
Create a new Node.js application
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
Run the npm init command to create a node application with a package.json file.
npm init
Create a file named index.js and import the following libraries:
'use strict'
const fs = require('fs');
const parse = require("csv-parse/lib/sync");
const { AnomalyDetectorClient } = require('@azure/ai-anomaly-detector');
const { AzureKeyCredential } = require('@azure/core-auth');
Create variables for your resource's Azure endpoint and key. Create another variable for the example data file.
const apiKey = "YOUR_API_KEY";
const endpoint = "YOUR_ENDPOINT";
const data_source = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS";
To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of
multiple time series that meet the following requirements:
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:
TIMESTAMP              VALUE
2019-04-01T00:00:00Z   5
2019-04-01T00:01:00Z   3.6
2019-04-01T00:02:00Z   4
...                    ...
Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.
Install the client library
Install the @azure/ai-anomaly-detector and @azure/core-auth npm packages. The csv-parse library is also used in
this quickstart:
npm install @azure/ai-anomaly-detector @azure/core-auth csv-parse
Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Node.js:
Authenticate the client
Train a model
Detect anomalies
Export model
Delete model
const Modelrequest = {
source: data_source,
startTime: new Date(2021,0,1,0,0,0),
endTime: new Date(2021,0,2,12,0,0),
slidingWindow:200
};
To check whether training of your model is complete, you can track the model's status:
console.log("TRAINING FINISHED.")
Detect anomalies
Use the detectAnomaly and getDetectionResult functions to determine if there are any anomalies within your
datasource.
console.log("Start detecting...")
const detect_request = {
source: data_source,
startTime: new Date(2021,0,2,12,0,0),
endTime: new Date(2021,0,3,0,0,0)
};
const result_header = await client.detectAnomaly(model_id, detect_request)
const result_id = result_header.location?.split("/").pop() ?? ""
let result = await client.getDetectionResult(result_id)
let result_status = result.summary.status
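The status check above is usually wrapped in a polling loop. A small illustrative helper (not part of the SDK;
getResult stands in for () => client.getDetectionResult(result_id)):

```javascript
// Poll until the detection result reaches a terminal status, or give up.
async function waitForResult(getResult, maxTryout = 500, delayMs = 1000) {
  for (let i = 0; i < maxTryout; i++) {
    const result = await getResult();
    const status = result.summary.status;
    if (status === "READY" || status === "FAILED") {
      return result;
    }
    // Wait before asking again so we don't hammer the service.
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Request timeout after ${maxTryout} tryouts`);
}
```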
Export model
To export your trained model use the exportModel function.
Delete model
To delete an existing model that is available to the current resource use the deleteMultivariateModel function.
client.deleteMultivariateModel(model_id)
console.log("New model has been deleted.")
node index.js
Next steps
Anomaly Detector multivariate best practices
Get started with the Anomaly Detector multivariate client library for Python. Follow these steps to install the
package and start using the algorithms provided by the service. The new multivariate anomaly detection APIs
enable developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the
need for machine learning knowledge or labeled data. Dependencies and inter-correlations between different
signals are automatically accounted for as key factors. This helps you to proactively protect your complex
systems from failures.
Use the Anomaly Detector multivariate client library for Python to:
Detect system level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (PyPi) | Sample code
Prerequisites
Python 3.x
The Pandas data analysis library
Azure subscription - Create one for free
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
Setting up
Install the client library
After installing Python, you can install the client libraries with:
pip install pandas azure-ai-anomalydetector
import os
import time
from datetime import datetime
Create variables for your Anomaly Detector resource's key and endpoint.
subscription_key = "ANOMALY_DETECTOR_KEY"
anomaly_detector_endpoint = "ANOMALY_DETECTOR_ENDPOINT"
Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Python:
Authenticate the client
Train the model
Detect anomalies
Export model
Delete model
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:
TIMESTAMP              VALUE
2019-04-01T00:00:00Z   5
2019-04-01T00:01:00Z   3.6
2019-04-01T00:02:00Z   4
...                    ...
Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.
# <client>
self.ad_client = AnomalyDetectorClient(AzureKeyCredential(self.sub_key), self.end_point)
# </client>
if not data_source:
# Datafeed for test only
self.data_source = "YOUR_SAMPLE_ZIP_FILE_LOCATED_IN_AZURE_BLOB_STORAGE_WITH_SAS"
else:
self.data_source = data_source
print("Done.", "\n--------------------")
print("{:d} available models after training.".format(len(new_model_list)))
Detect anomalies
Use the detect_anomaly and get_detection_result functions to determine if there are any anomalies within your
datasource. You will need to pass the model ID for the model that you just trained.
def detect(self, model_id, start_time, end_time, max_tryout=500):
    try:
        # Submit the detection request; the result ID comes back in the Location header.
        detection_req = DetectionRequest(source=self.data_source, start_time=start_time, end_time=end_time)
        response_header = self.ad_client.detect_anomaly(model_id, detection_req,
                                                        cls=lambda *args: [args[i] for i in range(len(args))])[-1]
        result_id = response_header['Location'].split("/")[-1]
        # Poll until the result is ready, or the tryout budget is exhausted.
        r = self.ad_client.get_detection_result(result_id)
        tryout_count = 0
        while r.summary.status != "READY" and tryout_count < max_tryout:
            time.sleep(1)
            r = self.ad_client.get_detection_result(result_id)
            tryout_count += 1
        if r.summary.status != "READY":
            print("Request timeout after {:d} tryouts.".format(max_tryout))
            return None
    except HttpResponseError as e:
        print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
    except Exception as e:
        raise e
    return r
Export model
If you want to export a model use export_model and pass the model ID of the model you want to export:
Delete model
To delete a model use delete_multivariate_model and pass the model ID of the model you want to delete:
# Reference
result = sample.detect(model_id, datetime(2021, 1, 2, 12, 0, 0), datetime(2021, 1, 3, 0, 0, 0))
print("Result ID:\t", result.result_id)
print("Result summary:\t", result.summary)
print("Result length:\t", len(result.results))
# Export model
sample.export_model(model_id, "model.zip")
# Delete model
sample.delete_model(model_id)
Before running, it can be helpful to check your project against the full sample code that this quickstart is derived
from.
We also have an in-depth Jupyter Notebook to help you get started.
Run the application with the python command and your file name.
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.
Portal
Azure CLI
Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
Get started with the Anomaly Detector multivariate client library for Java. Follow these steps to install the
package and start using the algorithms provided by the service. The new multivariate anomaly detection APIs
enable developers to easily integrate advanced AI for detecting anomalies from groups of metrics, without the
need for machine learning knowledge or labeled data. Dependencies and inter-correlations between different
signals are automatically accounted for as key factors. This helps you to proactively protect your complex
systems from failures.
Use the Anomaly Detector multivariate client library for Java to:
Detect system level anomalies from a group of time series.
When any individual time series won't tell you much and you have to look at all signals to detect a problem.
Predictive maintenance of expensive physical assets with tens to hundreds of different types of sensors
measuring various aspects of system health.
Library source code | Package (Maven) | Sample code
Prerequisites
Azure subscription - Create one for free
The current version of the Java Development Kit (JDK)
The Gradle build tool, or another dependency manager.
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
Setting up
Create a new Gradle project
This quickstart uses the Gradle dependency manager. You can find more client library information on the Maven
Central Repository.
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
Run the gradle init command from your working directory. This command will create essential build files for
Gradle, including build.gradle.kts which is used at runtime to create and configure your application.
dependencies {
compile("com.azure:azure-ai-anomalydetector")
}
mkdir -p src/main/java
Navigate to the new folder and create a file called AnomalyDetectorQuickstarts.java. Open it in your preferred
editor or IDE and add the following import statements:
package com.azure.ai.anomalydetector;
import com.azure.ai.anomalydetector.models.*;
import com.azure.core.credential.AzureKeyCredential;
import com.azure.core.http.*;
import com.azure.core.http.policy.*;
import com.azure.core.http.rest.PagedIterable;
import com.azure.core.http.rest.PagedResponse;
import com.azure.core.http.rest.Response;
import com.azure.core.http.rest.StreamResponse;
import com.azure.core.util.Context;
import reactor.core.publisher.Flux;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.time.*;
import java.time.format.DateTimeFormatter;
import java.util.Iterator;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;
Create variables for your resource's Azure endpoint and key. Create another variable for the example data file.
To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of
multiple time series that meet the following requirements:
Each time series should be a CSV file with two (and only two) columns, "timestamp" and "value" (all in
lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be
integers or decimals with any number of decimal places. For example:
TIMESTAMP              VALUE
2019-04-01T00:00:00Z   5
2019-04-01T00:01:00Z   3.6
2019-04-01T00:02:00Z   4
...                    ...
Each CSV file should be named after a different variable that will be used for model training. For example,
"temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders.
The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you
generate the blob SAS (Shared access signatures) URL for the zip file, it can be used for training. Refer to this
document for how to generate SAS URLs from Azure Blob Storage.
Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Java:
Authenticate the client
Train a model
Detect anomalies
Export model
Delete model
Train a model
Construct a model result and train model
First we need to construct a model request. Make sure that start and end time align with your data source.
To use the Anomaly Detector multivariate APIs, we need to train our own model before using detection. Data
used for training is a batch of time series. Each time series should be in a CSV file with only two columns,
"timestamp" and "value" (the column names should be exactly the same), and each CSV file should be named
after the variable for that time series. All of the time series should be zipped into one zip file and uploaded to
Azure Blob storage; there is no requirement for the zip file name. Alternatively, an extra meta.json file can be
included in the zip file if you wish the name of a variable to be different from its .csv file name. Once we
generate the blob SAS (Shared access signatures) URL, we can use the URL to the zip file for training.
Path path = Paths.get("test-data.csv");
List<String> requestData = Files.readAllLines(path);
List<TimeSeriesPoint> series = requestData.stream()
    .map(line -> line.trim())
    .filter(line -> line.length() > 0)
    .map(line -> line.split(",", 2))
    .filter(splits -> splits.length == 2)
    .map(splits -> {
        TimeSeriesPoint timeSeriesPoint = new TimeSeriesPoint();
        timeSeriesPoint.setTimestamp(OffsetDateTime.parse(splits[0]));
        timeSeriesPoint.setValue(Float.parseFloat(splits[1]));
        return timeSeriesPoint;
    })
    .collect(Collectors.toList());
Integer skip = 0;
Integer top = 5;
PagedIterable<ModelSnapshot> response = anomalyDetectorClient.listMultivariateModel(skip, top);
Iterator<PagedResponse<ModelSnapshot>> ite = response.iterableByPage().iterator();
while (true) {
    Response<Model> response_model =
        anomalyDetectorClient.getMultivariateModelWithResponse(model_id, Context.NONE);
    UUID model = response_model.getValue().getModelId();
    System.out.println(response_model.getStatusCode());
    System.out.println(response_model.getValue().getModelInfo().getStatus());
    System.out.println(model);
    if (response_model.getValue().getModelInfo().getStatus() == ModelStatus.READY) {
        break;
    }
    TimeUnit.SECONDS.sleep(10); // requires java.util.concurrent.TimeUnit; avoids a tight polling loop
}
Detect anomalies
DetectionRequest detectionRequest =
    new DetectionRequest().setSource(source).setStartTime(startTime).setEndTime(endTime);
DetectAnomalyResponse detectAnomalyResponse =
    anomalyDetectorClient.detectAnomalyWithResponse(model_id, detectionRequest, Context.NONE);
String result = detectAnomalyResponse.getDeserializedHeaders().getLocation();
// The result ID is the last segment of the returned Location header.
UUID result_id = UUID.fromString(result.substring(result.lastIndexOf("/") + 1));
while (true) {
    DetectionResult response_result = anomalyDetectorClient.getDetectionResult(result_id);
    if (response_result.getSummary().getStatus() == DetectionStatus.READY) {
        break;
    } else if (response_result.getSummary().getStatus() == DetectionStatus.FAILED) {
        System.out.println("Detection failed.");
        break;
    }
}
Export model
To export your trained model, use the exportModelWithResponse function.
Delete model
To delete an existing model that is available to the current resource use the deleteMultivariateModelWithResponse
function.
Response<Void> deleteMultivariateModelWithResponse =
anomalyDetectorClient.deleteMultivariateModelWithResponse(model_id, Context.NONE);
Build and run the application with:
gradle build
gradle run
Next steps
Anomaly Detector multivariate best practices
Quickstart: Use the Anomaly Detector client library
3/5/2021 • 23 minutes to read
Get started with the Anomaly Detector client library for C#. Follow these steps to install the package and start using
the algorithms provided by the service. The Anomaly Detector service enables you to find abnormalities in your
time series data by automatically using the best-fitting models on it, regardless of industry, scenario, or data
volume.
Use the Anomaly Detector client library for C# to:
Detect anomalies throughout your time series data set, as a batch request
Detect the anomaly status of the latest data point in your time series
Detect trend change points in your data set.
Library reference documentation | Library source code | Package (NuGet) | Find the code on GitHub
Prerequisites
Azure subscription - Create one for free
The current version of .NET Core
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
Setting up
Create an environment variable
NOTE
The endpoints for non-trial resources created after July 1, 2019 use the custom subdomain format shown below. For
more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.
Using your key and endpoint from the resource you created, create two environment variables for
authentication:
ANOMALY_DETECTOR_KEY - The resource key for authenticating your requests.
ANOMALY_DETECTOR_ENDPOINT - The resource endpoint for sending API requests. It will look like this:
https://<your-custom-subdomain>.api.cognitive.microsoft.com
After you add the environment variable, restart the console window.
Create a new .NET Core application
In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console
app with the name anomaly-detector-quickstart . This command creates a simple "Hello World" project with a
single C# source file: Program.cs.
Change your directory to the newly created app folder. You can build the application with:
dotnet build
...
Build succeeded.
0 Warning(s)
0 Error(s)
...
From the project directory, open the Program.cs file and add the following using directives:
using System;
using System.IO;
using System.Text;
using System.Linq;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.AnomalyDetector;
using Microsoft.Azure.CognitiveServices.AnomalyDetector.Models;
In the application's main() method, create variables for your resource's endpoint and key, read from the environment variables. If you created an environment variable after the application launched, the editor, IDE, or shell running it will need to be closed and reloaded to access the variable.
static void Main(string[] args){
//This sample assumes you have created an environment variable for your key and endpoint
string endpoint = Environment.GetEnvironmentVariable("ANOMALY_DETECTOR_ENDPOINT");
string key = Environment.GetEnvironmentVariable("ANOMALY_DETECTOR_KEY");
string datapath = "request-data.csv";
Request request = GetSeriesFromFile(datapath); // The request payload with points from the data file
Object model
The Anomaly Detector client is an AnomalyDetectorClient object that authenticates to Azure using ApiKeyServiceClientCredentials, which contains your key. The client can do anomaly detection on an entire dataset using EntireDetectAsync(), or on the latest data point using LastDetectAsync(). The ChangePointDetectAsync method detects points that mark changes in a trend.
Time series data is sent as a series of Points in a Request object. The Request object contains properties to
describe the data (Granularity for example), and parameters for the anomaly detection.
The Anomaly Detector response is either an EntireDetectResponse, LastDetectResponse, or ChangePointDetectResponse object, depending on the method used.
Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for .NET:
Authenticate the client
Load a time series data set from a file
Detect anomalies in the entire data set
Detect the anomaly status of the latest data point
Detect the change points in the data set
if (result.IsAnomaly.Contains(true))
{
    Console.WriteLine("An anomaly was detected at index:");
    for (int i = 0; i < request.Series.Count; ++i)
    {
        if (result.IsAnomaly[i])
        {
            Console.Write(i);
            Console.Write(" ");
        }
    }
    Console.WriteLine();
}
else
{
    Console.WriteLine("No anomalies detected in the series.");
}
}
Detect the anomaly status of the latest data point
Create a method to call the client's LastDetectAsync() method with the Request object and await the response
as a LastDetectResponse object. Check the response's IsAnomaly attribute to determine if the latest data point
sent was an anomaly or not.
if (result.IsAnomaly)
{
    Console.WriteLine("The latest point was detected as an anomaly.");
}
else
{
    Console.WriteLine("The latest point was not detected as an anomaly.");
}
}
if (result.IsChangePoint.Contains(true))
{
    Console.WriteLine("A change point was detected at index:");
    for (int i = 0; i < request.Series.Count; ++i)
    {
        if (result.IsChangePoint[i])
        {
            Console.Write(i);
            Console.Write(" ");
        }
    }
    Console.WriteLine();
}
else
{
    Console.WriteLine("No change point detected in the series.");
}
}
Run the application with:
dotnet run
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.
Portal
Azure CLI
Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
Get started with the Anomaly Detector client library for JavaScript. Follow these steps to install the package and start
using the algorithms provided by the service. The Anomaly Detector service enables you to find abnormalities in
your time series data by automatically using the best-fitting models on it, regardless of industry, scenario, or
data volume.
Use the Anomaly Detector client library for JavaScript to:
Detect anomalies throughout your time series data set, as a batch request
Detect the anomaly status of the latest data point in your time series
Detect trend change points in your data set.
Library reference documentation | Library source code | Package (npm) | Find the code on GitHub
Prerequisites
Azure subscription - Create one for free
The current version of Node.js
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
Setting up
Create an environment variable
NOTE
The endpoints for non-trial resources created after July 1, 2019 use the custom subdomain format shown below. For
more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.
Using your key and endpoint from the resource you created, create two environment variables for
authentication:
ANOMALY_DETECTOR_KEY - The resource key for authenticating your requests.
ANOMALY_DETECTOR_ENDPOINT - The resource endpoint for sending API requests. It will look like this:
https://<your-custom-subdomain>.api.cognitive.microsoft.com
After you add the environment variable, restart the console window.
Create a new Node.js application
In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app, and navigate to it.
Run the npm init command to create a node application with a package.json file.
npm init
'use strict'
const fs = require('fs');
const parse = require("csv-parse/lib/sync");
const { AnomalyDetectorClient } = require('@azure/ai-anomaly-detector');
const { AzureKeyCredential } = require('@azure/core-auth');
Create variables for your resource's Azure endpoint and key. If you created the environment variable after you launched the application, you will need to close and reopen the editor, IDE, or shell running it to access the variable. Create another variable for the example data file you will download in a later step, and an empty list for the data points. Then create an AzureKeyCredential object to contain the key.
// Authentication variables
// Add your Anomaly Detector subscription key and endpoint to your environment variables.
let key = process.env['ANOMALY_DETECTOR_KEY'];
let endpoint = process.env['ANOMALY_DETECTOR_ENDPOINT'];
Object model
The Anomaly Detector client is an AnomalyDetectorClient object that authenticates to Azure using your key. The client can do anomaly detection on an entire dataset using entireDetect(), or on the latest data point using lastDetect(). The changePointDetect() method detects points that mark changes in a trend.
Time series data is sent as series of Points in a Request object. The Request object contains properties to
describe the data (Granularity for example), and parameters for the anomaly detection.
The Anomaly Detector response is a LastDetectResponse, EntireDetectResponse, or ChangePointDetectResponse
object depending on the method used.
Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Node.js:
Authenticate the client
Load a time series data set from a file
Detect anomalies in the entire data set
Detect the anomaly status of the latest data point
Detect the change points in the data set
function readFile() {
    let input = fs.readFileSync(CSV_FILE).toString();
    let parsed = parse(input, { skip_empty_lines: true });
    parsed.forEach(function (e) {
        points.push({ timestamp: new Date(e[0]), value: parseFloat(e[1]) });
    });
}
readFile()
Detect anomalies in the entire data set
Call the API to detect anomalies through the entire time series as a batch with the client's entireDetect() method.
Store the returned EntireDetectResponse object. Iterate through the response's isAnomaly list, and print the
index of any true values. These values correspond to the index of anomalous data points, if any were found.
}
batchCall()
Run the application with:
node index.js
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.
Portal
Azure CLI
Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
Get started with the Anomaly Detector client library for Python. Follow these steps to install the package and start
using the algorithms provided by the service. The Anomaly Detector service enables you to find abnormalities in
your time series data by automatically using the best-fitting models on it, regardless of industry, scenario, or
data volume.
Use the Anomaly Detector client library for Python to:
Detect anomalies throughout your time series data set, as a batch request
Detect the anomaly status of the latest data point in your time series
Detect trend change points in your data set.
Library reference documentation | Library source code | Package (PyPi) | Find the sample code on GitHub
Prerequisites
Python 3.x
The Pandas data analysis library
Azure subscription - Create one for free
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and click the Go to resource button.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll paste your key and endpoint into the code below later in the quickstart.
You can use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
Setting up
Create an environment variable
NOTE
The endpoints for non-trial resources created after July 1, 2019 use the custom subdomain format shown below. For
more information and a complete list of regional endpoints, see Custom subdomain names for Cognitive Services.
Using your key and endpoint from the resource you created, create two environment variables for
authentication:
ANOMALY_DETECTOR_KEY - The resource key for authenticating your requests.
ANOMALY_DETECTOR_ENDPOINT - The resource endpoint for sending API requests. It will look like this:
https://<your-custom-subdomain>.api.cognitive.microsoft.com
After you add the environment variable, restart the console window.
Create a new Python application
Create a new Python file and import the following libraries.
import os
from azure.ai.anomalydetector import AnomalyDetectorClient
from azure.ai.anomalydetector.models import DetectRequest, TimeSeriesPoint, TimeGranularity, \
AnomalyDetectorError
from azure.core.credentials import AzureKeyCredential
import pandas as pd
Create variables for your key and endpoint, read from the environment variables you created, and for the path to a time series data file.
SUBSCRIPTION_KEY = os.environ["ANOMALY_DETECTOR_KEY"]
ANOMALY_DETECTOR_ENDPOINT = os.environ["ANOMALY_DETECTOR_ENDPOINT"]
TIME_SERIES_DATA_PATH = os.path.join("./sample_data", "request-data.csv")
Object model
The Anomaly Detector client is an AnomalyDetectorClient object that authenticates to Azure using your key. The client can do anomaly detection on an entire dataset using detect_entire_series, or on the latest data point using detect_last_point. The detect_change_point function detects points that mark changes in a trend.
Time series data is sent as a series of TimeSeriesPoint objects. The DetectRequest object contains properties to describe the data (TimeGranularity, for example), and parameters for the anomaly detection.
The Anomaly Detector response is a LastDetectResponse, EntireDetectResponse, or ChangePointDetectResponse
object depending on the method used.
Code examples
These code snippets show you how to do the following with the Anomaly Detector client library for Python:
Authenticate the client
Load a time series data set from a file
Detect anomalies in the entire data set
Detect the anomaly status of the latest data point
Detect the change points in the data set
series = []
data_file = pd.read_csv(TIME_SERIES_DATA_PATH, header=None, encoding='utf-8', parse_dates=[0])
for index, row in data_file.iterrows():
    series.append(TimeSeriesPoint(timestamp=row[0], value=row[1]))
Create a DetectRequest object with your time series, and the TimeGranularity (or periodicity) of its data points.
For example, TimeGranularity.daily .
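As a rough sketch of what a DetectRequest carries, it corresponds to a JSON payload like the one below at the REST level (the sample values are hypothetical; field names follow the v1.0 API):

```python
import json

# Hypothetical daily readings; real requests need a longer history.
series = [
    {"timestamp": "2018-03-01T00:00:00Z", "value": 32858923},
    {"timestamp": "2018-03-02T00:00:00Z", "value": 29615278},
]

# Rough REST-level equivalent of DetectRequest(series=..., granularity=TimeGranularity.daily)
request_body = {
    "series": series,        # the service enforces a minimum series length (12 points)
    "granularity": "daily",  # REST-level spelling of TimeGranularity.daily
}

payload = json.dumps(request_body)
```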
try:
    response = client.detect_entire_series(request)
except AnomalyDetectorError as e:
    print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
except Exception as e:
    print(e)

if any(response.is_anomaly):
    print('An anomaly was detected at index:')
    for i, value in enumerate(response.is_anomaly):
        if value:
            print(i)
else:
    print('No anomalies were detected in the time series.')
try:
    response = client.detect_last_point(request)
except AnomalyDetectorError as e:
    print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
except Exception as e:
    print(e)

if response.is_anomaly:
    print('The latest point is detected as an anomaly.')
else:
    print('The latest point is not detected as an anomaly.')
try:
    response = client.detect_change_point(request)
except AnomalyDetectorError as e:
    print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message))
except Exception as e:
    print(e)

if any(response.is_change_point):
    print('A change point was detected at index:')
    for i, value in enumerate(response.is_change_point):
        if value:
            print(i)
else:
    print('No change points were detected in the time series.')
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.
Portal
Azure CLI
Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
In this quickstart, you learn how to detect anomalies in a batch of time series data using the Anomaly Detector
service and cURL.
For a high-level look at Anomaly Detector concepts, see the overview article.
Prerequisites
Azure subscription - Create one for free
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint. Wait for it to deploy and select the Go to resource button.
You will need the key and endpoint address from the resource you create to use the REST API. You can
use the free pricing tier ( F0 ) to try the service, and upgrade later to a paid tier for production.
curl -v -X POST "https://{endpoint}/anomalydetector/v1.0/timeseries/entire/detect" \
-H "Content-Type: application/json" \
-H "Ocp-Apim-Subscription-Key: {subscription key}" \
-d "@{path_to_file.json}"
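If you prefer not to use cURL, the same request can be assembled with Python's standard library. The endpoint and key below are placeholders, and this sketch stops short of actually sending the request:

```python
import json
import urllib.request

endpoint = "https://<your-custom-subdomain>.cognitive.microsoft.com"  # placeholder
subscription_key = "<your-key>"                                       # placeholder

# An empty series is used here only to keep the sketch self-contained;
# a real request carries your time series points.
body = json.dumps({"granularity": "daily", "series": []}).encode("utf-8")

req = urllib.request.Request(
    url=endpoint + "/anomalydetector/v1.0/timeseries/entire/detect",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": subscription_key,
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here.
```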
If you used the sample data from the pre-requisites, you should receive a response 200 with the following
results:
{
"expectedValues": [
827.7940908243968,
798.9133774671927,
888.6058431807189,
900.5606407986661,
962.8389426378304,
933.2591606306954,
891.0784104799666,
856.1781601363697,
809.8987227908941,
807.375129007505,
764.3196682448518,
803.933498594564,
823.5900620883058,
794.0905641334288,
883.164245249282,
883.164245249282,
894.8419000690953,
956.8430591101258,
927.6285055190114,
885.812983784303,
851.6424797402517,
806.0927886943216,
804.6826815312029,
762.74070738882,
804.0251702513732,
825.3523662579559,
798.0404188724976,
889.3016505577698,
902.4226124345937,
965.867078532635,
937.3200495736695,
896.1720524711102,
862.0087368413656,
816.4662342097423,
814.4297745524709,
771.8614479159354,
811.859271346729,
831.8998279215521,
802.947544797165,
892.5684407435083,
904.5488214533809,
966.8527063844707,
937.3168391003043,
895.180003672544,
860.3649596356635,
814.1707285969043,
811.9054862686213,
769.1083769610742,
809.2328084659704
],
"upperMargins": [
41.389704541219835,
39.94566887335964,
44.43029215903594,
45.02803203993331,
48.14194713189152,
46.66295803153477,
44.55392052399833,
42.808908006818484,
40.494936139544706,
40.36875645037525,
38.215983412242586,
40.196674929728196,
41.17950310441529,
39.70452820667144,
44.1582122624641,
44.74209500345477,
47.84215295550629,
46.38142527595057,
44.290649189215145,
42.58212398701258,
40.30463943471608,
40.234134076560146,
38.137035369441,
40.201258512568664,
41.267618312897795,
39.90202094362488,
44.46508252788849,
45.121130621729684,
48.29335392663175,
46.86600247868348,
44.80860262355551,
43.100436842068284,
40.82331171048711,
40.721488727623544,
40.721488727623544,
38.593072395796774,
40.59296356733645,
41.5949913960776,
40.14737723985825,
44.62842203717541,
45.227441072669045,
48.34263531922354,
46.86584195501521,
44.759000183627194,
43.01824798178317,
40.70853642984521,
40.59527431343106,
38.45541884805371,
40.46164042329852
],
"lowerMargins": [
41.389704541219835,
39.94566887335964,
44.43029215903594,
45.02803203993331,
48.14194713189152,
46.66295803153477,
44.55392052399833,
42.808908006818484,
40.494936139544706,
40.36875645037525,
38.215983412242586,
40.196674929728196,
41.17950310441529,
39.70452820667144,
44.1582122624641,
44.74209500345477,
47.84215295550629,
46.38142527595057,
44.290649189215145,
42.58212398701258,
40.30463943471608,
40.234134076560146,
38.137035369441,
40.201258512568664,
41.267618312897795,
39.90202094362488,
44.46508252788849,
45.121130621729684,
48.29335392663175,
46.86600247868348,
44.80860262355551,
43.100436842068284,
40.82331171048711,
40.721488727623544,
38.593072395796774,
40.59296356733645,
41.5949913960776,
40.14737723985825,
44.62842203717541,
45.227441072669045,
48.34263531922354,
46.86584195501521,
44.759000183627194,
43.01824798178317,
40.70853642984521,
40.59527431343106,
38.45541884805371,
40.46164042329852
],
"isAnomaly": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
],
"isPositiveAnomaly": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
true,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
],
"isNegativeAnomaly": [
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false,
false
],
"period": 12
}
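A response like the one above is easy to post-process with a few lines of standard-library Python; the abbreviated response below is a hypothetical stand-in for the full JSON:

```python
response = {  # abbreviated stand-in for the JSON shown above
    "expectedValues": [827.79, 798.91, 889.30],
    "upperMargins": [41.39, 39.95, 44.47],
    "lowerMargins": [41.39, 39.95, 44.47],
    "isAnomaly": [False, False, True],
    "period": 12,
}

# Collect the indices the service flagged as anomalous.
anomaly_indices = [i for i, flag in enumerate(response["isAnomaly"]) if flag]
for i in anomaly_indices:
    expected = response["expectedValues"][i]
    print(f"index {i}: expected {expected}, "
          f"margin +{response['upperMargins'][i]}/-{response['lowerMargins'][i]}")
```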
Clean up resources
If you want to clean up and remove a Cognitive Services subscription, you can delete the resource or resource
group. Deleting the resource group also deletes any other resources associated with the resource group.
Portal
Azure CLI
Next steps
Concepts:
What is the Anomaly Detector API?
Anomaly detection methods
Best practices when using the Anomaly Detector API.
Tutorials:
Visualize anomalies as a batch using Power BI
Anomaly detection on streaming data using Azure Databricks
How to: Use the Anomaly Detector API on your time series data
3/5/2021 • 2 minutes to read
The Anomaly Detector API provides two methods of anomaly detection. You can either detect anomalies as a
batch throughout your times series, or as your data is generated by detecting the anomaly status of the latest
data point. The detection model returns anomaly results along with each data point's expected value, and the
upper and lower anomaly detection boundaries. You can use these values to visualize the range of normal values, and anomalies in the data.
NOTE
The following request URLs must be combined with the appropriate endpoint for your subscription. For example:
https://<your-custom-subdomain>.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/entire/detect
Batch detection
To detect anomalies throughout a batch of data points over a given time range, use the following request URI
with your time series data:
/timeseries/entire/detect .
By sending your time series data at once, the API will generate a model using the entire series, and analyze each
data point with it.
Streaming detection
To continuously detect anomalies on streaming data, use the following request URI with your latest data point:
/timeseries/last/detect .
By sending new data points as you generate them, you can monitor your data in real time. A model will be
generated with the data points you send, and the API will determine if the latest point in the time series is an
anomaly.
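The streaming pattern can be sketched as a loop that keeps a sliding window of recent points and asks about only the newest one. The detect_latest_point function below is a stand-in stub; a real implementation would POST the window to /timeseries/last/detect and read isAnomaly from the response:

```python
from collections import deque

WINDOW = 12  # the API needs a minimum history of points per request

def detect_latest_point(window):
    """Stub standing in for a call to /timeseries/last/detect.
    Flags the newest value if it strays far from the rest of the window."""
    values = [p["value"] for p in window]
    mean = sum(values) / len(values)
    spread = (max(values) - min(values) + 1e-9) / len(values)
    return abs(values[-1] - mean) > 3 * spread

def monitor(stream):
    """Feed points as they arrive; yield (timestamp, is_anomaly) once warmed up."""
    window = deque(maxlen=WINDOW)
    for point in stream:
        window.append(point)
        if len(window) == WINDOW:
            yield point["timestamp"], detect_latest_point(list(window))
```

Because only the latest point is evaluated per call, each new reading costs one request, which is what makes this pattern suitable for real-time monitoring.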
Boundary calculation
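As a hedged sketch of how the detection boundaries relate to the expectedValues and margins in the response (the exact formula should be verified against the current API reference), the boundaries are derived from each point's expected value, its margin, and the marginScale (sensitivity) parameter:

```python
def upper_boundary(expected_value, upper_margin, margin_scale=99):
    """upperBoundary = expectedValue + (100 - marginScale) * upperMargin"""
    return expected_value + (100 - margin_scale) * upper_margin

def lower_boundary(expected_value, lower_margin, margin_scale=99):
    """lowerBoundary = expectedValue - (100 - marginScale) * lowerMargin"""
    return expected_value - (100 - margin_scale) * lower_margin

# At the default sensitivity of 99, each boundary sits one margin away from
# the expected value; lowering the sensitivity widens the band.
```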
Next Steps
What is the Anomaly Detector API?
Quickstart: Detect anomalies in your time series data using the Anomaly Detector
Deploy an Anomaly Detector module to IoT Edge
3/5/2021 • 3 minutes to read
Learn how to deploy the Cognitive Services Anomaly Detector module to an IoT Edge device. Once it's deployed
into IoT Edge, the module runs in IoT Edge together with other modules as container instances. It exposes the
exact same APIs as an Anomaly Detector container instance running in a standard docker container
environment.
Prerequisites
Use an Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
Install the Azure CLI.
An IoT Hub and an IoT Edge device.
4. Click Create and wait for the resource to be created. After it is created, navigate to the resource page
5. Collect configured endpoint and an API key:
Keys and Endpoint tab in the portal
http://<your-edge-device-ipaddress>:5000/status
Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.
Next steps
Review Install and run containers for pulling the container image and run the container
Review Configure containers for configuration settings
Learn more about Anomaly Detector API service
Install and run Docker containers for the Anomaly Detector API
3/5/2021 • 9 minutes to read
NOTE
The container image location has recently changed. Read this article to see the updated location for this container.
Containers enable you to use the Anomaly Detector API in your own environment. Containers are great for specific
security and data governance requirements. In this article you'll learn how to download, install, and run an
Anomaly Detector container.
Anomaly Detector offers a single Docker container for using the API on-premises. Use the container to:
Use the Anomaly Detector's algorithms on your data
Monitor streaming data, and detect anomalies as they occur in real-time.
Detect anomalies throughout your data set as a batch.
Detect trend change points in your data set as a batch.
Adjust the anomaly detection algorithm's sensitivity to better fit your data.
For detailed information about the API, please see:
Learn more about Anomaly Detector API service
If you don't have an Azure subscription, create a free account before you begin.
Prerequisites
You must meet the following prerequisites before using Anomaly Detector containers:
Docker Engine: You need the Docker Engine installed on a host computer. Docker provides packages that configure the Docker environment on macOS, Windows, and Linux. For a primer on Docker and container basics, see the Docker overview.
Familiarity with Docker: You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic docker commands.
Anomaly Detector resource: In order to use these containers, you must have:
The Endpoint URI value is available on the Azure portal Overview page of the corresponding Cognitive Service
resource. Navigate to the Overview page, hover over the Endpoint, and a Copy to clipboard icon will appear.
Copy and use where needed.
Keys {API_KEY}
This key is used to start the container, and is available on the Azure portal's Keys page of the corresponding
Cognitive Service resource. Navigate to the Keys page, and click on the Copy to clipboard icon.
IMPORTANT
These subscription keys are used to access your Cognitive Service API. Do not share your keys. Store them securely, for
example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to
make an API call. When regenerating the first key, you can use the second key for continued access to the service.
Container: cognitive-services-anomaly-detector
Repository: mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest
TIP
You can use the docker images command to list your downloaded container images. For example, the following command
lists the ID, repository, and tag of each downloaded container image, formatted as a table:
This command:
Runs an Anomaly Detector container from the container image
Allocates one CPU core and 4 gigabytes (GB) of memory
Exposes TCP port 5000 and allocates a pseudo-TTY for the container
Automatically removes the container after it exits. The container image is still available on the host computer.
IMPORTANT
The Eula , Billing , and ApiKey options must be specified to run the container; otherwise, the container won't start.
For more information, see Billing.
http://localhost:5000/status
Also requested with GET, this verifies if the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.
Troubleshooting
If you run the container with an output mount and logging enabled, the container generates log files that are
helpful to troubleshoot issues that happen while starting or running the container.
TIP
For more troubleshooting information and guidance, see Cognitive Services containers frequently asked questions (FAQ).
Billing
The Anomaly Detector containers send billing information to Azure, using an Anomaly Detector resource on
your Azure account.
Queries to the container are billed at the pricing tier of the Azure resource that's used for the ApiKey .
Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing
endpoint. You must enable the containers to communicate billing information with the billing endpoint at all
times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed,
to Microsoft.
Connect to Azure
The container needs the billing argument values to run. These values allow the container to connect to the
billing endpoint. The container reports usage about every 10 to 15 minutes. If the container doesn't connect to
Azure within the allowed time window, the container continues to run but doesn't serve queries until the billing
endpoint is restored. Reconnection is attempted 10 times, at the same interval of 10 to 15 minutes. If the container
can't connect to the billing endpoint within those 10 tries, it stops serving requests. See the Cognitive
Services container FAQ for an example of the information sent to Microsoft for billing.
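The connectivity behavior described above can be modeled as a small state function. The explicit attempt counter is our own illustration for clarity; the container manages this internally:

```python
def serving_state(consecutive_failed_attempts, max_attempts=10):
    """Model the billing-connectivity rules described above.

    The container keeps running throughout; it serves queries only while
    it hasn't exhausted its reconnection attempts (one attempt every
    10 to 15 minutes). After `max_attempts` failures it stops serving.
    """
    if consecutive_failed_attempts == 0:
        return "serving"
    if consecutive_failed_attempts < max_attempts:
        return "running, not serving (retrying billing endpoint)"
    return "running, not serving (retries exhausted)"
```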
Billing arguments
The docker run command will start the container when all three of the following options are provided with
valid values:
OPTION | DESCRIPTION
ApiKey | The API key of the Cognitive Services resource that's used to track billing information. The value of this option must be set to an API key for the provisioned resource that's specified in Billing.
Billing | The endpoint of the Cognitive Services resource that's used to track billing information. The value of this option must be set to the endpoint URI of a provisioned Azure resource.
Eula | Indicates that you accepted the license for the container. The value of this option must be set to accept.
Summary
In this article, you learned concepts and workflow for downloading, installing, and running Anomaly Detector
containers. In summary:
Anomaly Detector provides one Linux container for Docker, encapsulating anomaly detection with batch and
streaming detection, expected-range inference, and sensitivity tuning.
Container images are downloaded from a private Azure Container Registry dedicated for containers.
Container images run in Docker.
You can use either the REST API or SDK to call operations in Anomaly Detector containers by specifying the
host URI of the container.
You must specify billing information when instantiating a container.
IMPORTANT
Cognitive Services containers are not licensed to run without being connected to Azure for metering. Customers need to
enable the containers to communicate billing information with the metering service at all times. Cognitive Services
containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft.
Next steps
Review Configure containers for configuration settings
Deploy an Anomaly Detector container to Azure Container Instances
Learn more about Anomaly Detector API service
Configure Anomaly Detector containers
3/5/2021 • 8 minutes to read
The Anomaly Detector container runtime environment is configured using the docker run command
arguments. This container has several required settings, along with a few optional settings. Several examples of
the command are available. The container-specific settings are the billing settings.
Configuration settings
This container has the following configuration settings:
IMPORTANT
The ApiKey , Billing , and Eula settings are used together, and you must provide valid values for all three of them;
otherwise your container won't start. For more information about using these configuration settings to instantiate a
container, see Billing.
ApplicationInsights setting
Example:
InstrumentationKey=123456789
Eula setting
The Eula setting indicates that you've accepted the license for the container. You must specify a value for this
configuration setting, and the value must be set to accept .
Example:
Eula=accept
Cognitive Services containers are licensed under your agreement governing your use of Azure. If you do not
have an existing agreement governing your use of Azure, you agree that your agreement governing use of
Azure is the Microsoft Online Subscription Agreement, which incorporates the Online Services Terms. For
previews, you also agree to the Supplemental Terms of Use for Microsoft Azure Previews. By using the container
you agree to these terms.
Fluentd settings
Fluentd is an open-source data collector for unified logging. The Fluentd settings manage the container's
connection to a Fluentd server. The container includes a Fluentd logging provider, which allows your container to
write logs and, optionally, metric data to a Fluentd server.
The following table describes the configuration settings supported under the Fluentd section.
Logging settings
The Logging settings manage ASP.NET Core logging support for your container. You can use the same
configuration settings and values for your container that you use for an ASP.NET Core application.
The following logging providers are supported by the container:
PROVIDER | PURPOSE
Disk | The JSON logging provider. This logging provider writes log data to the output mount.
This container command stores logging information in the JSON format to the output mount:
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 <registry-location>/<image-name> Eula=accept Billing=<endpoint> ApiKey=<api-key> Logging:Disk:Format=json Mounts:Output=/output
This container command shows debugging information, prefixed with dbug, while the container is running:
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 <registry-location>/<image-name> Eula=accept Billing=<endpoint> ApiKey=<api-key> Logging:Console:LogLevel:Default=Debug
Disk logging
The Disk logging provider supports the following configuration settings:
For more information about configuring ASP.NET Core logging support, see Settings file configuration.
Mount settings
Use bind mounts to read and write data to and from the container. You can specify an input mount or output
mount by specifying the --mount option in the docker run command.
The Anomaly Detector containers don't use input or output mounts to store training or service data.
The exact syntax of the host mount location varies depending on the host operating system. Additionally, the
host computer's mount location may not be accessible due to a conflict between permissions used by the
Docker service account and the host mount location permissions.
Example:
--mount
type=bind,src=c:\output,target=/output
PLACEHOLDER | VALUE | FORMAT OR EXAMPLE
{ENDPOINT_URI} | The billing endpoint value is available on the Azure Anomaly Detector Overview page. | See gathering required parameters for explicit examples.
NOTE
New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of
regional endpoints, see Custom subdomain names for Cognitive Services.
IMPORTANT
The Eula , Billing , and ApiKey options must be specified to run the container; otherwise, the container won't start.
For more information, see Billing. The ApiKey value is the Key from the Azure Anomaly Detector Resource keys page.
Next steps
Deploy an Anomaly Detector container to Azure Container Instances
Learn more about Anomaly Detector API service
Deploy an Anomaly Detector container to Azure
Container Instances
3/5/2021 • 4 minutes to read
Learn how to deploy the Cognitive Services Anomaly Detector container to Azure Container Instances. This
procedure demonstrates the creation of an Anomaly Detector resource. Then we discuss pulling the associated
container image. Finally, we highlight the ability to exercise the orchestration of the two from a browser. Using
containers can shift the developers' attention away from managing infrastructure to instead focusing on
application development.
Prerequisites
Use an Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
Install the Azure CLI (az).
Docker engine and validate that the Docker CLI works in a console window.
4. Click Create and wait for the resource to be created. After it is created, navigate to the resource page.
5. Collect the configured endpoint and an API key:
apiVersion: 2018-10-01
location: # < Valid location >
name: # < Container Group name >
properties:
  imageRegistryCredentials: # This is only required if you are pulling a non-public image that requires authentication to access, for example Text Analytics for health.
  - server: containerpreview.azurecr.io
    username: # < The username for the preview container registry >
    password: # < The password for the preview container registry >
  containers:
  - name: # < Container name >
    properties:
      image: # < Repository/Image name >
      environmentVariables: # These env vars are required
      - name: eula
        value: accept
      - name: billing
        value: # < Service specific Endpoint URL >
      - name: apikey
        value: # < Service specific API key >
      resources:
        requests:
          cpu: 4 # Always refer to recommended minimal resources
          memoryInGb: 8 # Always refer to recommended minimal resources
      ports:
      - port: 5000
  osType: Linux
  volumes: # This node is only required for container instances that pull their model in at runtime, such as LUIS.
  - name: aci-file-share
    azureFile:
      shareName: # < File share name >
      storageAccountName: # < Storage account name >
      storageAccountKey: # < Storage account key >
  restartPolicy: OnFailure
  ipAddress:
    type: Public
    ports:
    - protocol: tcp
      port: 5000
tags: null
type: Microsoft.ContainerInstance/containerGroups
NOTE
Not all locations have the same CPU and Memory availability. Refer to the location and resources table for the listing of
available resources for containers per location and OS.
We'll rely on the YAML file we created for the az container create command. From the Azure CLI, execute the
az container create command, replacing <resource-group> with your own:
az container create --resource-group <resource-group> --file <path-to-yaml-file>
Additionally, for securing values within a YAML deployment, refer to secure values.
If the command is valid, its output is Running...; after some time, the output changes to a JSON string
representing the newly created ACI resource. The container image will most likely not be available for a
while, but the resource is now deployed.
TIP
Pay close attention to the locations of public preview Azure Cognitive Services offerings, as the YAML will need to be
adjusted to match the location.
http://localhost:5000/status | Also requested with GET, this verifies whether the api-key used to start the container is valid without causing an endpoint query. This request can be used for Kubernetes liveness and readiness probes.
Azure Cognitive Services provides a layered security model. This model enables you to secure your Cognitive
Services accounts to a specific subset of networks. When network rules are configured, only applications
requesting data over the specified set of networks can access the account. You can limit access to your resources
with request filtering, allowing only requests that originate from specified IP addresses, IP ranges, or a list of
subnets in Azure Virtual Networks.
An application that accesses a Cognitive Services resource when network rules are in effect requires
authorization. Authorization is supported with Azure Active Directory (Azure AD) credentials or with a valid API
key.
IMPORTANT
Turning on firewall rules for your Cognitive Services account blocks incoming requests for data by default. In order to
allow requests through, one of the following conditions needs to be met:
The request should originate from a service operating within an Azure Virtual Network (VNet) on the
allowed subnet list of the target Cognitive Services account. The endpoint in requests originated from
VNet needs to be set as the custom subdomain of your Cognitive Services account.
Or the request should originate from an allowed list of IP addresses.
Requests that are blocked include those from other Azure services, from the Azure portal, from logging and
metrics services, and so on.
NOTE
This article has been updated to use the Azure Az PowerShell module. The Az PowerShell module is the recommended
PowerShell module for interacting with Azure. To get started with the Az PowerShell module, see Install Azure PowerShell.
To learn how to migrate to the Az PowerShell module, see Migrate Azure PowerShell from AzureRM to Az.
Scenarios
To secure your Cognitive Services resource, you should first configure a rule to deny access to traffic from all
networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from
specific VNets. This configuration enables you to build a secure network boundary for your applications. You can
also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections
from specific internet or on-premises clients.
Network rules are enforced on all network protocols to Azure Cognitive Services, including REST and
WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be
configured. You can apply network rules to existing Cognitive Services resources, or when you create new
Cognitive Services resources. Once network rules are applied, they're enforced for all requests.
NOTE
If you're using LUIS or Speech Services, the CognitiveServicesManagement tag only enables you to use the service with
the SDK or REST API. To access and use the LUIS portal and/or Speech Studio from a virtual network, you will need to use the
following tags:
AzureActiveDirectory
AzureFrontDoor.Frontend
AzureResourceManager
CognitiveServicesManagement
WARNING
Making changes to network rules can impact your applications' ability to connect to Azure Cognitive Services. Setting the
default network rule to deny blocks all access to the data unless specific network rules that grant access are also applied.
Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If
you are allowlisting IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses
from your on-premises network.
NOTE
Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory
tenant is currently only supported through PowerShell, CLI, and REST APIs. Such rules cannot be configured through the
Azure portal, though they may be viewed in the portal.
Azure portal
PowerShell
Azure CLI
5. Select the Virtual networks and Subnets options, and then select Enable.
6. To create a new virtual network and grant it access, select Add new virtual network.
7. Provide the information necessary to create the new virtual network, and then select Create .
NOTE
If a service endpoint for Azure Cognitive Services wasn't previously configured for the selected virtual network and
subnets, you can configure it as part of this operation.
Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection
during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use
PowerShell, CLI, or REST APIs.
8. To remove a virtual network or subnet rule, select ... to open the context menu for the virtual network or
subnet, and select Remove .
9. Select Save to apply your changes.
IMPORTANT
Be sure to set the default rule to deny , or network rules have no effect.
TIP
Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using
individual IP address rules.
IP network rules are only allowed for public internet IP addresses. IP address ranges reserved for private
networks (as defined in RFC 1918) aren't allowed in IP rules. Private networks include addresses that start with
10.* , 172.16.* - 172.31.* , and 192.168.* .
Only IPv4 addresses are supported at this time. Each Cognitive Services resource supports up to 100 IP network
rules, which may be combined with virtual network rules.
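The constraints above (public IPv4 only, no RFC 1918 private ranges, no /31 or /32 prefixes as ranges) can be checked client-side before calling Azure. This is a sketch using Python's standard ipaddress module; the helper name is our own, not an Azure API:

```python
import ipaddress

def is_valid_ip_rule_range(cidr):
    """Return True if `cidr` is acceptable as an IP network *range* rule.

    Per the constraints above: IPv4 only, public addresses only (RFC 1918
    private ranges are rejected), and /31 or /32 prefixes must instead be
    configured as individual IP address rules.
    """
    try:
        net = ipaddress.ip_network(cidr, strict=False)
    except ValueError:
        return False
    if net.version != 4:        # only IPv4 is supported at this time
        return False
    if net.prefixlen >= 31:     # /31 and /32 need individual IP rules
        return False
    return not net.is_private   # rejects 10/8, 172.16/12, 192.168/16, etc.
```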
Configuring access from on-premises networks
To grant access from your on-premises networks to your Cognitive Services resource with an IP network rule,
you must identify the internet facing IP addresses used by your network. Contact your network administrator
for help.
If you're using ExpressRoute on-premises for public peering or Microsoft peering, you'll need to identify the NAT
IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied
to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the
NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow
access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To
find your public peering ExpressRoute circuit IP addresses, open a support ticket with ExpressRoute via the
Azure portal. Learn more about NAT for ExpressRoute public and Microsoft peering.
Managing IP network rules
You can manage IP network rules for Cognitive Services resources through the Azure portal, PowerShell, or the
Azure CLI.
Azure portal
PowerShell
Azure CLI
5. To remove an IP network rule, select the trash can icon next to the address range.
6. Select Save to apply your changes.
IMPORTANT
Be sure to set the default rule to deny , or network rules have no effect.
TIP
When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Cognitive
Services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating
the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and
adding the DNS A records.
For more information on configuring your own DNS server to support private endpoints, refer to the following
articles:
Name resolution for resources in Azure virtual networks
DNS configuration for private endpoints
Pricing
For pricing details, see Azure Private Link pricing.
Next steps
Explore the various Azure Cognitive Services
Learn more about Azure Virtual Network Service Endpoints
Authenticate requests to Azure Cognitive Services
3/5/2021 • 8 minutes to read
Each request to an Azure Cognitive Service must include an authentication header. This header passes along a
subscription key or access token, which is used to validate your subscription for a service or group of services.
In this article, you'll learn about three ways to authenticate a request and the requirements for each.
Authenticate with a single-service or multi-service subscription key
Authenticate with a token
Authenticate with Azure Active Directory (AAD)
Prerequisites
Before you make a request, you need an Azure account and an Azure Cognitive Services subscription. If you
already have an account, go ahead and skip to the next section. If you don't have an account, we have a guide to
get you set up in minutes: Create a Cognitive Services account for Azure.
You can get your subscription key from the Azure portal after creating your account.
Authentication headers
Let's quickly review the authentication headers available for use with Azure Cognitive Services.
Authorization Use this header if you are using an authentication token. The
steps to perform a token exchange are detailed in the
following sections. The value provided follows this format:
Bearer <TOKEN> .
This option also uses a subscription key to authenticate requests. The main difference is that a subscription key
is not tied to a specific service; rather, a single key can be used to authenticate requests for multiple Cognitive
Services. See Cognitive Services pricing for information about regional availability, supported features, and
pricing.
The subscription key is provided in each request as the Ocp-Apim-Subscription-Key header.
Supported regions
When using the multi-service subscription key to make a request to api.cognitive.microsoft.com , you must
include the region in the URL. For example: westus.api.cognitive.microsoft.com .
When using multi-service subscription key with the Translator service, you must specify the subscription region
with the Ocp-Apim-Subscription-Region header.
Multi-service authentication is supported in these regions:
australiaeast
brazilsouth
canadacentral
centralindia
eastasia
eastus
japaneast
northeurope
southcentralus
southeastasia
uksouth
westcentralus
westeurope
westus
westus2
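The rules above can be sketched as a helper that builds a multi-service request. The function name and return shape are our own illustration; only the hostname pattern and the two Ocp-Apim-* header names come from the documentation:

```python
def build_multiservice_request(region, key, path):
    """Build the URL and headers for a multi-service subscription key call.

    The region goes into the hostname (e.g. westus.api.cognitive.microsoft.com),
    and for the Translator service the region must also be sent in the
    Ocp-Apim-Subscription-Region header.
    """
    url = "https://{}.api.cognitive.microsoft.com/{}".format(
        region, path.lstrip("/"))
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,  # required for Translator
    }
    return url, headers
```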
Sample requests
This is a sample call to the Bing Web Search API:
NOTE
QnA Maker also uses the Authorization header, but requires an endpoint key. For more information, see QnA Maker: Get
answer from knowledge base.
WARNING
The services that support authentication tokens may change over time; check the API reference for a service
before using this authentication method.
Both single service and multi-service subscription keys can be exchanged for authentication tokens.
Authentication tokens are valid for 10 minutes.
Authentication tokens are included in a request as the Authorization header. The token value provided must be
preceded by Bearer , for example: Bearer YOUR_AUTH_TOKEN .
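Because tokens expire after 10 minutes, a client typically caches the token and refreshes it before expiry. Below is a minimal sketch of such a cache; the class, the injected issue_token callable (standing in for the POST to /sts/v1.0/issueToken), and the refresh margin are our own assumptions, not part of any official SDK:

```python
import time

class TokenCache:
    """Cache an authentication token, refreshing before the 10-minute
    lifetime noted above elapses. `issue_token` is any callable returning
    a fresh token string; `clock` is injectable for testing."""

    LIFETIME_SECONDS = 10 * 60

    def __init__(self, issue_token, clock=time.monotonic, margin=60):
        self._issue = issue_token
        self._clock = clock
        self._margin = margin       # refresh this many seconds early
        self._token = None
        self._issued_at = None

    def authorization_header(self):
        now = self._clock()
        expired = (
            self._token is None
            or now - self._issued_at > self.LIFETIME_SECONDS - self._margin
        )
        if expired:
            self._token = self._issue()
            self._issued_at = now
        return "Bearer " + self._token
```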
Sample requests
Use this URL to exchange a subscription key for an authentication token:
https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken .
curl -v -X POST \
"https://YOUR-REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
-H "Content-type: application/x-www-form-urlencoded" \
-H "Content-length: 0" \
-H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY"
After you get an authentication token, you'll need to pass it in each request as the Authorization header. This is
a sample call to the Translator service:
In the previous sections, we showed you how to authenticate against Azure Cognitive Services using either a
single-service or multi-service subscription key. While these keys provide a quick and easy path to start
development, they fall short in more complex scenarios that require Azure role-based access control (Azure
RBAC). Let's take a look at what's required to authenticate using Azure Active Directory (AAD).
In the following sections, you'll use either the Azure Cloud Shell environment or the Azure CLI to create a
subdomain, assign roles, and obtain a bearer token to call the Azure Cognitive Services. If you get stuck, links
are provided in each section with all available options for each command in Azure Cloud Shell/Azure CLI.
Create a resource with a custom subdomain
The first step is to create a custom subdomain. If you want to use an existing Cognitive Services resource that
does not have a custom subdomain name, follow the instructions in Cognitive Services Custom Subdomains to
enable a custom subdomain for your resource.
1. Start by opening the Azure Cloud Shell. Then select a subscription:
2. Next, create a Cognitive Services resource with a custom subdomain. The subdomain name needs to be
globally unique and cannot include special characters, such as: ".", "!", ",".
3. If successful, the Endpoint should show the subdomain name unique to your resource.
Assign a role to a service principal
Now that you have a custom subdomain associated with your resource, you're going to need to assign a role to
a service principal.
NOTE
Keep in mind that Azure role assignments may take up to five minutes to propagate.
NOTE
If you register an application in the Azure portal, this step is completed for you.
3. The last step is to assign the "Cognitive Services User" role to the service principal (scoped to the
resource). By assigning a role, you're granting service principal access to this resource. You can grant the
same service principal access to multiple resources in your subscription.
NOTE
The ObjectId of the service principal is used, not the ObjectId for the application. The ACCOUNT_ID will be the
Azure resource Id of the Cognitive Services account you created. You can find Azure resource Id from "properties"
of the resource in Azure portal.
Sample request
In this sample, a password is used to authenticate the service principal. The token provided is then used to call
the Computer Vision API.
1. Get your TenantId :
$context=Get-AzContext
$context.Tenant.Id
2. Get a token:
NOTE
If you're using Azure Cloud Shell, the SecureClientSecret class isn't available.
PowerShell
Azure Cloud Shell
$url = $account.Endpoint+"vision/v1.0/models"
$result = Invoke-RestMethod -Uri $url -Method Get -Headers
@{"Authorization"=$token.CreateAuthorizationHeader()} -Verbose
$result | ConvertTo-Json
Alternatively, the service principal can be authenticated with a certificate. Besides service principal, user principal
is also supported by having permissions delegated through another AAD application. In this case, instead of
passwords or certificates, users would be prompted for two-factor authentication when acquiring token.
See also
What is Cognitive Services?
Cognitive Services pricing
Custom subdomains
Multivariate time series Anomaly Detector best
practices
4/12/2021 • 6 minutes to read
This article will provide guidance around recommended practices to follow when using the multivariate
Anomaly Detector APIs.
Parameters
Sliding window
Multivariate anomaly detection takes a segment of data points of length slidingWindow as input and decides if
the next data point is an anomaly. The larger the sample length, the more data will be considered for a decision.
You should keep two things in mind when choosing a proper value for slidingWindow: the properties of your
input data, and the trade-off between training/inference time and potential performance improvement.
slidingWindow is an integer between 28 and 2880. You may decide how many data points are used as input based
on whether your data is periodic and on its sampling rate.
When your data is periodic, you may include 1 to 3 cycles as input; when your data is sampled at a high
frequency (small granularity), like minute-level or second-level data, you may select more data as input.
Note that longer inputs may cause longer training/inference time, and there is no guarantee that more input
points will lead to performance gains. Too few data points, on the other hand, may make it difficult for the
model to converge to an optimal solution. For example, it is hard to detect anomalies when the input data has
only two points.
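The guidance above can be turned into a simple heuristic. This is our own sketch, not an official formula: cover a few cycles of periodic data, fall back to a fixed default otherwise, and always clamp to the API's allowed range of 28 to 2880:

```python
def choose_sliding_window(period=None, cycles=2, default=300):
    """Pick a slidingWindow value from the series period (in data points).

    `period` is the number of points in one cycle of periodic data (None
    for non-periodic data); `cycles` is how many cycles to cover (capped
    at 3 per the guidance above); `default` is an assumed fallback size.
    """
    if period is None:
        candidate = default                      # non-periodic data
    else:
        candidate = period * max(1, min(cycles, 3))
    return max(28, min(candidate, 2880))         # clamp to API limits
```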
Align mode
The parameter alignMode is used to indicate how you want to align multiple time series on timestamps. This is
because many time series have missing values, and we need to align them on the same timestamps before
further processing. There are two options for this parameter: inner join and outer join. inner join means
we will report detection results on timestamps on which every time series has a value, while outer join
means we will report detection results on timestamps for which any time series has a value. The alignMode
also affects the input sequence of the model, so choose a suitable alignMode for your scenario
because the results might be significantly different.
Here we show an example to explain the different alignMode values.
Series1
TIMESTAMP | VALUE
2020-11-01 | 1
2020-11-02 | 2
2020-11-04 | 4
2020-11-05 | 5
Series2
TIMESTAMP | VALUE
2020-11-01 | 1
2020-11-02 | 2
2020-11-03 | 3
2020-11-04 | 4
Inner join of the two series:
TIMESTAMP | SERIES1 | SERIES2
2020-11-01 | 1 | 1
2020-11-02 | 2 | 2
2020-11-04 | 4 | 4
Outer join of the two series:
TIMESTAMP | SERIES1 | SERIES2
2020-11-01 | 1 | 1
2020-11-02 | 2 | 2
2020-11-03 | NA | 3
2020-11-04 | 4 | 4
2020-11-05 | 5 | NA
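The alignment shown in the tables above can be reproduced with plain dicts of timestamp to value, using "NA" for missing entries. The real API performs this alignment internally based on the alignMode parameter; this helper is only our illustration:

```python
def align(series_list, mode="Outer"):
    """Align multiple series on timestamps, as inner or outer join."""
    keys = [set(s) for s in series_list]
    if mode == "Inner":
        stamps = sorted(set.intersection(*keys))  # present in every series
    else:
        stamps = sorted(set.union(*keys))         # present in any series
    return [(t,) + tuple(s.get(t, "NA") for s in series_list)
            for t in stamps]

series1 = {"2020-11-01": 1, "2020-11-02": 2, "2020-11-04": 4, "2020-11-05": 5}
series2 = {"2020-11-01": 1, "2020-11-02": 2, "2020-11-03": 3, "2020-11-04": 4}
```

Calling align([series1, series2], "Inner") reproduces the three-row inner join; "Outer" reproduces the five-row table with the NA entries.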
Model analysis
Training latency
Multivariate Anomaly Detection training can be time-consuming, especially when you have a large quantity of
timestamps in the training data. Therefore, part of the training process is asynchronous. Typically, users
submit a training task through the Train Model API, and then get the model status through the Get Multivariate
Model API. Here we demonstrate how to extract the remaining time before training completes. In the Get
Multivariate Model API response, there is an item named diagnosticsInfo. In this item, there is a modelState
element. To calculate the remaining time, we need to use epochIds and latenciesInSeconds. An epoch represents
one complete cycle through the training data. Every 10 epochs, the service outputs status information. In total,
the model trains for 100 epochs; the latency indicates how long an epoch takes. With this information, we can
estimate the time remaining to train the model.
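The estimate described above can be computed from the modelState fields. The averaging heuristic and function name below are our own; epochIds, latenciesInSeconds, and the 100-epoch total come from the description above:

```python
def remaining_training_seconds(epoch_ids, latencies_in_seconds,
                               total_epochs=100):
    """Estimate seconds left in a training job from modelState data.

    `epoch_ids` and `latencies_in_seconds` are the lists reported in
    diagnosticsInfo.modelState of the Get Multivariate Model response
    (one entry per 10-epoch checkpoint). Returns None if no checkpoint
    has been reported yet.
    """
    if not epoch_ids or not latencies_in_seconds:
        return None
    latest_epoch = max(epoch_ids)
    avg_latency = sum(latencies_in_seconds) / len(latencies_in_seconds)
    return (total_epochs - latest_epoch) * avg_latency
```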
Model performance
Multivariate Anomaly Detection is an unsupervised model; the best way to evaluate it is to check the anomaly
results manually. The Get Multivariate Model response provides some basic information for analyzing model
performance. In the modelState element returned by the Get Multivariate Model API, we can use trainLosses
and validationLosses to evaluate whether the model has been trained as expected. In most cases, the two
losses will decrease gradually. Another piece of information for analyzing model performance is in
variableStates. The variable state list is ranked by filledNARatio in descending order. The larger the ratio,
the worse the performance; usually this NA ratio should be reduced as much as possible. NA values can be
caused by missing values or by unaligned variables from a timestamp perspective.
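The ranking described above can be reproduced client-side to surface the worst-filled variables first. The helper name is ours; the filledNARatio field name comes from the response described above:

```python
def rank_by_filled_na_ratio(variable_states):
    """Sort variableStates entries by filledNARatio, descending.

    `variable_states` is the list of per-variable dicts returned in the
    Get Multivariate Model response; the worst-filled variables (highest
    NA ratio) come first, mirroring the service's own ranking.
    """
    return sorted(variable_states,
                  key=lambda v: v["filledNARatio"], reverse=True)
```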
Next steps
Quickstarts.
Learn about the underlying algorithms that power Anomaly Detector Multivariate
Best practices for using the Anomaly Detector API
3/5/2021 • 4 minutes to read
The Anomaly Detector API is a stateless anomaly detection service. The accuracy and performance of its results
can be impacted by:
How your time series data is prepared.
The Anomaly Detector API parameters that were used.
The number of data points in your API request.
Use this article to learn about best practices for using the API to get the best results for your data.
Below is the same data set using batch anomaly detection. The model built for the operation has ignored several
anomalies, marked by rectangles.
Data preparation
The Anomaly Detector API accepts time series data formatted into a JSON request object. A time series can be
any numerical data recorded over time in sequential order. You can send windows of your time series data to the
Anomaly Detector API endpoint to improve the API's performance. The minimum number of data points you can
send is 12, and the maximum is 8640 points. Granularity is defined as the rate that your data is sampled at.
Data points sent to the Anomaly Detector API must have a valid Coordinated Universal Time (UTC) timestamp,
and a numerical value.
{
"granularity": "daily",
"series": [
{
"timestamp": "2018-03-01T00:00:00Z",
"value": 32858923
},
{
"timestamp": "2018-03-02T00:00:00Z",
"value": 29615278
},
]
}
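The size limits above (at least 12 points, at most 8640 per request) can be enforced client-side before sending windows of a long series. The windowing strategy below is our own illustration; the API itself simply rejects requests outside the limits:

```python
def validate_and_window(points, max_points=8640, min_points=12):
    """Split a series into request-sized windows within the API limits.

    Raises ValueError for series shorter than the 12-point minimum;
    otherwise returns successive slices of at most 8640 points each.
    """
    if len(points) < min_points:
        raise ValueError(
            "need at least {} data points".format(min_points))
    return [points[i:i + max_points]
            for i in range(0, len(points), max_points)]
```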
If your data is sampled at a non-standard time interval, you can specify it by adding the customInterval
attribute in your request. For example, if your series is sampled every 5 minutes, you can add the following to
your JSON request:
{
"granularity" : "minutely",
"customInterval" : 5
}
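A request body with the optional customInterval can be assembled as follows; the helper name is our own, and the field names match the JSON shown above:

```python
import json

def build_request_body(series, granularity, custom_interval=None):
    """Build the Anomaly Detector JSON request body.

    Adds customInterval only when the data uses a non-standard sampling
    rate, e.g. granularity "minutely" with custom_interval=5 for data
    sampled every 5 minutes.
    """
    body = {"granularity": granularity, "series": series}
    if custom_interval is not None:
        body["customInterval"] = custom_interval
    return json.dumps(body)
```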
Next steps
What is the Anomaly Detector API?
Quickstart: Detect anomalies in your time series data using the Anomaly Detector
Predictive maintenance solution with Anomaly
Detector multivariate
4/12/2021 • 2 minutes to read
Many different industries need predictive maintenance solutions to reduce risks and gain actionable insights
through processing data from their equipment. Predictive maintenance evaluates the condition of equipment by
performing online monitoring. The goal is to perform maintenance before the equipment degrades or breaks
down.
Monitoring the health status of equipment can be challenging, as each component inside the equipment can
generate dozens of signals, for example vibration, orientation, and rotation. This can be even more complex
when those signals have an implicit relationship, and need to be monitored and analyzed together. Defining
different rules for those signals and correlating them with each other manually can be costly. Anomaly
Detector's multivariate feature allows:
Multiple correlated signals to be monitored together, and the inter-correlations between them are accounted
for in the model.
In each captured anomaly, the contribution rank of different signals can help with anomaly explanation, and
incident root cause analysis.
The multivariate anomaly detection model is built in an unsupervised manner. Models can be trained
specifically for different types of equipment.
Here, we provide a reference architecture for a predictive maintenance solution based on Anomaly Detector
multivariate.
Reference architecture
In the above architecture, streaming events coming from sensor data will be stored in Azure Data Lake and then
processed by a data transforming module to be converted into a time-series format. Meanwhile, the streaming
event will trigger real-time detection with the trained model. In general, there will be a module to manage the
multivariate model life cycle, like Bridge Service in this architecture.
Model training : Before using Anomaly Detector multivariate to detect anomalies for a component or piece of
equipment, we need to train a model on the specific signals (time series) generated by that entity. The Bridge
Service fetches historical data, submits a training job to Anomaly Detector, and then keeps the model ID
in the model meta storage.
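The training step can be sketched as client code that prepares the request and records the model ID. The `v1.1-preview` path, the body fields (`source`, `startTime`, `endTime`, `slidingWindow`), and the Location-header behavior are assumptions here; verify them against the current Anomaly Detector API reference.

```python
import json

# Assumed preview API path; confirm against the current API reference.
API_VERSION = "anomalydetector/v1.1-preview"

def build_training_request(endpoint, sas_url, start_time, end_time, sliding_window=200):
    """Build the URL and JSON body for a multivariate model training call."""
    url = f"{endpoint}/{API_VERSION}/multivariate/models"
    body = {
        "source": sas_url,        # SAS URL of the blob container holding the signals
        "startTime": start_time,  # ISO 8601 timestamps bounding the training window
        "endTime": end_time,
        "slidingWindow": sliding_window,
    }
    return url, json.dumps(body)

def model_id_from_location(location_header):
    """The service returns the new model's URL in the Location header;
    the last path segment is the model ID to persist in model meta storage."""
    return location_header.rstrip("/").rsplit("/", 1)[-1]
```

The Bridge Service would POST this body with the Ocp-Apim-Subscription-Key header and store the parsed model ID in the model meta storage.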
Model validation : The training time of a model varies with the volume of training data. The Bridge
Service can query model status and diagnostic information on a regular basis. Validating model quality may be
necessary before putting it online. If labels are available in the scenario, they can be used to verify the
model quality. Otherwise, the diagnostic information can be used to evaluate model quality, and you can also
run detection on historical data with the trained model and evaluate the result to backtest the validity of the
model.
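The polling decision in this step can be reduced to a small helper. The status values (CREATED, RUNNING, READY, FAILED) and the diagnosticsInfo field names are assumptions modeled on the Get Multivariate Model response; check them against the API reference.

```python
# Assumed terminal statuses reported by the Get Multivariate Model call.
TERMINAL_STATUSES = {"READY", "FAILED"}

def should_keep_polling(model_info):
    """Return True while training is still in progress."""
    return model_info.get("status", "").upper() not in TERMINAL_STATUSES

def validation_summary(model_info):
    """Surface the fields used to judge model quality before go-live.
    The 'diagnosticsInfo'/'modelState' names are assumptions about the schema."""
    return {
        "status": model_info.get("status"),
        "modelState": model_info.get("diagnosticsInfo", {}).get("modelState"),
        "errors": model_info.get("errors", []),
    }
```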
Model inference : Online detection is performed with the validated model, and the result ID can be stored in
the inference table. Both the training process and the inference process run asynchronously. In
general, a detection task completes within seconds. The signals used for detection should be the same ones
that were used for training. For example, if vibration, orientation, and rotation were used for training, all
three signals must be included as input for detection.
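A simple pre-flight check can enforce the rule that detection input must include every signal used for training. This is an illustrative helper, not part of the API.

```python
def missing_detection_variables(trained_vars, detection_vars):
    """Return the training signals that are absent from the detection input,
    so a detection request can be fixed before it is submitted."""
    return sorted(set(trained_vars) - set(detection_vars))
```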
Incident alerting : Detection results can be queried with result IDs. Each result contains the severity of each
anomaly and a contribution rank. The contribution rank can be used to understand why an anomaly happened and
which signal caused the incident. Different thresholds can be set on the severity to generate alerts and
notifications that are sent to field engineers to conduct maintenance work.
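The alerting step can be sketched as a filter over the queried results. The result shape used below (isAnomaly, severity, and contributors with variable and contributionScore fields) is an assumption modeled on the multivariate response; verify the field names against the API reference.

```python
def alerts_from_results(results, severity_threshold=0.3):
    """Emit an alert for each anomaly whose severity crosses the threshold,
    naming the top-contributing signal for root cause analysis."""
    alerts = []
    for r in results:
        value = r.get("value", {})
        if value.get("isAnomaly") and value.get("severity", 0) >= severity_threshold:
            contributors = value.get("contributors", [])
            top = max(contributors,
                      key=lambda c: c.get("contributionScore", 0),
                      default=None)
            alerts.append({
                "timestamp": r.get("timestamp"),
                "severity": value["severity"],
                "topSignal": top.get("variable") if top else None,
            })
    return alerts
```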
Next steps
Quickstarts.
Best Practices: This article is about recommended patterns to use with the multivariate APIs.
Troubleshooting the multivariate API
4/12/2021 • 4 minutes to read
This article provides guidance on how to troubleshoot and remediate common HTTP error messages when
using the multivariate API.
Multivariate error codes
| Method | HTTP error code | Error message | Action to take |
| ------ | --------------- | ------------- | -------------- |
| Train a Multivariate Anomaly Detection Model | 400 | The 'source' field is required in the request. | The key word "source" has not been specified correctly. The format should be {"source": "<SAS URL>"}. |
| Train a Multivariate Anomaly Detection Model | 400 | The source field must be a valid sas blob url | The source field must be a valid blob container SAS URL. |
| Train a Multivariate Anomaly Detection Model | 401 | Unable to download blobs on the Azure Blob storage account. | The URL does not have the right permissions; the list flag is not set. Re-create the SAS URL and make sure both the read and list flags are checked (for example, using Storage Explorer). |
| Train a Multivariate Anomaly Detection Model | 413 | Unable to process the dataset. Number of variables exceed the limit (300). | The data in the blob container exceeds the current limit of 300 variables. Reduce the number of variables. |
| Detect Multivariate Anomaly | 404 | The model does not exist. | The model ID is invalid. Train a model before using it for detection. |
| Detect Multivariate Anomaly | 400 | The model is not ready yet. | The model is not ready yet. Call the Get Multivariate Model API to check the model status. |
| Detect Multivariate Anomaly | 400 | The 'source' field is required in the request. | The key word "source" has not been specified correctly. The format should be {"source": "<SAS URL>"}. |
| Detect Multivariate Anomaly | 400 | The source field must be a valid sas blob url | The source field must be a valid blob container SAS URL. |
| Detect Multivariate Anomaly | 400 | The corresponding file of the variable does not exist. | A variable was used during training but cannot be found when the model is used for detection. Add this variable and resubmit the detection request. |
| Detect Multivariate Anomaly | 413 | Unable to process the dataset. Number of variables exceed the limit (300). | The data in the blob container exceeds the current limit of 300 variables. Reduce the number of variables. |
| Get Multivariate Model | 404 | Model with 'id=<input model ID>' not found. | The ID is not a valid model ID. Use GET models to list all valid model IDs. |
| Get Multivariate Anomaly Detection Result | 404 | Result with 'id=<input result ID>' not found. | The ID is not a valid result ID. Resubmit your detection request. |
| Delete Multivariate Model | 404 | Location for model with 'id=<input model ID>' not found. | The ID is not a valid model ID. Use GET models to list all valid model IDs. |
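On the client side, these errors can be translated into the remediation actions above. The helper below is a hypothetical sketch, not part of any SDK.

```python
# Hypothetical client-side mapping from the HTTP error codes in the table
# above to their remediation hints.
REMEDIATION = {
    400: "Check the request body: 'source' must be a valid blob container SAS URL, "
         "and the model must be trained and ready.",
    401: "Re-create the SAS URL with both read and list permissions checked.",
    404: "Unknown model or result ID: list models with GET, or resubmit the "
         "detection request.",
    413: "Too many variables: reduce the dataset to at most 300 variables.",
}

def raise_for_multivariate_error(status_code, message=""):
    """Raise a RuntimeError that pairs the service message with a remediation hint."""
    if status_code < 400:
        return
    hint = REMEDIATION.get(status_code, "See the error table for remediation.")
    raise RuntimeError(f"HTTP {status_code}: {message} -> {hint}")
```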
Tutorial: Visualize anomalies using batch detection
and Power BI
3/26/2021 • 5 minutes to read
Use this tutorial to find anomalies within a time series data set as a batch. Using Power BI desktop, you will take
an Excel file, prepare the data for the Anomaly Detector API, and visualize statistical anomalies throughout it.
In this tutorial, you'll learn how to:
Use Power BI Desktop to import and transform a time series data set
Integrate Power BI Desktop with the Anomaly Detector API for batch anomaly detection
Visualize anomalies found within your data, including expected and seen values, and anomaly detection
boundaries.
Prerequisites
An Azure subscription
Microsoft Power BI Desktop, available for free.
An Excel file (.xlsx) containing time series data points. The example data for this quickstart can be found on
GitHub.
Once you have your Azure subscription, create an Anomaly Detector resource in the Azure portal to get your
key and endpoint.
You will need the key and endpoint from the resource you create to connect your application to the
Anomaly Detector API. You'll do this later in the quickstart.
NOTE
For best results when using the Anomaly Detector API, your JSON-formatted time series data should include:
data points separated by the same interval, with no more than 10% of the expected number of points missing.
at least 12 data points if your data doesn't have a clear seasonal pattern.
at least 4 pattern occurrences if your data does have a clear seasonal pattern.
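These requirements can be checked before sending data to the API. The validator below is an illustrative sketch, not part of the API; it takes (epoch-seconds, value) pairs and the expected interval.

```python
def check_series(points, interval_seconds, seasonal=False, pattern_occurrences=0):
    """Check a time series against the input guidance above: evenly spaced
    points with at most 10% missing, and a minimum length that depends on
    whether the data has a clear seasonal pattern. Returns a list of problems
    (empty means the series looks acceptable)."""
    timestamps = [t for t, _ in points]
    span = timestamps[-1] - timestamps[0]
    expected = span // interval_seconds + 1   # points a gap-free series would have
    missing = expected - len(points)
    problems = []
    if missing > 0.1 * expected:
        problems.append(f"{missing} of {expected} expected points are missing (>10%)")
    if seasonal:
        if pattern_occurrences < 4:
            problems.append("need at least 4 occurrences of the seasonal pattern")
    elif len(points) < 12:
        problems.append("need at least 12 data points without a clear seasonal pattern")
    return problems
```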
NOTE
Power BI can use data from a wide variety of sources, such as .csv files, SQL databases, Azure blob storage, and more.
In the main Power BI Desktop window, click the Home ribbon. In the External data group of the ribbon, open
the Get Data drop-down menu and click Excel .
After the dialog appears, navigate to the folder where you downloaded the example .xlsx file and select it. After
the Navigator dialog appears, click Sheet1 , and then Edit .
Power BI will convert the timestamps in the first column to a Date/Time data type. These timestamps must be
converted to text in order to be sent to the Anomaly Detector API. If the Power Query editor doesn't
automatically open, click Edit Queries on the home tab.
Click the Transform ribbon in the Power Query Editor. In the Any Column group, open the Data Type: drop-
down menu, and select Text .
When you get a notice about changing the column type, click Replace Current . Afterwards, click Close &
Apply or Apply in the Home ribbon.
Within the Advanced Editor, use the following Power Query M snippet to extract the columns from the table and
send it to the API. Afterwards, the query will create a table from the JSON response, and return it. Replace the
apiKey variable with your valid Anomaly Detector API key, and endpoint with your endpoint. After you've
entered the query into the Advanced Editor, click Done .
(table as table) => let
    // Replace apiKey and endpoint with your own values.
    apiKey = "[Placeholder: your Anomaly Detector API key]",
    endpoint = "[Placeholder: your Anomaly Detector endpoint]/anomalydetector/v1.0/timeseries/entire/detect",
    // The API expects text timestamps and numeric values.
    inputTable = Table.TransformColumnTypes(table, {{"Timestamp", type text}, {"Value", type number}}),
    jsontext = Text.FromBinary(Json.FromValue(inputTable)),
    // Adjust Granularity to match the interval of your data.
    jsondata = Text.Combine({"{""Granularity"": ""monthly"", ""Series"": ", jsontext, "}"}),
    bytesbody = Text.ToBinary(jsondata),
    headers = [#"Content-Type" = "application/json", #"Ocp-Apim-Subscription-Key" = apiKey],
    bytesresp = Web.Contents(endpoint, [Headers = headers, Content = bytesbody]),
    jsonresp = Json.Document(bytesresp),
    // Build a table from the input columns and the lists in the JSON response.
    respTable = Table.FromColumns({
        Table.Column(inputTable, "Timestamp"),
        Table.Column(inputTable, "Value"),
        Record.Field(jsonresp, "IsAnomaly") as list,
        Record.Field(jsonresp, "ExpectedValues") as list,
        Record.Field(jsonresp, "UpperMargins") as list,
        Record.Field(jsonresp, "LowerMargins") as list,
        Record.Field(jsonresp, "IsPositiveAnomaly") as list,
        Record.Field(jsonresp, "IsNegativeAnomaly") as list
    }, {"Timestamp", "Value", "IsAnomaly", "ExpectedValues", "UpperMargins",
        "LowerMargins", "IsPositiveAnomaly", "IsNegativeAnomaly"}),
    results = Table.TransformColumnTypes(respTable,
        {{"Timestamp", type datetime}, {"Value", type number}, {"IsAnomaly", type logical},
         {"IsPositiveAnomaly", type logical}, {"IsNegativeAnomaly", type logical},
         {"ExpectedValues", type number}, {"UpperMargins", type number},
         {"LowerMargins", type number}})
in
    results
Invoke the query on your data sheet by selecting Sheet1 below Enter Parameter , and clicking Invoke .
You may get a warning message when you attempt to run the query since it utilizes an external data source.
To fix this, click File , and Options and settings . Then click Options . Below Current File , select Privacy , and
Ignore the Privacy Levels and potentially improve performance .
Additionally, you may get a message asking you to specify how you want to connect to the API.
To fix this, click Edit Credentials in the message. After the dialog box appears, select Anonymous to
connect to the API anonymously. Then click Connect .
Afterwards, click Close & Apply in the Home ribbon to apply the changes.
Add the following fields from the Invoked Function to the chart's Values field. Use the below screenshot to
help build your chart.
Value
UpperMargins
LowerMargins
ExpectedValues
After adding the fields, click on the chart and resize it to show all of the data points. Your chart will look similar
to the below screenshot:
After clicking Ok , you will have a Value for True field, at the bottom of the list of your fields. Right-click it and
rename it to Anomaly . Add it to the chart's Values . Then select the Format tool, and set the X-axis type to
Categorical .
Apply colors to your chart by clicking on the Format tool and Data colors . Your chart should look something
like the following:
Next steps
Streaming anomaly detection with Azure Databricks
Azure Cognitive Services support and help options
3/20/2021 • 2 minutes to read
Are you just starting to explore the functionality of Azure Cognitive Services? Perhaps you are implementing a
new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here
are options for where you can get support, stay up-to-date, give feedback, and report bugs for Cognitive
Services.
Stay informed
Staying informed about features in a new release or news on the Azure blog can help you find the difference
between a programming error, a service bug, or a feature not yet available in Cognitive Services.
Learn more about product updates, roadmap, and announcements in Azure Updates.
See what Cognitive Services articles have recently been added or updated in What's new in docs?
News about Cognitive Services is shared in the Azure blog.
Join the conversation on Reddit about Cognitive Services.
Next steps
What are Azure Cognitive Services?
Featured User-generated content for the Anomaly
Detector API
3/5/2021 • 2 minutes to read
Use this article to discover how other customers are thinking about and using the Anomaly Detector API. The
following resources were created by the community of Anomaly Detector users. They include open-source
projects, and other contributions created by both Microsoft and third-party users. Some of the following links
are hosted on websites that are external to Microsoft, and Microsoft is not responsible for the content there. Use
discretion when you refer to these resources.
Technical blogs
Trying the Cognitive Service: Anomaly Detector API (in Japanese)
Open-source projects
Jupyter notebook demonstrating Anomaly Detection and streaming to Power BI
If you'd like to nominate a resource, fill out a short form. Contact AnomalyDetector@microsoft.com or raise an issue
on GitHub if you'd like us to remove the content.