This lab is meant for an Artificial Intelligence (AI) Engineer or an AI Developer on Azure. To
ensure you have time to work through the exercises, there are certain requirements to meet
before starting the labs for this course.
You should ideally have some previous exposure to Visual Studio. We will be using it for
everything we are building in the labs, so you should be familiar with how to use it
(https://docs.microsoft.com/en-us/visualstudio/ide/visual-studio-ide ) to create
applications. Additionally, this is not a class where we teach code or development. We
assume you have some familiarity with C# (intermediate level - you can learn here
(https://mva.microsoft.com/en-us/training-courses/c-fundamentals-for-absolute-
beginners-16169?l=Lvld4EQIC_2706218949 ) and here (https://docs.microsoft.com/en-
us/dotnet/csharp/quick-starts/)), but you do not know how to implement solutions with
Cognitive Services.
Account Setup
Note You can use different environments for completing this lab. Your instructor will guide you
through the necessary steps to get the environment up and running. This could be as simple as
using the computer you are logged into in the classroom or as complex as setting up a virtualized
environment. The labs were created and tested using the Azure Data Science Virtual Machine
(DSVM) on Azure and as such, will require an Azure account to use.
If you have been given an Azure Pass to complete this lab, you may go
to http://www.microsoftazurepass.com/ to activate it. Please follow the instructions
at https://www.microsoftazurepass.com/howto , which document the activation process. A
Microsoft account may have one free trial on Azure and one Azure Pass associated with it,
so if you have already activated an Azure Pass on your Microsoft account, you will need to
use the free trial or use another Microsoft account.
Environment Setup
These labs are intended to be used with the .NET Framework using Visual Studio
2019 running on a Microsoft Windows operating system. While there is a version of Visual
Studio for Mac OS, certain features in the sample code are not supported on the Mac OS
platform. As a result, there is a hosted lab option available using a virtual machine solution.
Your instructor will have the details on using the VM solution. The original workshop was
designed to be used, and was tested with, the Azure Data Science Virtual Machine (DSVM).
Only premium Azure subscriptions can create a DSVM resource on Azure, but the
labs can be completed with a local computer running Visual Studio 2019 and the required
software downloads listed throughout the lab steps.
Keys
Azure Setup
In the following steps, you will configure the Azure environment for the labs that follow.
Note You can create specific cognitive services resources or you can create a single
resource that contains all the endpoints.
8. Select Review + create
1. In the Azure Portal, click + Create a Resource and then enter storage in the search box
4. Click Create
9. Click Review + create
10. Click Create
16. Select OK
6. Type a unique account name such as ai100cosmosdbgo, select a location that matches your resource group
9. Select Review + create
10. Select Create
We will use the Bot Builder template for C# to create bots in this course.
2. Click Download
4. Ensure that all versions of Visual Studio are selected and click Install. If prompted, click End
Tasks.
5. Click Close. You should now have the bot templates added to your Visual Studio templates.
Bot Emulator
We will be developing a bot using the latest .NET SDK (v4). In order to do local testing, we'll
need to download the Bot Framework Emulator.
Credits
Labs in this series were adapted from the Cognitive Services
Tutorial (https://github.com/noodlefrenzy/CognitiveServicesTutorial) and the Learn AI
Bootcamp (https://github.com/Azure/LearnAI-Bootcamp).
Resources
To deepen your understanding of the architecture described here, and to involve your
broader team in the development of AI solutions, we recommend reviewing the following
resources:
We're going to build an end-to-end application that allows you to pull in your own pictures
and use Cognitive Services to obtain a caption and some tags for each image. In later labs, we
will build a Bot Framework bot using LUIS to allow easy, targeted querying of such images.
While there is a focus on Cognitive Services, you will also leverage Visual Studio 2019,
Community Edition.
Note if you have not already, follow the directions for creating your Azure account,
Cognitive Services, and getting your API keys in Lab1-
Technical_Requirements.md (../Lab1-Technical_Requirements/02-
Technical_Requirements.md).
We will build a simple C# application that allows you to ingest pictures from your local
drive, then invoke the Computer Vision API (https://www.microsoft.com/cognitive-
services/en-us/computer-vision-api ) to analyze the images and obtain tags and a
description.
In the continuation of this lab throughout the course, we'll show you how to query your
data, and then build a Bot Framework (https://dev.botframework.com/) bot to query it.
Finally, we'll extend this bot with LUIS (https://www.microsoft.com/cognitive-services/en-
us/language-understanding-intelligent-service-luis ) to automatically derive intent from
your queries and use those to direct your searches intelligently.
2. Under SDK 2.2.402, download and run the SDK's Windows Installer by selecting x64.
- **Starter**: A starter project, which you can use if you want to learn about
creating the code that is used in the project.
- **Finished**: A finished project that you will make use of to implement computer
vision and work with the images in this lab.
Each folder contains a solution (.sln) with several different projects for the lab; let's take
a high-level look at them.
Cognitive Services
Cognitive Services can be used to infuse your apps, websites and bots with algorithms to
see, hear, speak, understand, and interpret your user needs through natural methods of
communication.
There are five main categories for the available Cognitive Services:
You can browse all of the specific APIs in the Services Directory
(https://azure.microsoft.com/en-us/services/cognitive-services/directory/ ).
Let's talk about how we're going to call Cognitive Services in our application by reviewing
the sample code in the Finished project.
You should be able to pick up this portable class library and drop it in your other projects
that involve Cognitive Services (some modification will be required depending on which
Cognitive Services you want to use).
You can find additional service helpers for some of the other Cognitive Services within the
Intelligent Kiosk sample application (https://github.com/Microsoft/Cognitive-Samples-
IntelligentKiosk/tree/master/Kiosk/ServiceHelpers ). Utilizing these resources makes it
easy to add and remove the service helpers in your future projects as needed.
You can see that we're requesting the Caption and Tags for the images, as well as a
unique ImageId. ImageInsights pieces together only the information we want from the
Computer Vision API (or from multiple Cognitive Services, if we choose to call more than one).
Now let's take a step back for a minute. It isn't quite as simple as creating
the ImageInsights class and copying over some methods/error handling from service
helpers. We still have to call the API and process the images somewhere. For the purpose of
this lab, we are going to walk through ImageProcessor.cs to understand how it is being
used. In future projects, feel free to add this class to your PCL and start from there (it will
need modification depending what Cognitive Services you are calling and what you are
processing - images, text, voice, etc.).
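For reference, a minimal sketch of what such an ImageInsights class might look like (this is an assumption based on the description above; the class in the Finished project is authoritative):

```csharp
using System;

namespace ProcessingLibrary
{
    // Hypothetical sketch: holds only the pieces we want from the
    // Computer Vision API for a single image.
    public class ImageInsights
    {
        // Unique identifier for the image (for example, the file name).
        public string ImageId { get; set; }

        // The caption returned by the Computer Vision API.
        public string Caption { get; set; }

        // The tags returned by the Computer Vision API.
        public string[] Tags { get; set; } = Array.Empty<string>();
    }
}
```

Keeping the class this small is the point: it decouples the rest of the application from the raw API response shape.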
1. Navigate to ImageProcessor.cs within ProcessingLibrary.
Paste Content

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Vision;
using ServiceHelpers;
```
Paste Content

```csharp
// Call the Computer Vision service and store the results in imageAnalysisResult:
var imageAnalysisResult = await VisionServiceHelper.AnalyzeImageAsync(imageStreamCallback, DefaultVisualFeaturesList);

// Return results:
return result;
}
```
3. Next, we want to call the Cognitive Service (specifically Computer Vision) and put the results
in imageAnalysisResult.
4. We use the code below to call the Computer Vision API (with the help
of VisionServiceHelper.cs) and store the results in imageAnalysisResult. Near the bottom
of VisionServiceHelper.cs, review the methods available for you to call
(RunTaskWithAutoRetryOnQuotaLimitExceededError, DescribeAsync, AnalyzeImageAsync,
RecognizeTextAsync). You will use the AnalyzeImageAsync method to return the
visual features.
Paste Content
Now that we've called the Computer Vision service, we want to create an entry in
"ImageInsights" with only the following results: ImageId, Caption, and Tags (you can confirm
this by revisiting ImageInsights.cs).
Paste Content
So now we have the caption and tags that we need from the Computer Vision API, and each
image's result (with imageId) is stored in "ImageInsights".
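If you want to sanity-check your own version, a rough sketch of that mapping might look like the following (variable names such as imageAnalysisResult and imageId are assumptions based on the surrounding steps; the Finished project is the authoritative version):

```csharp
// Hypothetical sketch: package only the caption and tags from the
// Computer Vision analysis result into an ImageInsights entry.
var result = new ImageInsights { ImageId = imageId };

// Take the top caption, if the service returned one.
result.Caption = imageAnalysisResult.Description?.Captions?.FirstOrDefault()?.Text;

// Keep just the tag names.
result.Tags = imageAnalysisResult.Tags?.Select(t => t.Name).ToArray();
```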
6. Lastly, we need to close out the method by using the following line at the end of the method:
Paste Content

```csharp
return result;
```
7. In order to use this application, we need to build the project: press Ctrl-Shift-B, or select
the Build menu and choose Build Solution.
Credits
This lab was modified from this Cognitive Services Tutorial
(https://github.com/noodlefrenzy/CognitiveServicesTutorial ).
Resources
Computer Vision API (https://www.microsoft.com/cognitive-services/en-us/computer-
vision-api)
Bot Framework (https://dev.botframework.com/)
Services Directory (https://azure.microsoft.com/en-us/services/cognitive-
services/directory/)
Portable Class Library (PCL) (https://docs.microsoft.com/en-us/dotnet/standard/cross-
platform/cross-platform-development-with-the-portable-class-library)
Intelligent Kiosk sample application (https://github.com/Microsoft/Cognitive-Samples-
IntelligentKiosk/tree/master/Kiosk/ServiceHelpers)
Azure Cosmos DB is Microsoft's resilient NoSQL PaaS solution and is incredibly useful for
storing loosely structured data like we have with our image metadata results. There are
other possible choices (Azure Table Storage, SQL Server), but Cosmos DB gives us the
flexibility to evolve our schema freely (like adding data for new services), query it easily, and
can be quickly integrated into Azure Cognitive Search (which we'll do in a later lab).
4. Open a command prompt and navigate to the build directory for the TestCLI project. It should
be something like {GitHubDir}\Lab2-Implement_Computer_Vision\code\Finished\TestCLI.
Paste Content

```cmd
Usage:  [options]

Options:
  -force            Use to force update even if file has already been added.
  -settings         The settings file (optional, will use embedded resource settings.json if not set)
  -process          The directory to process
  -query            The query to run
  -? | -h | --help  Show help information
```
By default, it will load your settings from `settings.json` (it builds it into the
`.exe`), but you can provide your own using the `-settings` flag. To load images (and
their metadata from Cognitive Services) into your cloud storage, you can just tell
_TestCLI_ to `-process` your image directory as follows:
```cmd
dotnet run -- -process <%GitHubDir%>\Lab2-Implement_Computer_Vision\sample_images
```
> **Note** Replace the <%GitHubDir%> value with the folder where you cloned the
repository.
Once it's done processing, you can query against your Cosmos DB directly using
_TestCLI_ as follows:
```cmd
dotnet run -- -query "select * from images"
```
Take some time to look through the sample images (you can find them in
/sample_images) and compare the images to the results in your application.
> **Note** You can also browse the results in the CosmosDb resource in Azure. Open
the resource, then select **Data Explorer**. Expand the **metadata** database, then
select the **items** node. You will see several JSON documents that contain your
results.
Every new technology brings with it as many opportunities as questions, and AI-powered
technology has its own unique considerations. Be mindful of the following AI Ethics
principles when designing and implementing your AI tools:
Prerequisites
A bot created using the Microsoft Bot Framework can be hosted at any publicly-accessible
URL. For the purposes of this lab, we will register our bot using Azure Bot Service
(https://docs.microsoft.com/en-us/bot-framework/bot-service-overview-introduction ).
2. In the portal, navigate to your resource group, then click +Add and search for bot.
4. For the name, you'll have to create a unique identifier. We recommend using something along the
lines of PictureBot[i][n] where [i] is your initials and [n] is a number (e.g. mine would be
PictureBotamt6).
5. Select a Location
8. Select C#, then select Echo Bot; later we will update it to our PictureBot.
10. Configure a new App service plan (put it in the same location as your bot)
12. Do not change or click Auto create App ID and password; we will get to that later.
13. Click Create
14. When it's deployed, search for App registrations in the Search resources, services,
and docs search bar at the top of the page.
15. Select Applications from personal account and click the new Web App Bot Resource.
20. Click Add
21. Record the secret in Notepad or similar for later use in the lab(s).
22. Navigate to the Azure Web App Bot Resource you created, and record the application ID
in Notepad or similar for later use in the lab(s).
23. Navigate back to the web app bot resource, select the Test in Web Chat tab
24. Once it starts, explore what it is capable of doing. As you will see, it only echoes back your
message.
4. Click Next
Note If you do not see the Echo Bot template, you need to install the Visual Studio addins
from the pre-req steps.
6. Spend some time looking at all of the different things that are generated from the Echo Bot
template. We won't spend time explaining every single file, but we highly
recommend spending some time later working through and reviewing this sample (and the
other Web App Bot sample - Basic Bot), if you have not already. It contains important and useful
shells needed for bot development. You can find it and other useful shells and samples here
(https://github.com/Microsoft/BotBuilder-Samples).
7. Start by right-clicking on the Solution and selecting Build. This will restore the NuGet packages.
8. Open the appsettings.json file, update it by adding your bot service information you recorded
above:
Paste Content

```json
{
  "MicrosoftAppId": "YOURAPPID",
  "MicrosoftAppPassword": "YOURAPPSECRET"
}
```
11. Rename the class and then change all references to the class to PictureBot. You will know if you
missed one when you attempt to build the project.
13. Select the Browse tab and install the following packages, ensuring that you use
version 4.6.3. This is accomplished by single-clicking the entry in the left pane and then using the
drop-down in the right to select the correct version.
Microsoft.Bot.Builder.Azure
Microsoft.Bot.Builder.AI.Luis
Microsoft.Bot.Builder.Dialogs
Microsoft.Azure.Search (version 9.1.0 or later)
TIP: If you only have one monitor and you would like to easily switch between instructions
and Visual Studio, you can now add the instruction files to your Visual Studio solution by
right-clicking on the project in Solution Explorer and selecting Add > Existing Item.
Navigate to "Lab2," and add all the files of type "MD File."
So now that we've updated our base shell to support the naming and NuGet packages we'll
use throughout the rest of the labs, we're ready to start adding some custom code. First,
we'll just create a simple "Hello world" bot that helps you get warmed up to building bots
with the V4 SDK.
An important concept is the turn, used to describe a message to a user and a response from
the bot.
For example, if I say "Hello bot" and the bot responds "Hi, how are you?" that is one turn.
The image below shows how a turn passes through the multiple layers of a bot application.
1. Open the PictureBot.cs file.
2. Review the OnMessageActivityAsync method with the code below. This method is called every
turn of the conversation. You'll see later why that fact is important, but for now, remember that
OnMessageActivityAsync is called on every turn.
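If you don't have the project in front of you yet, the method you're reviewing looks roughly like this minimal echo implementation (the template-generated code may differ slightly; this fragment lives inside the bot class, which derives from ActivityHandler):

```csharp
protected override async Task OnMessageActivityAsync(
    ITurnContext<IMessageActivity> turnContext,
    CancellationToken cancellationToken)
{
    // Called on every message turn: echo the user's message back to them.
    await turnContext.SendActivityAsync(
        MessageFactory.Text($"Echo: {turnContext.Activity.Text}"),
        cancellationToken);
}
```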
3. Press F5 to start debugging.
Get stuck or broken? You can find the solution for the lab up to this point under
{GitHubPath}/code/Finished/PictureBot-Part0 (./code/Finished/PictureBot-Part0). The
readme file within the solution (once you open it) will tell you which keys you need to add in
order to run the solution.
Type hello. The bot will respond by echoing your message, similar to the Azure bot
we created earlier.
You can read more about using the Emulator here (https://docs.microsoft.com/en-
us/azure/bot-service/bot-service-debug-emulator?view=azure-bot-service-4.0 ).
Startup.cs is where we will add services/middleware and configure the HTTP request
pipeline. There are many comments within to help you understand what is
happening. Spend a few minutes reading through them.
Paste Content

```csharp
using System;
using System.Linq;
using System.Text.RegularExpressions;
using Microsoft.Bot.Builder.Integration;
using Microsoft.Bot.Configuration;
using Microsoft.Bot.Connector.Authentication;
using Microsoft.Extensions.Options;
using Microsoft.Extensions.Logging;
using PictureBotProj;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Bot.Builder.Dialogs;
```
We won't use all of the above namespaces just yet, but can you guess when we might?
If you're unfamiliar with dependency injection, you can read more about it
here (https://docs.microsoft.com/en-us/aspnet/web-
api/overview/advanced/dependency-injection ).
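If dependency injection is new to you, the core idea can be shown in a few lines of plain C# (a standalone illustration, not code from this lab): a class declares what it needs in its constructor, and something else, here us, in ASP.NET Core the container, supplies it.

```csharp
public interface IGreeter
{
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

// The bot doesn't construct its dependencies; they're handed in.
// ASP.NET Core's service container does this wiring at runtime.
public class GreetingBot
{
    private readonly IGreeter _greeter;

    public GreetingBot(IGreeter greeter) => _greeter = greeter;

    public string OnMessage(string user) => _greeter.Greet(user);
}
```

Because GreetingBot depends only on the IGreeter interface, a test can hand it a fake greeter without touching any real services.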
You can use local memory for this lab and testing purposes. For
production, you'll have to implement a way to manage state data
(https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-storage-
concept?view=azure-bot-service-4.0 ). In the big chunk of comments
within ConfigureServices, there are some tips for this.
At the bottom of the method, you may notice we create and register state
accessors. Managing state is key to creating intelligent bots, which you
can read more about here (https://docs.microsoft.com/en-us/azure/bot-
service/bot-builder-dialog-state?view=azure-bot-service-4.0 ).
Fortunately, this shell is pretty comprehensive, so we only have to add two items:
Middleware
Custom state accessors.
Middleware is simply a class or set of classes that sit between the adapter and your bot
logic, and are added to your adapter's middleware collection during initialization.
The SDK allows you to write your own middleware or add reusable components of
middleware created by others. Every activity coming in or out of your bot flows through
your middleware. We'll get deeper into this later in the lab, but for now, it's important to
understand that every activity flows through your middleware: the middleware is registered in
the ConfigureServices method, and it runs between every message being sent by a user
and OnMessageActivityAsync.
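To make that concrete, here is a rough sketch of a middleware class (a hypothetical example; the class you add to the Middleware folder in the next steps will differ):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

namespace PictureBot.Middleware
{
    // Illustrative middleware: runs on every activity, before and
    // after the rest of the pipeline (including OnMessageActivityAsync).
    public class LoggingMiddleware : IMiddleware
    {
        public async Task OnTurnAsync(ITurnContext turnContext,
            NextDelegate next,
            CancellationToken cancellationToken = default)
        {
            // Work done before the bot logic runs.
            System.Diagnostics.Debug.WriteLine($"Incoming: {turnContext.Activity.Type}");

            // Pass control to the next middleware, and ultimately the bot itself.
            await next(cancellationToken);

            // Work done after the bot logic has run.
            System.Diagnostics.Debug.WriteLine("Turn complete.");
        }
    }
}
```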
1. Add a new folder called Middleware by right-clicking the project name, in Solution Explorer,
and selecting Add and then New Folder
Paste Content

```csharp
services.AddTransient<IBot, PictureBot.Bots.PictureBot>();

services.AddBot<PictureBot.Bots.PictureBot>(options =>
{
    var appId = Configuration.GetSection("MicrosoftAppId")?.Value;
    var appSecret = Configuration.GetSection("MicrosoftAppPassword")?.Value;

    // Catches any errors that occur during a conversation turn and logs them.
    options.OnTurnError = async (context, exception) =>
    {
        logger.LogError($"Exception caught : {exception}");
        await context.SendActivityAsync("Sorry, it looks like something went wrong.");
    };

    // The Memory Storage used here is for local bot debugging only. When the bot
    // is restarted, everything stored in memory will be gone.
    IStorage dataStore = new MemoryStorage();

    options.State.Add(conversationState);
});
```
Paste Content

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    _loggerFactory = loggerFactory;

    app.UseDefaultFiles()
        .UseStaticFiles()
        .UseBotFramework();
    app.UseMvc();
}
```
Before we talk about the custom state accessors that we need, it's important to back up.
Dialogs, which we'll really get into in the next section, are an approach to implementing
multi-turn conversation logic, which means they'll need to rely on a persisted state to know
where in the conversation the users are. In our dialog-based bot, we'll use a DialogSet to
hold the various dialogs. The DialogSet is created with a handle to an object called an
"accessor".
For each accessor we create, we have to first give it a property name. For our scenario, we
want to keep track of a few things:
PictureState
DialogState
Paste Content

```csharp
var conversationState = options.State.OfType<ConversationState>().FirstOrDefault();
if (conversationState == null)
{
    throw new InvalidOperationException("ConversationState must be defined and added before adding conversation-scoped state accessors.");
}

// Create the custom state accessor.
// State accessors enable other components to read and write individual properties of state.
var accessors = new PictureBotAccessors(conversationState)
{
    PictureState = conversationState.CreateProperty<PictureState>(PictureBotAccessors.PictureStateName),
    DialogStateAccessor = conversationState.CreateProperty<DialogState>("DialogState"),
};

return accessors;
});
```
You should see errors (red squiggles) beneath some of the terms. Before fixing them,
you may be wondering why we had to create two accessors. Why wasn't one enough?
Don't worry about the dialog jargon yet, but the process should make sense. If you're
confused, you can dive deeper into how state works (https://docs.microsoft.com/en-
us/azure/bot-service/bot-builder-dialog-state?view=azure-bot-service-4.0 ).
Now back to the errors you're seeing. You've said you're going to store this information, but
you haven't yet specified where or how. We have to update "PictureState.cs" and
"PictureBotAccessors.cs" to hold and provide access to the information we want to store.
2. Right-click the project and select Add->Class, select a Class file and name it PictureState
```csharp
using System.Collections.Generic;

namespace Microsoft.PictureBot
{
    /// <summary>
    /// Stores state for the conversation.
    /// Stored in <see cref="Microsoft.Bot.Builder.ConversationState"/> and
    /// backed by <see cref="Microsoft.Bot.Builder.MemoryStorage"/>.
    /// </summary>
    public class PictureState
    {
        /// <summary>
        /// Gets or sets whether the user has been greeted, the current
        /// search term, and whether a search is in progress.
        /// </summary>
        public string Greeted { get; set; } = "not greeted";
        public string Search { get; set; } = "";
        public string Searching { get; set; } = "no";
    }
}
```
4. Review the code. This is where we'll store information about the active conversation. Feel free to
add some comments explaining the purposes of the strings. Now that you have PictureState
appropriately initialized, you can create the PictureBotAccessor, to remove the errors you were
getting in Startup.cs.
5. Right-click the project and select Add->Class, select a Class file and name
it PictureBotAccessors
6. Copy the following into it:
Paste Content

```csharp
using System;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

namespace Microsoft.PictureBot
{
    /// <summary>
    /// This class is created as a Singleton and passed into the IBot-derived constructor.
    /// - See <see cref="PictureBot"/> constructor for how that is injected.
    /// - See the Startup.cs file for more details on creating the Singleton that gets
    ///   injected into the constructor.
    /// </summary>
    public class PictureBotAccessors
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="PictureBotAccessors"/> class.
        /// Contains the <see cref="ConversationState"/> and associated <see cref="IStatePropertyAccessor{T}"/>.
        /// </summary>
        /// <param name="conversationState">The state object that stores the conversation state.</param>
        public PictureBotAccessors(ConversationState conversationState)
        {
            ConversationState = conversationState ?? throw new ArgumentNullException(nameof(conversationState));
        }

        /// <summary>
        /// Gets the <see cref="IStatePropertyAccessor{T}"/> name used for the <see cref="PictureState"/> accessor.
        /// </summary>
        /// <remarks>Accessors require a unique name.</remarks>
        /// <value>The accessor name for the picture state accessor.</value>
        public static string PictureStateName { get; } = $"{nameof(PictureBotAccessors)}.PictureState";

        /// <summary>
        /// Gets or sets the <see cref="IStatePropertyAccessor{T}"/> for PictureState.
        /// </summary>
        public IStatePropertyAccessor<PictureState> PictureState { get; set; }

        /// <summary>
        /// Gets or sets the <see cref="IStatePropertyAccessor{T}"/> for DialogState.
        /// </summary>
        public IStatePropertyAccessor<DialogState> DialogStateAccessor { get; set; }

        /// <summary>
        /// Gets the <see cref="ConversationState"/> object for the conversation.
        /// </summary>
        /// <value>The <see cref="ConversationState"/> object.</value>
        public ConversationState ConversationState { get; }
    }
}
```
8. Wondering if you configured it correctly? Return to Startup.cs and confirm your errors around
creating the custom state accessors have been resolved.
There are many different methods and preferences for developing bots. The SDK allows you
to organize your code in whatever way you want. In these labs, we'll organize our
conversations into different dialogs, and we'll explore an MVVM style
(https://msdn.microsoft.com/en-us/library/hh848246.aspx ) of organizing code around
conversations.
1. Create two new folders "Responses" and "Models" within your project. (Hint: You can right-
click on the project and select "Add->Folder").
You may already be familiar with Dialogs and how they work. If you aren't, read this page on
Dialogs (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-dialog-manage-
conversation-flow?view=azure-bot-service-4.0&tabs=csharp ) before continuing.
When a bot has the ability to perform multiple tasks, it is nice to be able to have multiple
dialogs, or a set of dialogs, to help users navigate through different conversation flows. For
our PictureBot, we want our users to be able to go through an initial menu flow, often
referred to as a main dialog, and then branch off to different dialogs depending what they
are trying to do - search pictures, share pictures, order pictures, or get help. We can do this
easily by using a dialog container or what's referred to here as a DialogSet. Read about
creating modular bot logic and complex dialogs (https://docs.microsoft.com/en-
us/azure/bot-service/bot-builder-compositcontrol?view=azure-bot-service-
4.0&tabs=csharp) before continuing.
For the purposes of this lab, we are going to keep things fairly simple, but after, you should
be able to create a dialog set with many dialogs. For our PictureBot, we'll have two main
dialogs:
MainDialog - The default dialog the bot starts out with. This dialog will start other
dialog(s) as the user requests them. This dialog, as it is the main dialog for the dialog
set, will be responsible for creating the dialog container and redirecting users to
other dialogs as needed.
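To give you a feel for what's coming, a waterfall dialog is just an ordered array of step methods, each run on a successive turn. A hedged sketch (step names like GreetingAsync and MainMenuAsync are illustrative, not the lab's exact code):

```csharp
// Illustrative only: each WaterfallStep runs in order, one per turn.
var main_waterfallsteps = new WaterfallStep[]
{
    GreetingAsync,   // greet the user on the first pass
    MainMenuAsync,   // route to search/share/order/help
};

// A step receives the step context and returns what the dialog does next.
private async Task<DialogTurnResult> GreetingAsync(
    WaterfallStepContext stepContext,
    CancellationToken cancellationToken)
{
    await MainResponses.ReplyWithGreeting(stepContext.Context);
    return await stepContext.NextAsync(cancellationToken: cancellationToken);
}
```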
Paste Content

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Extensions.Logging;
using System.Linq;
using PictureBot.Models;
using PictureBot.Responses;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Microsoft.PictureBot;
```
You've just added access to your Models/Responses, as well as to the services LUIS and
Azure Search. Finally, the Newtonsoft references will help you parse the responses from
LUIS, which we will see in a subsequent lab.
Next, we'll need to override the OnTurnAsync method with one that processes incoming
messages and then routes them through the various dialogs.
Paste Content

```csharp
/// <summary>
/// Represents a bot that processes incoming activities.
/// For each user interaction, an instance of this class is created and the OnTurnAsync method is called.
/// This is a Transient lifetime service. Transient lifetime services are created
/// each time they're requested. For each Activity received, a new instance of this
/// class is created. Objects that are expensive to construct, or have a lifetime
/// beyond the single turn, should be carefully managed.
/// For example, the <see cref="MemoryStorage"/> object and associated
/// <see cref="IStatePropertyAccessor{T}"/> object are created with a singleton lifetime.
/// </summary>
/// <seealso cref="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.1"/>
/// <summary>Contains the set of dialogs and prompts for the picture bot.</summary>
public class PictureBot : ActivityHandler
{
    private readonly PictureBotAccessors _accessors;
    // Initialize LUIS Recognizer

    /// <summary>
    /// Every conversation turn for our PictureBot will call this method.
    /// There are no dialogs used, since it's "single turn" processing, meaning a single
    /// request and response. Later, when we add Dialogs, we'll have to navigate through this method.
    /// </summary>
    /// <param name="turnContext">A <see cref="ITurnContext"/> containing all the data needed
    /// for processing this conversation turn. </param>
    /// <param name="cancellationToken">(Optional) A <see cref="CancellationToken"/> that can be used by other objects
    /// or threads to receive notice of cancellation.</param>
    /// <returns>A <see cref="Task"/> that represents the work queued to execute.</returns>
    /// <seealso cref="BotStateSet"/>
    /// <seealso cref="ConversationState"/>
    /// <seealso cref="IMiddleware"/>
    public override async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken))
    {
        if (turnContext.Activity.Type is "message")
        {
            // Establish dialog context from the conversation state.
            var dc = await _dialogs.CreateContextAsync(turnContext);
            // Continue any current dialog.
            var results = await dc.ContinueDialogAsync(cancellationToken);

            _logger = loggerFactory.CreateLogger<PictureBot>();
            _logger.LogTrace("PictureBot turn start.");
            _accessors = accessors ?? throw new System.ArgumentNullException(nameof(accessors));
        };

        // Add named dialogs to the DialogSet. These names are saved in the dialog state.
        _dialogs.Add(new WaterfallDialog("mainDialog", main_waterfallsteps));
        _dialogs.Add(new WaterfallDialog("searchDialog", search_waterfallsteps));
        // The following line allows us to use a prompt within the dialogs
        _dialogs.Add(new TextPrompt("searchPrompt"));
    }

    // Add MainDialog-related tasks
```
Spend some time reviewing and discussing this shell with a fellow workshop participant. You
should understand the purpose of each line before continuing.
We'll add some more to this in a bit. You can ignore any errors for now.
Lab 3: Creating a Basic Filtering Bot
Responses:
Before we fill out our dialogs, we need to have some responses ready. Remember, we're
keeping dialogs and responses separate because it results in cleaner code and makes the
logic of the dialogs easier to follow. If you don't agree or understand now, you will
soon.
Paste Content

```csharp
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

namespace PictureBot.Responses
{
    public class MainResponses
    {
        public static async Task ReplyWithGreeting(ITurnContext context)
        {
            // Add a greeting
        }
        public static async Task ReplyWithHelp(ITurnContext context)
        {
            await context.SendActivityAsync($"I can search for pictures, share pictures and order prints of pictures.");
        }
        public static async Task ReplyWithResumeTopic(ITurnContext context)
        {
            await context.SendActivityAsync($"What can I do for you?");
        }
        public static async Task ReplyWithConfused(ITurnContext context)
        {
            // Add a response for the user if Regex or LUIS doesn't know
            // what the user is trying to communicate
            await context.SendActivityAsync($"I'm sorry, I don't understand you.");
        }
        public static async Task ReplyWithLuisScore(ITurnContext context, string key, double score)
        {
            await context.SendActivityAsync($"Intent: {key} ({score}).");
        }
        public static async Task ReplyWithShareConfirmation(ITurnContext context)
        {
            await context.SendActivityAsync($"Posting your picture(s) on twitter...");
        }
        public static async Task ReplyWithSearchConfirmation(ITurnContext context)
        {
            await context.SendActivityAsync($"Searching your picture(s) on twitter...");
        }
        public static async Task ReplyWithOrderConfirmation(ITurnContext context)
        {
            await context.SendActivityAsync($"Ordering standard prints of your picture(s)...");
        }
    }
}
```
Note that there are two responses with no values (ReplyWithGreeting and
ReplyWithConfused). Fill these in as you see fit.
using Microsoft.Bot.Builder;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Schema;
namespace PictureBot.Responses
{
public class SearchResponses
{
// add a task called "ReplyWithSearchRequest"
// it should take in the context and ask the
// user what they want to search for
        public static async Task ReplyWithSearchConfirmation(ITurnContext context, string utterance)
        {
            await context.SendActivityAsync($"Ok, searching for pictures of {utterance}");
        }
        public static async Task ReplyWithNoResults(ITurnContext context, string utterance)
        {
            await context.SendActivityAsync($"There were no results found for \"{utterance}\".");
        }
}
}
4. Notice a whole task is missing. Fill it in as you see fit, but make sure the new task is named "ReplyWithSearchRequest", or you may have issues later.
Due to time limitations, we will not be walking through creating all the models. They are
straightforward, but we recommend taking some time to review the code within after you've
added them.
There are a number of things that we can do to improve our bot. First of all, we may not
want to call LUIS for a simple "search pictures" message, which the bot will get fairly
frequently from its users. A simple regular expression could match this, and save us time
(due to network latency) and money (due to cost of calling the LUIS service).
Also, as the complexity of our bot grows and we take the user's input through multiple services to interpret it, we need a process to manage that flow. For example: try regular expressions first; if they don't match, call LUIS; and perhaps then fall back to other services like QnA Maker (http://qnamaker.ai) or Azure Search. A great way to manage this is through Middleware (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-concept-middleware?view=azure-bot-service-4.0), and the SDK does a great job supporting that.
Before continuing with the lab, learn more about middleware and the Bot Framework SDK:
Overview and Architecture (https://docs.microsoft.com/en-us/azure/bot-
service/bot-builder-basics?view=azure-bot-service-4.0 )
Middleware (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-
concept-middleware?view=azure-bot-service-4.0 )
Creating Middleware (https://docs.microsoft.com/en-us/azure/bot-service/bot-
builder-create-middleware?view=azure-bot-service-4.0&tabs=csaddmiddleware
%2Ccsetagoverwrite%2Ccsmiddlewareshortcircuit%2Ccsfallback
%2Ccsactivityhandler)
Ultimately, we'll use some middleware to try to understand what users are saying with
regular expressions (Regex) first, and if we can't, we'll call LUIS. If we still can't, then we'll
drop down to a generic "I'm not sure what you mean" response, or whatever you put for
"ReplyWithConfused."
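The Regex-first, LUIS-second, confused-last flow is language-neutral, so it can be sketched outside the bot. Below is a minimal Python sketch of that fallback chain (the bot itself is C#); `regex_recognizer`, `luis_recognizer`, and the threshold are hypothetical stand-ins for this illustration, not the SDK's API:

```python
def understand(message, regex_recognizer, luis_recognizer, threshold=0.5):
    # 1. Cheap, local check first: regular expressions.
    intent = regex_recognizer(message)
    if intent is not None:
        return intent
    # 2. Only if Regex fails do we pay for a LUIS call.
    intent, score = luis_recognizer(message)
    if intent is not None and intent != "None" and score >= threshold:
        return intent
    # 3. Last resort: the generic "confused" reply.
    return "confused"

# Toy recognizers standing in for the middleware and the LUIS service.
regex = lambda m: "search" if m.startswith("search pic") else None
luis = lambda m: ("SharePic", 0.92) if "share" in m else (None, 0.0)

print(understand("search pics of dogs", regex, luis))  # handled by Regex, no LUIS call
print(understand("please share this", regex, luis))    # falls through to LUIS
print(understand("what's the weather", regex, luis))   # neither matches: "confused"
```

The point of the ordering is cost: the regular-expression check is free and local, so the paid, network-bound LUIS call only happens for messages Regex cannot classify.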
1. In Startup.cs, below the "Add Regex below" comment within ConfigureServices, add the
following:
middleware.Add(new RegExpRecognizerMiddleware()
    .AddIntent("search", new Regex("search picture(?:s)*(.*)|search pic(?:s)*(.*)", RegexOptions.IgnoreCase))
    .AddIntent("share", new Regex("share picture(?:s)*(.*)|share pic(?:s)*(.*)", RegexOptions.IgnoreCase))
    .AddIntent("order", new Regex("order picture(?:s)*(.*)|order print(?:s)*(.*)|order pic(?:s)*(.*)", RegexOptions.IgnoreCase))
    .AddIntent("help", new Regex("help(.*)", RegexOptions.IgnoreCase)));
We're really just skimming the surface of using regular expressions. If you're interested, you
can learn more here (https://docs.microsoft.com/en-us/dotnet/standard/base-
types/regular-expression-language-quick-reference ).
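If you want to see what these patterns actually match before wiring them into middleware, you can try them standalone. Here is a quick Python sketch using the same pattern strings (the observations in the comments are ours, not from the lab):

```python
import re

# The same intent patterns the middleware registers, tried outside the bot.
search = re.compile(r"search picture(?:s)*(.*)|search pic(?:s)*(.*)", re.IGNORECASE)
order = re.compile(r"order picture(?:s)*(.*)|order print(?:s)*(.*)|order pic(?:s)*(.*)", re.IGNORECASE)
help_ = re.compile(r"help(.*)", re.IGNORECASE)

m = search.match("Search pictures of dogs")
print(m.group(1))                                 # " of dogs": text left after the intent words
print(order.match("order prints") is not None)    # the second alternative matches
print(help_.match("HELP") is not None)            # IgnoreCase at work
print(search.match("share pics") is None)         # wrong intent, no match
```

Note that the capture group grabs whatever follows "search picture(s)"/"search pic(s)", which is why the middleware can pass the leftover text along as the search term.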
2. You may notice that the options.State has been deprecated. Let's migrate to the newest
method:
3. Replace it with
Paste Content
4. Also replace the Configure code to now pull from the dependency injection version:
Paste Content
5. Replace it with
Paste Content
var conversationState = services.BuildServiceProvider().GetService<ConversationState>();
if (conversationState == null)
{
    throw new InvalidOperationException("ConversationState must be defined and added before adding conversation-scoped state accessors.");
}
Without adding LUIS, our bot will only pick up on a few variations, but it should capture a good portion of messages if users are using the bot for searching, sharing, and ordering pictures.
Aside: One might argue that the user shouldn't have to type "help" to get a menu of clear
options on what the bot can do; rather, this should be the default experience on first
contact with the bot. Discoverability is one of the biggest challenges for bots - letting the
users know what the bot is capable of doing. Good bot design principles
(https://docs.microsoft.com/en-us/bot-framework/bot-design-principles ) can help.
Let's get down to business. We need to fill out MainDialog within PictureBot.cs so that our
bot can react to what users say they want to do. Based on our results from Regex, we need
to direct the conversation in the right direction. Read the code carefully to confirm you
understand what it's doing.
// If we haven't greeted a user yet, we want to do that first, but for the rest of the
// conversation we want to remember that we've already greeted them.
private async Task<DialogTurnResult> GreetingAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // Get the state for the current step in the conversation
    var state = await _accessors.PictureState.GetAsync(stepContext.Context, () => new PictureState());
3. Using the bot emulator, test the bot by sending some commands:
help
share pics
order pics
search pics
Note If you get a 500 error in your bot, you can place a break point in the Startup.cs file
inside the OnTurnError delegate method. The most common error is a mismatch of the
AppId and AppSecret.
4. If the only thing that didn't give you the expected result was "search pics", everything is working
how you configured it. "search pics" failing is the expected behavior at this point in the lab, but
why? Have an answer before you move on!
Hint: Use break points to trace matching to case "search", starting from PictureBot.cs.
Get stuck or broken? You can find the solution for the lab up until this point under
resources/code/Finished(./code/Finished). The readme file within the solution (once you
open it) will tell you what keys you need to add in order to run the solution. We recommend
using this as a reference, not as a solution to run, but if you choose to run it, be sure to add
the necessary keys for your environment.
Resources
Bot Builder Basics (https://docs.microsoft.com/en-us/azure/bot-service/bot-
builder-basics?view=azure-bot-service-4.0&tabs=cs)
.NET Bot Builder SDK tutorial (https://docs.microsoft.com/en-us/azure/bot-
service/dotnet/bot-builder-dotnet-sdk-quickstart?view=azure-bot-service-4.0)
Bot Service Documentation (https://docs.microsoft.com/en-us/azure/bot-
service/bot-service-overview-introduction?view=azure-bot-service-4.0)
Deploy your bot (https://docs.microsoft.com/en-us/azure/bot-service/bot-
builder-deploy-az-cli?view=azure-bot-service-4.0&tabs=newrg)
This workshop demonstrates how you can perform logging using Microsoft Bot Framework
and store aspects of chat conversations. After completing these labs, you should be able to:
Understand how to intercept and log message activities between bots and users
Log utterances to file storage
Prerequisites
This lab starts from the assumption that you have built and published the bot from Lab 3.
It is recommended that you do that lab in order to be able to implement logging as covered
in this lab. If you have not, reading carefully through all the exercises and looking at some of the code or using it in your own applications may be sufficient, depending on your needs.
In this lab, we'll look at some different ways that the Bot Framework allows us to intercept and log data from conversations the bot has with users. To start, we will use in-memory storage; this is good for testing purposes, but not ideal for production environments.
Afterwards, we'll look at a very simple implementation of how we can write data from conversations to files in Azure. Specifically, we'll put messages users send to the bot in a list, and store the list, along with a few other items, in a temporary file (though you could change this to a specific file path as needed).
Let's take a look at what information we can glean, for testing purposes, without adding anything to our bot.
1. Open your PictureBot.sln in Visual Studio.
3. Open the bot in the Bot Framework Emulator and have a quick conversation:
If you click on a message, you are able to see its associated JSON with the
"Inspector-JSON" tool on the right. Click on a message and inspect the JSON to see
what information you can obtain.
The "Log" in the bottom right-hand corner, contains a complete log of the
conversation. Let's dive into that a little deeper.
o The first thing you'll see is the port the Emulator is listening on
o You'll also see where ngrok is listening, and you can inspect the traffic to ngrok using the "ngrok traffic inspector" link. However, note that ngrok is bypassed when hitting local addresses; it is included here informationally, as remote testing is not covered in this workshop
o If there is an error in the call (anything other than a POST 200 or POST 201 response), you'll be able to click it and see a very detailed log in the "Inspector-JSON". Depending on the error, you may even get a stack trace through the code that attempts to point out where the error occurred. This is very useful when you're debugging your bot projects
o You can also see that there is a LUIS trace when we make calls out to LUIS. If you click on the trace link, you're able to see the LUIS information. You may notice that this is not set up in this particular lab
You can read more about testing, debugging, and logging with the emulator here
(https://docs.microsoft.com/en-us/azure/bot-service/bot-service-debug-emulator?
view=azure-bot-service-4.0).
Lab 4: Log Chats
Lab 4.1: Logging to Azure Storage
The default bot storage provider uses in-memory storage that gets disposed of when the
bot is restarted. This is good for testing purposes only. If you want to persist data but do
not want to hook your bot up to a database, you can use the Azure storage provider or
build your own provider using the SDK.
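To make the trade-off concrete, here is a toy Python sketch of the two storage behaviors (this mirrors the idea only, not the Bot Builder `IStorage` API): in-memory contents die with the process, while a file-backed store survives a "restart".

```python
import json
import os
import tempfile

class MemoryStorage:
    """Toy in-memory store: contents vanish when the object (the 'bot process') goes away."""
    def __init__(self):
        self._data = {}
    def write(self, key, value):
        self._data[key] = value
    def read(self, key):
        return self._data.get(key)

class FileStorage:
    """Toy persistent store: contents survive 'restarts' because they live on disk."""
    def __init__(self, path):
        self.path = path
    def _read_all(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)
    def write(self, key, value):
        data = self._read_all()
        data[key] = value
        with open(self.path, "w") as f:
            json.dump(data, f)
    def read(self, key):
        return self._read_all().get(key)

path = os.path.join(tempfile.mkdtemp(), "chatlog.json")
FileStorage(path).write("greeted", "yes")   # first "run" of the bot
print(FileStorage(path).read("greeted"))    # a new instance still sees the data
print(MemoryStorage().read("greeted"))      # a fresh in-memory store is empty
```

Azure Blob storage plays the role of `FileStorage` here, with the added benefits of durability and shared access across bot instances.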
1. Open the Startup.cs file. Since we want to use this process for every message, we'll use
the ConfigureServices method in our Startup class to add storing information to an Azure Blob
file. Notice that currently we're using:
Paste Content
As you can see, our current implementation is using in-memory storage. Again, this memory
storage is recommended for local bot debugging only. When the bot is restarted, anything
stored in memory will be gone.
var blobConnectionString = Configuration.GetSection("BlobStorageConnectionString")?.Value;
var blobContainer = Configuration.GetSection("BlobStorageContainer")?.Value;
IStorage dataStore = new Microsoft.Bot.Builder.Azure.AzureBlobStorage(blobConnectionString, blobContainer);
6. If you haven't already done so, click Access keys and record your connection string.
"BlobStorageConnectionString": "",
"BlobStorageContainer": "chatlog"
Note If you don't get a reply back, check your Azure Storage Account connection string
10. Switch to the Azure Portal, navigate to your blob storage account
12. Select the chat log file, it should start with emulator..., then select Edit. What do you see in the
files? What don't you see that you were expecting/hoping to see?
{"$type":"System.Collections.Concurrent.ConcurrentDictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Collections.Concurrent",
"DialogState":{"$type":"Microsoft.Bot.Builder.Dialogs.DialogState, Microsoft.Bot.Builder.Dialogs",
"DialogStack":{"$type":"System.Collections.Generic.List`1[[Microsoft.Bot.Builder.Dialogs.DialogInstance, Microsoft.Bot.Builder.Dialogs]], System.Private.CoreLib",
"$values":[{"$type":"Microsoft.Bot.Builder.Dialogs.DialogInstance, Microsoft.Bot.Builder.Dialogs","Id":"mainDialog",
"State":{"$type":"System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Private.CoreLib",
"options":null,
"values":{"$type":"System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Private.CoreLib"},
"instanceId":"f80db88d-cdea-4b47-a3f6-a5bfa26ed60b","stepIndex":0}}]}},
"PictureBotAccessors.PictureState":{"$type":"Microsoft.PictureBot.PictureState, PictureBot","Greeted":"greeted","Search":"","Searching":"no"}}
For the purposes of this lab, we are going to focus on the actual utterances that users are
sending to the bot. This could be useful to determine what types of conversations and
actions users are trying to complete with the bot.
1. Open PictureState.cs
Paste Content
In the above, we're simply creating a list where we'll store the messages that users send to the bot.
In this example we're choosing to use the state manager to read and write data, but you
could alternatively read and write directly from storage without using state manager
(https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-howto-v4-storage?
view=azure-bot-service-4.0&tabs=csharpechorproperty%2Ccsetagoverwrite%2Ccsetag ).
If you choose to write directly to storage, you could set up eTags depending on your
scenario. By setting the eTag property to *, you could allow other instances of the bot to
overwrite previously written data, meaning that the last writer wins. We won't get into it
here, but you can read more about managing concurrency (https://docs.microsoft.com/en-
us/azure/bot-service/bot-builder-howto-v4-storage?view=azure-bot-service-
4.0&tabs=csharpechorproperty%2Ccsetagoverwrite%2Ccsetag#manage-concurrency-using-
etags).
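The eTag mechanics can be sketched in a few lines of Python (this is a toy store to illustrate optimistic concurrency, not the SDK): a writer presenting a stale eTag is rejected, while eTag "*" means last writer wins.

```python
class ToyStore:
    """Toy key store illustrating optimistic concurrency with eTags."""
    def __init__(self):
        self.value, self.etag = None, 0

    def read(self):
        # Return the value together with the eTag the reader saw.
        return self.value, self.etag

    def write(self, value, etag):
        # "*" opts out of the check: last writer wins.
        if etag != "*" and etag != self.etag:
            raise RuntimeError("eTag mismatch: another writer got there first")
        self.value = value
        self.etag += 1  # every successful write produces a new eTag

store = ToyStore()
_, tag = store.read()
store.write("from bot instance A", tag)        # ok: tag is still current
try:
    store.write("from bot instance B", tag)    # stale tag: rejected
except RuntimeError as e:
    print("rejected:", e)
store.write("from bot instance B", "*")        # "*" overwrites regardless
print(store.read()[0])
```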
The final thing we have to do before we run the bot is add messages to our list with
our OnTurn action.
Paste Content
add:
Paste Content
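The pasted snippets aren't reproduced in this copy of the lab, but the idea is small enough to sketch. In Python terms (the `UtteranceList` name mirrors the lab's `PictureState`; everything else here is our illustration):

```python
class PictureState:
    """Toy mirror of the lab's PictureState: greeting flag plus an utterance log."""
    def __init__(self):
        self.greeted = "not greeted"
        self.utterance_list = []

def on_turn(state, activity_type, text):
    # Mirror of the OnTurn addition: record every message the user sends.
    if activity_type == "message":
        state.utterance_list.append(text)

state = PictureState()
on_turn(state, "message", "help")
on_turn(state, "conversationUpdate", None)  # non-message turns are not logged
print(state.utterance_list)                 # just the one user message
```

Because the list lives inside the state object, the state manager persists it to the blob store on each turn along with the rest of `PictureState`.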
5. Have another conversation with the bot. Stop the bot and check the latest blob persisted log file.
What do we have now?
{"$type":"System.Collections.Concurrent.ConcurrentDictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Collections.Concurrent",
"DialogState":{"$type":"Microsoft.Bot.Builder.Dialogs.DialogState, Microsoft.Bot.Builder.Dialogs",
"DialogStack":{"$type":"System.Collections.Generic.List`1[[Microsoft.Bot.Builder.Dialogs.DialogInstance, Microsoft.Bot.Builder.Dialogs]], System.Private.CoreLib",
"$values":[{"$type":"Microsoft.Bot.Builder.Dialogs.DialogInstance, Microsoft.Bot.Builder.Dialogs","Id":"mainDialog",
"State":{"$type":"System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Private.CoreLib",
"options":null,
"values":{"$type":"System.Collections.Generic.Dictionary`2[[System.String, System.Private.CoreLib],[System.Object, System.Private.CoreLib]], System.Private.CoreLib"},
"instanceId":"f80db88d-cdea-4b47-a3f6-a5bfa26ed60b","stepIndex":0}}]}},
"PictureBotAccessors.PictureState":{"$type":"Microsoft.PictureBot.PictureState, PictureBot","Greeted":"greeted",
"UtteranceList":{"$type":"System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]], System.Private.CoreLib","$values":["help"]},
"Search":"","Searching":"no"}}
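Since the persisted state is plain JSON, you can pull the utterances back out of a downloaded blob with a few lines. A Python sketch against a trimmed-down version of the log above (the real blob carries the extra `$type` metadata shown earlier):

```python
import json

# Trimmed-down version of the persisted state shown above.
blob = '''{"PictureBotAccessors.PictureState":
           {"Greeted": "greeted",
            "UtteranceList": {"$values": ["help"]},
            "Search": "", "Searching": "no"}}'''

state = json.loads(blob)["PictureBotAccessors.PictureState"]
print(state["UtteranceList"]["$values"])  # the logged user utterances
print(state["Greeted"])
```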
Get stuck or broken? You can find the solution for the lab up until this point under
/code/PictureBot-FinishedSolution-File(./code/PictureBot-FinishedSolution-File ). You will
need to insert the keys for your Azure Bot Service and your Azure Storage settings in
the appsettings.json file. We recommend using this code as a reference, not as a solution
to run, but if you choose to run it, be sure to add the necessary keys.
Going further
To incorporate database storage and testing into your logging solution, we recommend the following self-led tutorial that builds on this solution: Storing Data in Cosmos (https://github.com/Azure/LearnAI-Bootcamp/blob/master/lab02.5-logging_chat_conversations/3_Cosmos.md).
In this lab we will explore QnA Maker for creating bots that connect to a pre-trained knowledge base. Using QnA Maker, you can upload documents or point to web pages and have it pre-populate a knowledge base that can feed a simple bot for common uses such as frequently asked questions.
4. Select Create
5. Ensure your lab subscription and resource groups are selected
7. Select the Standard S0 tier for the resource pricing tier. We aren't using the free tier because
we will upload files that are larger than 1MB later.
8. Select the same location all our resources have been placed in and then for the search pricing tier,
select the Free F tier
9. The App name field should populate with the same value you used for Name in the previous steps. If not, enter an app name; it must be unique
11. Select Review + Create and then Create. This will create the following resources in your
resource group:
2. In the top right, click Sign in. Log in using the Azure credentials for your new QnA Maker resource
5. Select your Azure AD and Subscription tied to your QnA Maker resource, then select your newly
created QnA Maker resource
8. For the file, click Add file, browse to the code/Manage Azure Blob Storage file
Note You can find out more about the supported file types and data sources here
(https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/data-
sources-supported)
1. Review the knowledge base QnA pairs; you should see ~200 different pairs based on the two documents we fed it
3. On the publish page, click Publish. Once the service is published, click the Create Bot button on
the success page
5. On the bot service creation page, fix any naming errors, then click Create.
Note Recent change in Azure requires dashes ("-") be removed from some resource names
6. Once the bot resource is created, navigate to the new Web App Bot, then click Test in Web Chat
7. Ask the bot any questions related to a Surface Pro 4 or managing Azure Blob Storage:
3. Azure will build your source code. When complete, click Download Bot source code; if prompted, select Yes
6. Open the Startup.cs file, you will notice nothing special has been added here
7. Open the Bots/{BotName}.cs file and review the code that was generated for this file
8. Open the BotService.cs file and review the code that was generated for this file.
Note: The previous steps for reviewing the generated code are correct for the solution that was downloaded during the authoring of this lab. The solution files may change from time to time based on updates to the frameworks and SDKs, so your files may differ. The key point is simply to review the code that is automatically generated by the tool.
Going Further
What would you do to integrate the QnA Maker code into your picture bot?
Resources
What is the QnA Maker service? (https://docs.microsoft.com/en-us/azure/cognitive-
services/qnamaker/overview/overview)
In this lab, you will build, train and publish a LUIS model to help your bot (which will be
created in a future lab) communicate effectively with human users.
Note In this lab, we will only be creating the LUIS model that you will use in a future lab to
build a more intelligent bot.
LUIS functionality has already been covered in the workshop; for a refresher on LUIS, read more (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/Home).
Now that we know what LUIS is, we'll want to plan our LUIS app. We already created a basic
bot ("PictureBot") that responds to messages containing certain text. We will need to create
intents that trigger the different actions that our bot can do, and create entities that require
different actions. For example, an intent for our PictureBot may be "OrderPic" and it triggers
the bot to provide an appropriate response.
For example, in the case of Search (not implemented here), our PictureBot intent may be
"SearchPic" and it triggers Azure Search service to look for photos, which requires a "facet"
entity to know what to search for. You can see more examples for planning your app
here(https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/plan-your-app ).
Once we've thought out our app, we are ready to build and train
it(https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-get-started-
create-app).
As a review, these are the steps you will generally take when creating LUIS applications:
7. Publish (https://docs.microsoft.com/en-us/azure/cognitive-
services/LUIS/publishapp)
Lab 6: Implementing the LUIS model
Lab 6.0: Creating the LUIS service in the portal (optional)
Creating a LUIS service in the portal is optional, as LUIS provides you with a "starter key"
that you can use for the labs. However, if you want to see how to create a free or paid
service in the portal, you can follow the steps below.
Note If you ran the pre-req ARM Template, you will already have a cognitive services
resource that included the Language Understanding APIs.
2. Click Create a resource
4. Click Create
7. For the Authoring Resource location, select a location closest to you. Not all locations are
available for this resource.
8. For the pricing tier, select Free F0
9. For your Prediction Resource location, set the option that is the same as your resource
group location
Note The LUIS AI website does not allow you to control or publish your Azure-based cognitive services resources. You will need to call the APIs in order to train and publish them.
Let's look at how we can use LUIS to add some natural language capabilities. LUIS allows
you to map natural language utterances (words/phrases/sentences the user might say when
talking to the bot) to intents (tasks or actions the user wants to perform). For our
application, we might have several intents: finding pictures, sharing pictures, and ordering
prints of pictures, for example. We can give a few example utterances as ways to ask for
each of these things, and LUIS will map additional new utterances to each intent based on
what it has learned.
Warning: Though Azure services use IE as the default browser, we do not recommend it for
LUIS. You should be able to use Chrome or Firefox for all of the labs. Alternatively, you can
download either Microsoft Edge (https://www.microsoft.com/en-us/download/details.aspx?
id=48126) or Google Chrome (https://www.google.com/intl/en/chrome/ ).
2. Sign in using your Microsoft account. This should be the same account that you used to create
the LUIS key in the previous section.
3. Click Create a LUIS app now. You should be redirected to a list of your LUIS applications. If prompted, click Migrate later.
4. If this is your first time, you will be asked to agree to the service terms of use and select your country. In the next step, choose the recommended option and link your Azure account with LUIS. Finally, confirm your settings and you will be forwarded to the LUIS App page.
Note: Notice that there is also an "Import App" next to the "New App" button on the
current page (https://www.luis.ai/applications). After creating your LUIS application, you
have the ability to export the entire app as JSON and check it into source control. This is a
recommended best practice, so you can version your LUIS models as you version your code.
An exported LUIS app may be re-imported using that "Import App" button. If you fall
behind during the lab and want to cheat, you can click the "Import App" button and import
the LUIS model.
9. In the top navigation, select the BUILD link. Notice there is one intent called "None". Random
utterances that don't map to any of your intents may be mapped to "None".
Search/find pictures
Share pictures on social media
Order prints of pictures
Greet the user (although this can also be done other ways, as we will see later)
12. Give several examples of things the user might say when greeting the bot, pressing "Enter" after
each one.
Let's see how to create an entity. When the user requests to search the pictures, they may
specify what they are looking for. Let's capture that in an entity.
13. Click on Entities in the left-hand column and then click + Create.
16. Click Create.
17. Click Intents in the left-hand sidebar and then click the + Create button.
Just as we did for Greetings, let's add some sample utterances (words/phrases/sentences
the user might say when talking to the bot). People might search for pictures in many ways.
Feel free to use some of the utterances below, and add your own wording for how you
would ask a bot to search for pictures.
Once we have some utterances, we have to teach LUIS how to pick out the search topic as
the "facet" entity. Whatever the "facet" entity picks up is what will be searched.
19. Hover and click the word (or click consecutive words to select a group of words) and then select
the "facet" entity.
So your utterances may become something like this when facets are labeled:
Note This workshop does not include Azure search, however, this functionality has been left
in for the sake of demonstration.
20. Click Intents in the left sidebar and add two more intents:
21. Finally add some sample utterances to the "None" intent. This helps LUIS label when things are
outside the scope of your application. Add things like "I'm hungry for pizza", "Search videos", etc.
You should have about 10-15% of your app's utterances within the None intent.
TIP: Training is not always immediate. Sometimes it gets queued and can take several
minutes.
2. After training is finished, select Manage in the top toolbar. The following options will appear on
the left toolbar:
NOTE: The categories on the left pane may change as the portals are updated. As a result,
the keys and endpoints may fall under a different category than the one listed here.
Settings
Publish settings
Azure Resources
Versions
Collaborators
3. You should see a Prediction Resource and a Key resource already created. If you see
the Prediction Resource, advance to the next section on Publish the app.
5. Select your subscription, and the resource you created in the Azure portal earlier and then
select Done to connect the Language Understanding resource to the LUIS service.
Publish the app
Publishing creates an endpoint to call the LUIS model. The endpoint URL will be displayed.
Copy the endpoint URL and add it to your list of keys for future use.
8. In the top bar, select Test. Try typing a few utterances and see which intents are returned. Here
are some examples you can try:
To retrain the model for utterances with low scores, take the following steps:
10. Beside Top-scoring intent, select Assign to new intent and then choose SharePic from the drop-down list.
13. Test the Send to Tom utterance again. It should now return the SharePic intent with a higher
score.
Your LUIS app is now ready to be used by client applications, tested in a browser through
the listed endpoint, or integrated into a bot.
Going Further
If you still have time, spend time exploring the www.luis.ai site. Select "Prebuilt domains"
and see what is already available for you (https://docs.microsoft.com/en-
us/azure/cognitive-services/luis/luis-reference-prebuilt-domains ). You can also review
some of the other features (https://docs.microsoft.com/en-us/azure/cognitive-
services/luis/luis-concept-feature ) and patterns (https://docs.microsoft.com/en-
us/azure/cognitive-services/luis/luis-concept-patterns ), and check out the BotBuilder-
tools (https://github.com/Microsoft/botbuilder-tools ) for creating LUIS models, managing
LUIS models, simulating conversations, and more. Later, you may also be interested in
another course that includes how to design LUIS schema (https://aka.ms/daaia).
Extra Credit
If you wish to attempt to create a LUIS model including Azure Search, follow the training on
LUIS models including search (https://github.com/Azure/LearnAI-
Bootcamp/tree/master/lab01.5-luis ).
Lab 7: Integrate LUIS into Bot Dialogs
Introduction:
Our bot is now capable of taking in a user's input and responding based on the user's input.
Unfortunately, our bot's communication skills are brittle. One typo, or a rephrasing of words,
and the bot will not understand. This can cause frustration for the user. We can greatly
increase the bot's conversational abilities by enabling it to understand natural language with the LUIS model we built in Lab 6.
We will have to update our bot in order to use LUIS. We can do this by modifying
"Startup.cs" and "PictureBot.cs."
Prerequisites: This lab builds on Lab 3. It is recommended that you do that lab in order to
be able to implement logging as covered in this lab. If you have not, reading carefully
through all the exercises and looking at some of the code or using it in your own
applications may be sufficient, depending on your needs.
NOTE: If you intend to use the code in the Finished folder, you MUST replace the app
specific information with your own app IDs and endpoints.
Below:
services.AddSingleton((Func<IServiceProvider, PictureBotAccessors>)(sp =>
{
    .
    .
    .
    return accessors;
}));
Add:
Paste Content
3. Modify the appsettings.json to include the following properties; be sure to fill them in with your LUIS instance values:
Paste Content
"luisAppId": "",
"luisAppKey": "",
"luisEndPoint": ""
Note The LUIS endpoint URL for the .NET SDK should be something like https://{region}.api.cognitive.microsoft.com with no api or version after it.
1. Open PictureBot.cs. The first thing you'll need to do is initialize the LUIS recognizer, similar to
how you did for PictureBotAccessors. Below the commented line // Initialize LUIS
Recognizer, add the following:
Paste Content
2. Navigate to the PictureBot constructor:
Paste Content
Maybe you noticed that we had this commented out in previous labs, maybe you didn't. It was commented out because, up until now, we weren't calling LUIS, so a LUIS recognizer didn't need to be an input to PictureBot. Now that we are using the recognizer, it does. Again, this should look very similar to how we initialized the instance of _accessors.
4. In MainMenuAsync, we do want to start by trying Regex, so we'll leave most of that. However, if Regex doesn't find an intent, we want the default action to be different. That's when we want to call LUIS. Replace:
default:
{
await MainResponses.ReplyWithConfused(stepContext.Context);
return await stepContext.EndDialogAsync();
}
With:
default:
{
    // Call LUIS recognizer
    var result = await _recognizer.RecognizeAsync(stepContext.Context, cancellationToken);
    // Get the top intent from the results
    var topIntent = result?.GetTopScoringIntent();
    // Based on the intent, switch the conversation, similar concept as with Regex above
    switch ((topIntent != null) ? topIntent.Value.intent : null)
    {
        case null:
            // Add app logic when there is no result.
            await MainResponses.ReplyWithConfused(stepContext.Context);
            break;
        case "None":
            await MainResponses.ReplyWithConfused(stepContext.Context);
            // with each statement, we're adding the LuisScore, purely to test, so we know whether LUIS was called or not
            await MainResponses.ReplyWithLuisScore(stepContext.Context, topIntent.Value.intent, topIntent.Value.score);
            break;
        case "Greeting":
            await MainResponses.ReplyWithGreeting(stepContext.Context);
            await MainResponses.ReplyWithHelp(stepContext.Context);
            await MainResponses.ReplyWithLuisScore(stepContext.Context, topIntent.Value.intent, topIntent.Value.score);
            break;
        case "OrderPic":
            await MainResponses.ReplyWithOrderConfirmation(stepContext.Context);
            await MainResponses.ReplyWithLuisScore(stepContext.Context, topIntent.Value.intent, topIntent.Value.score);
            break;
        case "SharePic":
            await MainResponses.ReplyWithShareConfirmation(stepContext.Context);
            await MainResponses.ReplyWithLuisScore(stepContext.Context, topIntent.Value.intent, topIntent.Value.score);
            break;
        case "SearchPic":
            await MainResponses.ReplyWithSearchConfirmation(stepContext.Context);
            await MainResponses.ReplyWithLuisScore(stepContext.Context, topIntent.Value.intent, topIntent.Value.score);
            break;
        default:
            await MainResponses.ReplyWithConfused(stepContext.Context);
            break;
    }
    return await stepContext.EndDialogAsync();
}
Let's briefly walk through the new code. First, instead of responding that we don't
understand, we call LUIS via the LUIS recognizer and store the top intent in a variable. We
then use a switch statement to respond in different ways, depending on which intent is
picked up. This is almost identical to what we did with Regex.
Note If you named your intents in LUIS differently than instructed in the code
accompanying Lab 6, you will need to modify the case statements accordingly.
Another thing to note: after every response that called LUIS, we add the LUIS intent value
and score. This is just to show you when LUIS is being called, as opposed to Regex (you
would remove these responses from the final product, but they're a good indicator for us as
we test the bot).
2. Switch to your Bot Emulator. Try sending the bot different ways of searching for pictures. What
happens when you say "send me pictures of water" or "show me dog pics"? Try some other ways
of asking for, sharing, and ordering pictures.
If you have extra time, see if there are things LUIS isn't picking up on that you expected it to.
Maybe now is a good time to go to luis.ai, review your endpoint utterances
(https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/label-
suggested-utterances), and retrain/republish your model.
Fun Aside: Reviewing the endpoint utterances can be extremely powerful. LUIS makes smart
decisions about which utterances to surface. It chooses the ones that will help it improve the
most to have manually labeled by a human-in-the-loop. For example, if the LUIS model
predicted that a given utterance mapped to Intent1 with 47% confidence and predicted that
it mapped to Intent2 with 48% confidence, that is a strong candidate to surface to a human
to manually map, since the model is very close between two intents.
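The surfacing heuristic described in the aside can be sketched as a margin check between the top two intent confidences. Below is an illustrative Python sketch (not LUIS's actual implementation); the function name, threshold, and sample data are all hypothetical:

```python
# Margin-based uncertainty sampling: surface utterances whose top two
# intent confidences are close, since a human label helps the model most there.
def utterances_to_review(predictions, margin_threshold=0.05):
    """predictions: list of (utterance, {intent: confidence}) pairs.
    Returns utterances whose top-two confidences differ by <= margin_threshold."""
    to_review = []
    for utterance, scores in predictions:
        ranked = sorted(scores.values(), reverse=True)
        if len(ranked) >= 2 and (ranked[0] - ranked[1]) <= margin_threshold:
            to_review.append(utterance)
    return to_review

predictions = [
    ("show me dog pics", {"SearchPic": 0.92, "SharePic": 0.03}),   # confident
    ("send the water one", {"SharePic": 0.48, "OrderPic": 0.47}),  # ambiguous
]
print(utterances_to_review(predictions))  # → ['send the water one']
```

Here the second utterance is surfaced because the model is nearly split between two intents, matching the 47%/48% example above.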
Going Further
If you're having trouble customizing your LUIS implementation, review the documentation
guidance for adding LUIS to bots here (https://docs.microsoft.com/en-us/azure/bot-
service/bot-builder-howto-v4-luis?view=azure-bot-service-4.0&tabs=cs ).
Get stuck or broken? You can find the solution for the lab up until this point under
code/Finished (./code/Finished). You will need to insert the keys for your Azure Bot Service
in the appsettings.json file. We recommend using this code as a reference, not as a
solution to run, but if you choose to run it, be sure to add the necessary keys (in this section,
there shouldn't be any).
Extra Credit
If you wish to attempt to integrate a LUIS bot with Azure Search, building on the prior
supplementary LUIS model-with-search training (https://github.com/Azure/LearnAI-
Bootcamp/tree/master/lab01.5-luis ), work through the following trainings: Azure Search
(https://github.com/Azure/LearnAI-Bootcamp/tree/master/lab02.1-azure_search ) and
Azure Search Bots (https://github.com/Azure/LearnAI-Bootcamp/blob/master/lab02.2-
building_bots/2_Azure_Search.md ).
2. Navigate to your resource group and select the Cognitive Services resource that is generic (that
is, it contains all endpoints).
3. Under RESOURCE MANAGEMENT, select the Quick Start tab and record the URL and the
key for the Cognitive Services resource.
Lab 8 - Detect Language
Lab 8.2: Add language support to your bot
3. Click Browse
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics;
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics.Models;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime;
services.AddSingleton(sp =>
{
string cogsBaseUrl = Configuration.GetSection("cogsBaseUrl")?.Value;
string cogsKey = Configuration.GetSection("cogsKey")?.Value;
var credentials = new ApiKeyServiceClientCredentials(cogsKey);
TextAnalyticsClient client = new TextAnalyticsClient(credentials)
{
Endpoint = cogsBaseUrl
};
return client;
});
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics;
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics.Models;
_textAnalyticsClient = analyticsClient;
switch (result.DetectedLanguages[0].Name)
{
    case "English":
        break;
    default:
        // Reject non-English input with an error message
        await turnContext.SendActivityAsync($"I'm sorry, I can only understand English. [{result.DetectedLanguages[0].Name}]");
        return;
}
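Independently of the C# snippet above, the gating logic can be sketched as follows. This is a minimal Python sketch with a stubbed detector standing in for the Text Analytics client; the function names and stub behavior are hypothetical, not part of the lab code:

```python
# Stand-in for the Text Analytics language-detection call: pretend any
# pure-ASCII text is English. A real bot would call the Cognitive Services API.
def detect_language(text):
    return "English" if text.isascii() else "non-English"

# Same control flow as the bot's switch: pass English through, reject the rest
# and echo the detected language back to the user.
def handle_turn(text):
    language = detect_language(text)
    if language != "English":
        return f"I'm sorry, I can only understand English. [{language}]"
    return "OK, processing your request."

print(handle_turn("Hello"))   # English path
print(handle_turn("Привет"))  # rejected, with the detected language shown
```

The stub is deliberately crude; its only purpose is to show that the bot short-circuits the turn before any intent handling when the detected language is not English.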
13. Open the appsettings.json file and ensure that your cognitive services settings are entered:
"cogsBaseUrl": "",
"cogsKey" : ""
15. Using the Bot Emulator, send in a few phrases and see what happens:
Como Estes?
Bon Jour!
Привет
Hello
Going Further
Since we have already introduced you to LUIS in previous labs, think about what changes
you may need to make to support multiple languages using LUIS. Some helpful articles:
Resources
Example: Detect language with Text Analytics (https://docs.microsoft.com/en-
us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-
language-detection)
Quickstart: Text analytics client library for .NET (https://docs.microsoft.com/en-
us/azure/cognitive-services/text-analytics/quickstarts/csharp)
Communication directly with your bot may be required in some situations. For example, you
may want to perform functional tests with a hosted bot. Communication between your bot
and your own client application can be performed using the Direct Line API
(https://docs.microsoft.com/en-us/bot-framework/rest-api/bot-framework-rest-
direct-line-3-0-concepts).
Additionally, sometimes you may want to use other channels or customize your bot further.
In that case, you can use Direct Line to communicate with your custom bots.
Microsoft Bot Framework Direct Line bots are bots that can function with a custom client of
your own design. Direct Line bots are similar to normal bots. They just don't need to use the
provided channels. Direct Line clients can be written to be whatever you want them to be.
You can write an Android client, an iOS client, or even a console-based client application.
This hands-on lab introduces key concepts related to Direct Line API.
This lab assumes that you have built and published the bot from Lab 3. It is recommended
that you complete that lab in order to be successful in the ones that follow. If you have not,
reading carefully through all the exercises, and looking at some of the code or using it in
your own applications, may be sufficient, depending on your needs.
We'll also assume that you've completed Lab 4, but you should be able to complete these
labs without having completed the logging labs.
Over the course of the last few labs, we have collected various keys. You will need most of
them if you are using the starter project as a starting point.
Keys
Ensure that you update the appsettings.json file with all the necessary values
1. Open your PictureBot solution
4. If prompted, login using the account you have used through the labs
6. Expand the resource group and select the picture bot app service we created in Lab 3
7. Click OK
Note Depending on the path you took to get to this lab, you may need to publish a second
time; otherwise you may get the echo bot service when you test below. Republish the bot,
only this time change the publish settings to remove existing files.
Lab 9: Test Bots in DirectLine
Lab 9.2: Setting up the Direct Line channel
2. Select the Direct Line icon (looks like a globe). You should see a Default Site displayed.
3. For the Secret keys, click Show and store one of the secret keys in Notepad, or wherever you
are keeping track of your keys.
You can read detailed instructions on enabling the Direct Line channel
(https://docs.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-
directline?view=azure-bot-service-4.0 ) and secrets and tokens
(https://docs.microsoft.com/en-us/azure/bot-service/rest-api/bot-framework-rest-
direct-line-3-0-authentication?view=azure-bot-service-3.0#secrets-and-tokens ).
We'll create a console application to help us understand how Direct Line can allow us to
connect directly to a bot. The console client application we will create operates in two
threads. The primary thread accepts user input and sends messages to the bot. The
secondary thread polls the bot once per second to retrieve any messages from the bot, then
displays the messages received.
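The watermark-based polling that the secondary thread performs can be sketched independently of the Direct Line SDK. In the sketch below, fetch and the fake_fetch stub are hypothetical stand-ins for the SDK's activity retrieval; only the watermark mechanism itself mirrors Direct Line:

```python
# Watermark-based polling: each fetch returns only activities newer than the
# watermark, plus a new watermark to pass on the next poll.
def poll_activities(fetch, polls):
    """fetch(watermark) -> (activities, new_watermark); poll `polls` times."""
    received, watermark = [], None   # None = "from the start", as in Direct Line
    for _ in range(polls):
        activities, watermark = fetch(watermark)
        received.extend(activities)
    return received

# Stub bot: emits exactly one new message per poll, keyed by the watermark.
def fake_fetch(watermark):
    n = 0 if watermark is None else int(watermark)
    return ([f"message {n}"], str(n + 1))

print(poll_activities(fake_fetch, 3))  # → ['message 0', 'message 1', 'message 2']
```

The point of the watermark is that the client never re-receives messages it has already displayed, which is why the console sample threads the watermark through every poll.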
Note The instructions and code here have been modified from the best practices in the
documentation (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-howto-
direct-line?view=azure-bot-service-4.0&tabs=cscreatebot%2Ccsclientapp%2Ccsrunclient ).
1. If not already open, open your PictureBot solution in Visual Studio.
2. Right-click on the solution in Solution Explorer, then select Add > New Project.
5. Select Create.
Microsoft.Bot.Connector.DirectLine
Microsoft.Rest.ClientRuntime
8. Open Program.cs
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Connector.DirectLine;
using Newtonsoft.Json;
using Activity = Microsoft.Bot.Connector.DirectLine.Activity;

namespace PictureBotDL
{
    class Program
    {
        // ************
        // Replace the following values with your Direct Line secret and the name of your bot resource ID.
        // ************
        private static string directLineSecret = "YourDLSecret";
        private static string botId = "YourBotServiceName";

        /// <summary>
        /// Drives the user's conversation with the bot.
        /// </summary>
        /// <returns></returns>
        private static async Task StartBotConversation()
        {
            // Create a new Direct Line client.
            DirectLineClient client = new DirectLineClient(directLineSecret);
            // ...

        /// <summary>
        /// Polls the bot continuously and retrieves messages sent by the bot to the client.
        /// </summary>
        /// <param name="client">The Direct Line client.</param>
        /// <param name="conversationId">The conversation ID.</param>
        /// <returns></returns>
        private static async Task ReadBotMessagesAsync(DirectLineClient client, string conversationId)
        {
            string watermark = null;
            // ...

        /// <summary>
        /// Displays the hero card on the console.
        /// </summary>
        /// <param name="attachment">The attachment that contains the hero card.</param>
        private static void RenderHeroCard(Attachment attachment)
        {
            const int Width = 70;
            // Function to center a string between asterisks.
            Func<string, string> contentLine = (content) => string.Format($"{{0, -{Width}}}", string.Format("{0," + ((Width + content.Length) / 2).ToString() + "}", content));
            // ...
10. In Program.cs, update the Direct Line secret and bot ID to your specific values.
Spend some time reviewing this sample code. It's a good exercise to make sure you
understand how we're connecting to our PictureBot and getting responses.
13. Have a conversation with the bot using the command-line application.
Note If you do not get a response, it is likely you have an error in your bot. Test your bot
locally using the Bot Emulator, fix any issues, then republish it.
Quick quiz - how are we displaying the Conversation ID? We'll see in the next sections why
we might want this.
Lab 9: Test Bots in DirectLine
Lab 9.4: Using HTTP Get to retrieve messages
Because we have the conversation ID, we can retrieve user and bot messages using HTTP
GET. If you are familiar and experienced with REST clients, feel free to use your tool of choice.
In this lab, we will walk through using Postman (an API client) to retrieve messages.
Step 2
Using the Direct Line API, a client can send messages to your bot by issuing HTTP POST requests.
A client can receive messages from your bot either via a WebSocket stream or by issuing
HTTP GET requests. In this lab, we will explore the HTTP GET option to receive messages.
We'll be making a GET request. We also need to make sure the header contains our header
name (Authorization) and header value (Bearer YourDirectLineSecret). Additionally, we'll
make the call to our existing conversation in the console app by replacing {conversationId}
with our current conversation ID in our
request: https://directline.botframework.com/api/conversations/{conversationId}/messages.
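As a cross-check of what we'll configure in Postman, the same GET request can be assembled (but not sent) with Python's standard library. The secret and conversation ID below are placeholders; only the URL shape and the Authorization header mirror the lab:

```python
import urllib.request

# Placeholders: substitute your own Direct Line secret and conversation ID.
secret = "YourDirectLineSecret"
conversation_id = "abc123"
url = f"https://directline.botframework.com/api/conversations/{conversation_id}/messages"

# Build the request object without sending it, so we can inspect it.
req = urllib.request.Request(
    url,
    headers={"Authorization": f"Bearer {secret}"},
    method="GET",
)

print(req.get_method())                  # GET
print(req.get_header("Authorization"))   # Bearer YourDirectLineSecret
```

Sending the request (urllib.request.urlopen(req)) would return the conversation's messages as JSON, assuming a valid secret and an active conversation.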
2. Open Postman
3. For the type, ensure that GET is selected
7. Finally, select Send.
9. Create a new conversation with your console app and be sure to search for pictures.
10. Create a new request in Postman using the new conversation ID.
11. Inspect the response returned. You should find the image URL displayed within the images array of
the response.
Going Further
Have extra time? Can you leverage curl (download
link: https://curl.haxx.se/download.html ) from the terminal to retrieve conversations (like
you did with Postman)?
Hint: your command may look similar to curl -H "Authorization:Bearer {SecretKey}"
https://directline.botframework.com/api/conversations/{conversationId}/messages -XGET
Resources
Direct Line API (https://docs.microsoft.com/en-us/bot-framework/rest-api/bot-
framework-rest-direct-line-3-0-concepts)