
Lab 1: Meeting the Technical Requirements


Lab 1.1: Setting up technology, environments, and keys

This lab is meant for an Artificial Intelligence (AI) Engineer or an AI Developer on Azure. To
ensure you have time to work through the exercises, there are certain requirements to meet
before starting the labs for this course.

You should ideally have some previous exposure to Visual Studio. We will be using it for
everything we are building in the labs, so you should be familiar with how to use it
(https://docs.microsoft.com/en-us/visualstudio/ide/visual-studio-ide ) to create
applications. Additionally, this is not a class where we teach code or development. We
assume you have some familiarity with C# (intermediate level - you can learn here
(https://mva.microsoft.com/en-us/training-courses/c-fundamentals-for-absolute-
beginners-16169?l=Lvld4EQIC_2706218949 ) and here (https://docs.microsoft.com/en-
us/dotnet/csharp/quick-starts/)), but you do not know how to implement solutions with
Cognitive Services.

Account Setup
 Note You can use different environments for completing this lab. Your instructor will guide you
through the necessary steps to get the environment up and running. This could be as simple as
using the computer you are logged into in the classroom or as complex as setting up a virtualized
environment. The labs were created and tested using the Azure Data Science Virtual Machine
(DSVM) on Azure and as such, will require an Azure account to use.

Setup your Azure Account

You may activate an Azure free trial at https://azure.microsoft.com/en-us/free/ .

If you have been given an Azure Pass to complete this lab, you may go
to http://www.microsoftazurepass.com/ to activate it. Please follow the instructions
at https://www.microsoftazurepass.com/howto , which document the activation process. A
Microsoft account may have one free trial on Azure and one Azure Pass associated with it,
so if you have already activated an Azure Pass on your Microsoft account, you will need to
use the free trial or use another Microsoft account.

Environment Setup
These labs are intended to be used with the .NET Framework using Visual Studio
2019 running on a Microsoft Windows operating system. While there is a version of Visual
Studio for Mac OS, certain features in the sample code are not supported on the Mac OS
platform. As a result, there is a hosted lab option available using a virtual machine solution.
Your instructor will have the details on using the VM solution. The original workshop was
designed to be used with, and was tested against, the Azure Data Science Virtual Machine (DSVM).
Only premium Azure subscriptions can create a DSVM resource on Azure, but the
labs can be completed on a local computer running Visual Studio 2019 and the required
software downloads listed throughout the lab steps.

URLs and Keys Needed


Over the course of this lab, we will collect a variety of Cognitive Services keys and storage
keys. You should save all of them in a text file so you can easily access them in future labs.

Keys

 Cognitive Services API Url:


 Cognitive Services API key:
 LUIS API Endpoint:
 LUIS API Key:
 LUIS API App ID:
 Bot App Name:
 Bot App ID:
 Bot App Password:
 Azure Storage Connection String:
 Cosmos DB Url:
 Cosmos DB Key:
 DirectLine Key:

Note Not all of these will be populated in this lab.

Azure Setup
In the following steps, you will configure the Azure environment for the labs that follow.



Cognitive Services:

While the first lab focuses on the Computer Vision (https://www.microsoft.com/cognitive-services/en-us/computer-vision-api ) Cognitive Service, Microsoft Azure allows you to
create a cognitive service account that spans all services, or you can elect to create a
cognitive service account for an individual service. In the following steps, you will create an
account that spans all cognitive services.

1. Open the Azure Portal (https://portal.azure.com)

2. Click + Create a Resource and then enter cognitive services in the search box

3. Choose Cognitive Services from the available options, then click Create

Note You can create specific cognitive services resources or you can create a single
resource that contains all the endpoints.

4. Type a name of your own choosing

5. Select your subscription and resource group

6. For the pricing tier, select S0

7. Check the confirmation checkbox

8. Select Review + create

9. Once validation passes, select Create


10. Navigate to the new resource, under the Resource Management section in the left toolbar,
select Keys and Endpoints

11. Copy the API key and the URL of the endpoint to your notepad



Azure Storage Account:

1. In the Azure Portal, click + Create a Resource and then enter storage in the search
box

2. Choose Storage account from the available options, then click Create

3. Select your subscription and resource group

4. Type a unique name for your account

5. For the location, select the same as your resource group

6. Performance should be Standard

7. Account kind should be StorageV2 (general purpose v2)


8. For replication, select Locally-redundant storage (LRS)

9. Click Review + create

10. Click Create

11. Navigate to the new resource, click Access Keys

12. Copy the Connection string to your notepad

13. Select Overview, then select Containers

14. Select + Container

15. For the name, type images

16. Select OK



Cosmos DB:

1. Open the Azure Portal (https://portal.azure.com)


2. Click "+ Create a Resource" and then enter cosmos in the search box

3. Choose Azure Cosmos DB from the available options.

4. Click Create

5. Select your subscription and resource group

6. Type a unique account name such as ai100cosmosdbgo

7. Select a location that matches your resource group

8. Leave the remaining options set to their default values

9. Select Review + create

10. Select Create

11. Navigate to the new resource, under Settings, select Keys


12. Copy the URI and the PRIMARY KEY to your notepad



Bot Builder SDK:

We will use the Bot Builder template for C# to create bots in this course.

Download the Bot Builder SDK

1. Open a browser window to Bot Builder SDK v4 Template for C# here


(https://marketplace.visualstudio.com/items?itemName=BotBuilder.botbuilderv4)

2. Click Download

3. Navigate to the download folder location and double-click on the install

4. Ensure that all versions of Visual Studio are selected and click Install. If prompted, click End
Tasks.

5. Click Close. You should now have the bot templates added to your Visual Studio templates.

Bot Emulator
We will be developing a bot using the latest .NET SDK (v4). In order to do local testing, we'll
need to download the Bot Framework Emulator.

Download the Bot Framework Emulator


You can download the v4 Bot Framework Emulator for testing your bot locally. The
instructions for the rest of the labs will assume you've downloaded the v4 Emulator.

1. Download the emulator by going to this page
(https://github.com/Microsoft/BotFramework-Emulator/releases) and downloading the
most recent version of the emulator that has the tag "4.5.1" or higher (select the "*-windows-
setup.exe" file, if you are using Windows).

Note The emulator installs to "C:\Users\<your-username>\AppData\Local\Programs\@bfemulatormain\Bot Framework Emulator.exe", but you
can gain access to it through the Start menu by searching for bot framework.

Credits
Labs in this series were adapted from the Cognitive Services
Tutorial (https://github.com/noodlefrenzy/CognitiveServicesTutorial ) and the Learn AI
Bootcamp (https://github.com/Azure/LearnAI-Bootcamp ).

Resources

To deepen your understanding of the architecture described here, and to involve your
broader team in the development of AI solutions, we recommend reviewing the following
resources:

 Cognitive Services (https://www.microsoft.com/cognitive-services) - AI Engineer


 Cosmos DB (https://docs.microsoft.com/en-us/azure/cosmos-db/) - Data Engineer
 Azure Search (https://azure.microsoft.com/en-us/services/search/) - Search Engineer
 Bot Developer Portal (http://dev.botframework.com) - AI Engineer

Lab 2 - Implement Computer Vision


Introduction:

We're going to build an end-to-end application that allows you to pull in your own pictures
and use Cognitive Services to obtain a caption and some tags for the images. In later labs, we
will build a Bot Framework bot using LUIS to allow easy, targeted querying of such images.

Lab 2.0 Objectives


In this lab, you will:

 Learn about the various Cognitive Services APIs


 Understand how to configure your apps to call Cognitive Services
 Build an application that calls various Cognitive Services APIs (specifically Computer Vision) in
.NET applications

While there is a focus on Cognitive Services, you will also leverage Visual Studio 2019,
Community Edition.

Note If you have not already done so, follow the directions for creating your Azure account
and Cognitive Services, and for getting your API keys, in Lab1-
Technical_Requirements.md (../Lab1-Technical_Requirements/02-
Technical_Requirements.md).



Lab 2.1:  Architecture

We will build a simple C# application that allows you to ingest pictures from your local
drive, then invoke the Computer Vision API (https://www.microsoft.com/cognitive-
services/en-us/computer-vision-api ) to analyze the images and obtain tags and a
description.

In the continuation of this lab throughout the course, we'll show you how to query your
data, and then build a Bot Framework (https://dev.botframework.com/) bot to query it.
Finally, we'll extend this bot with LUIS (https://www.microsoft.com/cognitive-services/en-
us/language-understanding-intelligent-service-luis ) to automatically derive intent from
your queries and use those to direct your searches intelligently.

1. Open Edge and navigate to https://dotnet.microsoft.com/download/dotnet-core/2.2.

2. Under SDK 2.2.402, download and run the SDK's Windows Installer by selecting x64.



Lab 2.2:  Resources

There are some directories in the main (./Lab2-Implement_Computer_Vision) GitHub repo folder:

- **Starter**: A starter project, which you can use if you want to learn about creating the code that is used in the project.

- **Finished**: A finished project that you will make use of to implement computer vision and work with the images in this lab.

Each folder contains a solution (.sln) that has several different projects for the lab. Let's take
a high-level look at them.



Lab 2.3:  Image Processing

Cognitive Services

Cognitive Services can be used to infuse your apps, websites and bots with algorithms to
see, hear, speak, understand, and interpret your user needs through natural methods of
communication.

There are five main categories for the available Cognitive Services:

 Vision: Image-processing algorithms to identify, caption and moderate your pictures


 Knowledge: Map complex information and data in order to solve tasks such as intelligent
recommendations and semantic search
 Language: Allow your apps to process natural language with pre-built scripts, evaluate
sentiment and learn how to recognize what users want
 Speech: Convert spoken audio into text, use voice for verification, or add speaker recognition to
your app
 Search: Add Bing Search APIs to your apps and harness the ability to comb billions of webpages,
images, videos, and news with a single API call

You can browse all of the specific APIs in the Services Directory
(https://azure.microsoft.com/en-us/services/cognitive-services/directory/ ).
Let's talk about how we're going to call Cognitive Services in our application by reviewing
the sample code in the Finished project.

Image Processing Library

1. Open the code/Finished/ImageProcessing.sln  solution

Within your ImageProcessing solution you'll find the ProcessingLibrary project. It serves as a
wrapper around several services. This specific PCL contains some helper classes (in the
ServiceHelpers folder) for accessing the Computer Vision API and an "ImageInsights" class
to encapsulate the results.

You should be able to pick up this portable class library and drop it in your other projects
that involve Cognitive Services (some modification will be required depending on which
Cognitive Services you want to use).

ProcessingLibrary: Service Helpers


Service helpers can be used to make your life easier when you're developing your app. One
of the key things that service helpers do is provide the ability to detect when the API calls
return a call-rate-exceeded error and automatically retry the call (after some delay). They
also help with bringing in methods, handling exceptions, and handling the keys.

You can find additional service helpers for some of the other Cognitive Services within the
Intelligent Kiosk sample application (https://github.com/Microsoft/Cognitive-Samples-
IntelligentKiosk/tree/master/Kiosk/ServiceHelpers ). Utilizing these resources makes it
easy to add and remove the service helpers in your future projects as needed.
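
As a rough illustration of that retry behavior, here is a minimal, hypothetical sketch; the class and method names below are illustrative, not the actual helpers in ServiceHelpers:

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

public static class RetryHelperSketch
{
    // Hypothetical illustration: retry an async call when the service reports
    // HTTP 429 (call rate exceeded), waiting briefly before each retry.
    public static async Task<T> RunWithRetryAsync<T>(Func<Task<T>> action, int maxRetries = 3, int delayMs = 1000)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return await action();
            }
            catch (WebException ex) when (attempt < maxRetries &&
                (ex.Response as HttpWebResponse)?.StatusCode == (HttpStatusCode)429)
            {
                // Call rate exceeded - wait and then try the call again.
                await Task.Delay(delayMs);
            }
        }
    }
}
```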

ProcessingLibrary: The "ImageInsights" class

1. In the ProcessingLibrary project, navigate to the ImageInsights.cs file.

You can see that we're calling for Caption and Tags from the images, as well as a
unique ImageId. "ImageInsights" pieces together only the information we want from the
Computer Vision API (or from Cognitive Services, if we choose to call multiple).
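
Based on the three pieces of information referenced throughout this lab, ImageInsights is essentially a small data container along these lines (a sketch for orientation only; check ImageInsights.cs in the Finished solution for the exact definition, and note the namespace below is assumed):

```csharp
namespace ProcessingLibrary // namespace name assumed for this sketch
{
    // Sketch: the insights kept for each image - its unique id, the caption
    // returned by Computer Vision, and the tag names.
    public class ImageInsights
    {
        public string ImageId { get; set; }
        public string Caption { get; set; }
        public string[] Tags { get; set; }
    }
}
```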
Now let's take a step back for a minute. It isn't quite as simple as creating
the ImageInsights class and copying over some methods/error handling from the service
helpers. We still have to call the API and process the images somewhere. For the purpose of
this lab, we are going to walk through ImageProcessor.cs to understand how it is being
used. In future projects, feel free to add this class to your PCL and start from there (it will
need modification depending on what Cognitive Services you are calling and what you are
processing - images, text, voice, etc.).



Lab 2.4:  Review ImageProcessor.cs

1. Navigate to ImageProcessor.cs within ProcessingLibrary.

2. Note the following using directives (https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using-directive) at the top of the
class, above the namespace:


using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Vision;
using ServiceHelpers;

Project Oxford (https://blogs.technet.microsoft.com/machinelearning/tag/project-oxford/) was the project where many Cognitive Services got their start. As you can see, the
NuGet packages were even labeled under Project Oxford. In this scenario, we'll
call Microsoft.ProjectOxford.Vision for the Computer Vision API. Additionally, we'll
reference our service helpers (remember, these will make our lives easier). You'll have to
reference different packages depending on which Cognitive Services you're leveraging in
your application.
In ImageProcessor.cs we start by utilizing a method to process the
image, ProcessImageAsync. The code uses asynchronous processing because it relies on
external services to perform the actions.


public static async Task<ImageInsights> ProcessImageAsync(Func<Task<Stream>> imageStreamCallback, string imageId)
{
    // Set up an array that we'll fill in over the course of the processor:
    VisualFeature[] DefaultVisualFeaturesList = new VisualFeature[] { VisualFeature.Tags, VisualFeature.Description };

    // Call the Computer Vision service and store the results in imageAnalysisResult:
    var imageAnalysisResult = await VisionServiceHelper.AnalyzeImageAsync(imageStreamCallback, DefaultVisualFeaturesList);

    // Create an entry in ImageInsights:
    ImageInsights result = new ImageInsights
    {
        ImageId = imageId,
        Caption = imageAnalysisResult.Description.Captions[0].Text,
        Tags = imageAnalysisResult.Tags.Select(t => t.Name).ToArray()
    };

    // Return results:
    return result;
}

In the above code, we use Func<Task<Stream>> because we want to make sure we can
process the image multiple times (once for each service that needs it), so we have a Func
that can hand us back a way to get the stream. Since getting a stream is usually an async
operation, rather than the Func handing back the stream itself, it hands back a task that
allows us to do so in an async fashion.
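
For example, a caller could hand ProcessImageAsync a callback that opens the image file on demand, roughly like this (a sketch; the file path and image id are placeholders):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class ProcessImageExample
{
    public static async Task<ImageInsights> RunAsync()
    {
        // The callback opens a fresh stream each time it is invoked, so every
        // service that needs the image can read it from the beginning.
        Func<Task<Stream>> imageStreamCallback =
            () => Task.FromResult<Stream>(File.OpenRead(@"C:\images\sample.jpg"));

        return await ImageProcessor.ProcessImageAsync(imageStreamCallback, "sample.jpg");
    }
}
```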

In ImageProcessor.cs, within the ProcessImageAsync method, we're going to set up a static
array (https://stackoverflow.com/questions/4594850/definition-of-static-arrays ) that we'll
fill in throughout the processor. As you can see, these are the main attributes we want to
collect for ImageInsights.cs.

3. Next, we want to call the Cognitive Service (specifically Computer Vision) and put the results
in imageAnalysisResult.

4. We use the code below to call the Computer Vision API (with the help
of VisionServiceHelper.cs) and store the results in imageAnalysisResult. Near the bottom
of VisionServiceHelper.cs, you will want to review the available methods for you to call
(RunTaskWithAutoRetryOnQuotaLimitExceededError, DescribeAsync, AnalyzeImageAsync,
RecognizeTextAsync). You will use the AnalyzeImageAsync method in order to return the
visual features.


var imageAnalysisResult = await VisionServiceHelper.AnalyzeImageAsync(imageStreamCallback, DefaultVisualFeaturesList);

Now that we've called the Computer Vision service, we want to create an entry in
"ImageInsights" with only the following results: ImageId, Caption, and Tags (you can confirm
this by revisiting ImageInsights.cs).

5. The following code accomplishes this.


ImageInsights result = new ImageInsights
{
    ImageId = imageId,
    Caption = imageAnalysisResult.Description.Captions[0].Text,
    Tags = imageAnalysisResult.Tags.Select(t => t.Name).ToArray()
};

So now we have the caption and tags that we need from the Computer Vision API, and each
image's result (with imageId) is stored in "ImageInsights".
6. Lastly, we need to close out the method by using the following line at the end of the method:


return result;

7. In order to use this application, we need to build the project. Press Ctrl-Shift-B, or select
the Build menu and choose Build Solution.

8. Work with your instructor to fix any errors.

Credits
This lab was modified from this Cognitive Services Tutorial
(https://github.com/noodlefrenzy/CognitiveServicesTutorial ).

Resources
 Computer Vision API (https://www.microsoft.com/cognitive-services/en-us/computer-
vision-api)
 Bot Framework (https://dev.botframework.com/)
 Services Directory (https://azure.microsoft.com/en-us/services/cognitive-
services/directory/)
 Portable Class Library (PCL) (https://docs.microsoft.com/en-us/dotnet/standard/cross-
platform/cross-platform-development-with-the-portable-class-library)
 Intelligent Kiosk sample application (https://github.com/Microsoft/Cognitive-Samples-
IntelligentKiosk/tree/master/Kiosk/ServiceHelpers)



Exploring Cosmos DB:

Azure Cosmos DB is Microsoft's resilient NoSQL PaaS solution and is incredibly useful for
storing loosely structured data like we have with our image metadata results. There are
other possible choices (Azure Table Storage, SQL Server), but Cosmos DB gives us the
flexibility to evolve our schema freely (like adding data for new services), query it easily, and
can be quickly integrated into Azure Cognitive Search (which we'll do in a later lab).



Lab 2.5 (optional):  Understanding CosmosDBHelper
Cosmos DB is not a focus of this lab, but if you're interested in what's going on, here are
some highlights from the code we will be using.

1. Navigate to the CosmosDBHelper.cs class in the ImageStorageLibrary project. Review the


code and the comments. Many of the implementations used can be found in the Getting
Started guide (https://docs.microsoft.com/en-us/azure/cosmos-db/documentdb-
get-started).

2. Go to the TestCLI project's Util.cs file and review the ImageMetadata class (code and


comments). This is where we turn the ImageInsights we retrieve from Cognitive Services into
appropriate Metadata to be stored into Cosmos DB.

 Finally, look in Program.cs in TestCLI and at ProcessDirectoryAsync. First, we check if
the image and metadata have already been uploaded - we can use CosmosDBHelper to
find the document by ID and to return null if the document doesn't exist. Next, if
we've set forceUpdate or the image hasn't been processed before, we'll call the
Cognitive Services using ImageProcessor from the ProcessingLibrary and retrieve
the ImageInsights, which we add to our current ImageMetadata.
 Once all of that is complete, we can store our image - first the actual image into Blob
Storage using our BlobStorageHelper instance, and then the ImageMetadata into
Cosmos DB using our CosmosDBHelper instance. If the document already existed
(based on our previous check), we should update the existing document. Otherwise,
we should be creating a new one. A simplified sketch of this flow appears below.
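
The sketch below only illustrates the flow described above; it is not the exact code in Program.cs, and the helper method and property names used here (FindDocumentByIdAsync, AddInsights, UploadBlobAsync, CreateDocumentAsync, UpdateDocumentAsync) are placeholders rather than the real names in the TestCLI and ImageStorageLibrary projects.

```csharp
// Simplified, illustrative sketch of ProcessDirectoryAsync (not the exact sample code).
static async Task ProcessDirectorySketchAsync(string imageDirectory, bool forceUpdate,
    CosmosDBHelper cosmosDbHelper, BlobStorageHelper blobStorageHelper)
{
    foreach (var imageFile in Directory.GetFiles(imageDirectory))
    {
        var imageId = Path.GetFileName(imageFile);

        // 1. Check whether the image and metadata were already uploaded.
        var existing = await cosmosDbHelper.FindDocumentByIdAsync<ImageMetadata>(imageId);
        var metadata = existing ?? new ImageMetadata { Id = imageId };

        if (forceUpdate || existing == null)
        {
            // 2. Call Cognitive Services and add the insights to the metadata.
            var insights = await ImageProcessor.ProcessImageAsync(
                () => Task.FromResult<Stream>(File.OpenRead(imageFile)), imageId);
            metadata.AddInsights(insights);

            // 3. Store the image in Blob Storage, then the metadata in Cosmos DB.
            await blobStorageHelper.UploadBlobAsync(imageFile, imageId);
            if (existing == null)
                await cosmosDbHelper.CreateDocumentAsync(metadata);
            else
                await cosmosDbHelper.UpdateDocumentAsync(metadata);
        }
    }
}
```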



Lab 2.6:  Loading Images using TestCLI
We will implement the main processing and storage code as a command-line/console
application because this allows you to concentrate on the processing code without having to
worry about event loops, forms, or any other UX related distractions. Feel free to add your own
UX later.

1. In the TestCLI project, open the settings.json file

2. Add your specific environment settings from Lab1-Technical_Requirements.md

Note The URL for Cognitive Services should end with /vision/v1.0 for the Project Oxford APIs.
For example, https://westus2.api.cognitive.microsoft.com/vision/v1.0 .

3. If you have not already done so, compile the project

4. Open a command prompt and navigate to the build directory for the TestCLI project. It should
be something like {GitHubDir}\Lab2-Implement_Computer_Vision\code\Finished\TestCLI .

NOTE Do not navigate to the debug directory

NOTE .NET Core 2.2 is required; the installer can be found
here: https://dotnet.microsoft.com/download/dotnet-core/2.2 .

5. Run command dotnet run

```
Usage: [options]
Options:
-force Use to force update even if file has already been added.
-settings The settings file (optional, will use embedded resource
settings.json if not set)
-process The directory to process
-query The query to run
-? | -h | --help Show help information
```

By default, it will load your settings from `settings.json` (it builds it into the
`.exe`), but you can provide your own using the `-settings` flag. To load images (and
their metadata from Cognitive Services) into your cloud storage, you can just tell
_TestCLI_ to `-process` your image directory as follows:

```cmd
dotnet run -- -process <%GitHubDir%>\Lab2-Implement_Computer_Vision\sample_images
```

> **Note** Replace the <%GitHubDir%> value with the folder where you cloned the
repository.
Once it's done processing, you can query against your Cosmos DB directly using
_TestCLI_ as follows:

```cmd
dotnet run -- -query "select * from images"
```

Take some time to look through the sample images (you can find them in
/sample_images) and compare the images to the results in your application.

> **Note** You can also browse the results in the Cosmos DB resource in Azure. Open
the resource, then select **Data Explorer**. Expand the **metadata** database, then
select the **items** node. You will see several JSON documents that contain your
results.

Lab 3: Creating a Basic filtering bot


Introduction:

Every new technology brings with it as many opportunities as questions, and AI-powered
technology has its own unique considerations. Be mindful of the following AI Ethics
principles when designing and implementing your AI tools:

1. Fairness: Maximize efficiencies without destroying dignity

2. Accountability: AI must be accountable for its algorithm

3. Transparency: Guard against bias and destruction of human dignity


4. Ethical Application: AI must assist humanity and be designed for intelligent privacy

We encourage you to read more (https://ai-ethics.azurewebsites.net/ ) about the ethical
considerations when building intelligent apps.

Prerequisites

1. Follow the directions provided in Lab1-Technical_Requirements.md (../Lab1-Technical_Requirements/02-Technical_Requirements.md) to download the v4 Bot
Framework Emulator so you can test your bot locally.



Lab 3.0:  Create an Azure Web App Bot

A bot created using the Microsoft Bot Framework can be hosted at any publicly-accessible
URL. For the purposes of this lab, we will register our bot using Azure Bot Service
(https://docs.microsoft.com/en-us/bot-framework/bot-service-overview-introduction ).

1. Navigate to the Azure Portal (https://portal.azure.com).

2. In the portal, navigate to your resource group, then click +Add and search for bot.

3. Select Web App Bot, and click Create.

4. For the name, you'll have to create a unique identifier. We recommend using something along the
lines of PictureBot[i][n] where [i] is your initials and [n] is a number (e.g. mine would be
PictureBotamt6).
5. Select a Location

6. For pricing tier, select F0.

7. Select the Bot template area

8. Select C#, then select Echo Bot; later we will update it to our PictureBot.

9. Click OK, make sure that Echo Bot is displayed.

10. Configure a new App service plan (put it in the same location as your bot)

11. You can choose to turn Application Insights on or off.

12. Do not change or click on Auto create App ID and password, we will get to that later.

13. Click Create

14. When it's deployed, search for App registrations in the Search resources, services,
and docs search bar at the top of the page.
15. Select Applications from personal account and click the new Web App Bot Resource.

16. Under Manage, click Certificates & secrets

17. Click New client secret

18. For the name, type PictureBot

19. For the expires, select Never

20. Click Add

21. Record the secret into notepad or similar for later use in the lab(s).

22. Navigate to the Azure Web App Bot Resource you created, and record the application id into notepad
or similar for later use in the lab(s).

23. Navigate back to the web app bot resource, select the Test in Web Chat tab
24. Once it starts, explore what it is capable of doing. As you will see, it only echoes back your
message.



Lab 3.1:  Creating a simple bot and running it

1. Open Visual Studio 2019 or later

2. Select Create new project, search for Echo.

3. Scroll down until you see Echo Bot (Bot Framework v4)

4. Click Next

Note If you do not see the Echo Bot template, you need to install the Visual Studio add-ins
from the prerequisite steps.

5. For the name, type PictureBotProj, select Create

6. Spend some time looking at all of the different things that are generated from the Echo Bot
template. We won't spend time explaining every single file, but we highly
recommend spending some time later working through and reviewing this sample (and the
other Web App Bot sample - Basic Bot), if you have not already. It contains important and useful
shells needed for bot development. You can find it and other useful shells and samples here
(https://github.com/Microsoft/BotBuilder-Samples).
7. Start by right-clicking on the Solution and selecting Build. This will restore the nuget packages.

8. Open the appsettings.json file, update it by adding your bot service information you recorded
above:

Paste Content 

{
"MicrosoftAppId": "YOURAPPID",
"MicrosoftAppPassword": "YOURAPPSECRET"
}

9. As you may know, renaming a Visual Studio Solution/Project is a very sensitive


task. Carefully complete the following tasks so all the names reflect PictureBot instead of
EchoBot:

10. Right-click the Bots/Echobot.cs file, then select Rename, rename the class file


to PictureBot.cs

11. Rename the class and then change all references to the class to PictureBot. You will know if you
missed one when you attempt to build the project.

12. Right-click the project, select Manage NuGet Packages

13. Select the Browse tab, and install the following packages, ensure that you are using
version 4.6.3: This is accomplished by single-clicking the entry in the left pane and then using the
drop-down in the right to select the correct version.

 Microsoft.Bot.Builder.Azure
 Microsoft.Bot.Builder.AI.Luis
 Microsoft.Bot.Builder.Dialogs
 Microsoft.Azure.Search (version 9.1.0 or later)

14. Build the solution.

TIP: If you only have one monitor and you would like to easily switch between instructions
and Visual Studio, you can now add the instruction files to your Visual Studio solution by
right-clicking on the project in Solution Explorer and selecting Add > Existing Item.
Navigate to "Lab2," and add all the files of type "MD File."



Creating a Hello World bot:

So now that we've updated our base shell to support the naming and NuGet packages we'll
use throughout the rest of the labs, we're ready to start adding some custom code. First,
we'll just create a simple "Hello world" bot that helps you get warmed up to building bots
with the V4 SDK.

An important concept is the turn, used to describe a message to a user and a response from
the bot.

For example, if I say "Hello bot" and the bot responds "Hi, how are you?" that is one turn.
A turn passes through the multiple layers of a bot application before a response is sent back to the user.

1. Open the PictureBot.cs file.

2. Review the OnMessageActivityAsync method, shown below. This method is called on every
turn of the conversation. You'll see later why that fact is important, but for now, remember that
OnMessageActivityAsync is called on every turn.
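
If you created the project from the Echo Bot (Bot Framework v4) template, the method should look roughly like the following (minor details can differ between template versions):

```csharp
protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
{
    // Echo the user's text back on every turn.
    var replyText = $"Echo: {turnContext.Activity.Text}";
    await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);
}
```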
3. Press F5 to start debugging.

A few things to Note


 Your default.htm (under wwwroot) page will be displayed in a browser
 Note the localhost port number for the web page. This should (and must) match the
endpoint in your Emulator.

Get stuck or broken? You can find the solution for the lab up until this point under
{GitHubPath}/code/Finished/PictureBot-Part0(./code/Finished/PictureBot-Part0). The
readme file within the solution (once you open it) will tell you what keys you need to add in
order to run the solution.



Using the Bot Framework Emulator:

To interact with your bot:


 Launch the Bot Framework Emulator (note we are using the v4 Emulator). Click Start,
then search for Bot Emulator.

 On the welcome page, click Create a new bot configuration


 For the name, type PictureBot
 Enter the url that is displayed on your bot web page
 Enter the AppId and the App Secret you entered into the appsettings.json
Note If you did not enter id and secret values into the bot settings, you do not need
to enter them in the Bot Emulator either

 Click Save and connect, then save your .bot file locally


 You should now be able to converse with the bot.

 Type hello. The bot will respond by echoing your message, similar to the Azure bot
we created earlier.

Note You can select "Restart conversation" to clear the conversation history.

In the Log, you should see something similar to the following:

Note how it says we will bypass ngrok for local addresses. We will not be using ngrok in this
workshop; we would only need it if we were connecting to our published version of the bot,
which we would do via the 'production' endpoint. Open the 'production' endpoint and observe the
difference between bots in different environments. This can be a useful feature when you're
testing and comparing your development bot to your production bot.

You can read more about using the Emulator here (https://docs.microsoft.com/en-
us/azure/bot-service/bot-service-debug-emulator?view=azure-bot-service-4.0 ).

1. Browse around and examine the sample bot code. In particular:

 Startup.cs: is where we will add services/middleware and configure the HTTP request
pipeline. There are many comments within to help you understand what is
happening. Spend a few minutes reading through.

 PictureBot.cs: The OnMessageActivityAsync method is the entry point that waits for
a message from the user; it is where we can react to a message once received and wait
for further messages. We can use turnContext.SendActivityAsync to send a message
from the bot back to the user.



Lab 3.2:  Managing state and services

1. Navigate again to the Startup.cs file

2. Update the list of using statements by adding the following:


using System;
using System.Linq;
using System.Text.RegularExpressions;
using Microsoft.Bot.Builder.Integration;
using Microsoft.Bot.Configuration;
using Microsoft.Bot.Connector.Authentication;
using Microsoft.Extensions.Options;
using Microsoft.Extensions.Logging;
using PictureBotProj;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Bot.Builder.Dialogs;

We won't use all of the above namespaces just yet, but can you guess when we might?

3. In the Startup.cs class, focus your attention on the ConfigureServices method, which is used
to add services to the bot. Review the contents carefully, noting what is built in for you.

A few other notes for a deeper understanding:

 If you're unfamiliar with dependency injection, you can read more about it
here (https://docs.microsoft.com/en-us/aspnet/web-
api/overview/advanced/dependency-injection ).
 You can use local memory for this lab and testing purposes. For
production, you'll have to implement a way to manage state data
(https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-storage-
concept?view=azure-bot-service-4.0 ). In the big chunk of comments
within ConfigureServices, there are some tips for this.
 At the bottom of the method, you may notice we create and register state
accessors. Managing state is a key in creating intelligent bots, which you
can read more about here (https://docs.microsoft.com/en-us/azure/bot-
service/bot-builder-dialog-state?view=azure-bot-service-4.0 ).

Fortunately, this shell is pretty comprehensive, so we only have to add two items:

 Middleware
 Custom state accessors.



Middleware:

Middleware is simply a class or set of classes that sit between the adapter and your bot
logic, and are added to your adapter's middleware collection during initialization.
The SDK allows you to write your own middleware or add reusable components of
middleware created by others. Every activity coming in or out of your bot flows through
your middleware. We'll get deeper into this later in the lab, but for now, it's important to
understand that every activity flows through your middleware; the middleware is registered in
the ConfigureServices method and runs on every turn, in between a message being sent by
a user and OnMessageActivityAsync being called.
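
To make the idea concrete, a custom middleware component in the v4 SDK implements IMiddleware, roughly like this minimal sketch (the class name is illustrative; the middleware files you add in the next steps are already written for you):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

// Sketch of a middleware component: every activity passes through OnTurnAsync
// before (and after) the bot's own turn handler runs.
public class LoggingMiddlewareSketch : IMiddleware
{
    public async Task OnTurnAsync(ITurnContext turnContext, NextDelegate next,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        // Code here runs before the bot's OnMessageActivityAsync...
        await next(cancellationToken);
        // ...and code here runs after it.
    }
}
```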

1. Add a new folder called Middleware by right-clicking the project name, in Solution Explorer,
and selecting Add and then New Folder

2. Right-click on the Middleware folder and select Add>Existing Item.

3. Navigate to {GitHubDir}\Lab3-Basic_Filter_Bot\code\Middleware, select all three files,


and select Add

4. Add the following variables to your Startup.cs class file, directly under the `public class Startup {` line of code (line 32 in the code file):

private ILoggerFactory _loggerFactory;
private bool _isProduction = false;

5. Replace the following code in the public void ConfigureServices(IServiceCollection services) method:

services.AddTransient<IBot, PictureBot.Bots.PictureBot>();

with the following code:

services.AddBot<PictureBot.Bots.PictureBot>(options =>
{
    var appId = Configuration.GetSection("MicrosoftAppId")?.Value;
    var appSecret = Configuration.GetSection("MicrosoftAppPassword")?.Value;

    options.CredentialProvider = new SimpleCredentialProvider(appId, appSecret);

    // Creates a logger for the application to use.
    ILogger logger = _loggerFactory.CreateLogger<PictureBot.Bots.PictureBot>();

    // Catches any errors that occur during a conversation turn and logs them.
    options.OnTurnError = async (context, exception) =>
    {
        logger.LogError($"Exception caught : {exception}");
        await context.SendActivityAsync("Sorry, it looks like something went wrong.");
    };

    // The Memory Storage used here is for local bot debugging only. When the bot
    // is restarted, everything stored in memory will be gone.
    IStorage dataStore = new MemoryStorage();

    // For production bots use the Azure Blob or
    // Azure CosmosDB storage providers. For the Azure
    // based storage providers, add the Microsoft.Bot.Builder.Azure
    // Nuget package to your solution. That package is found at:
    // https://www.nuget.org/packages/Microsoft.Bot.Builder.Azure/
    // Uncomment the following lines to use Azure Blob Storage
    // // Storage configuration name or ID from the .bot file.
    // const string StorageConfigurationId = "<STORAGE-NAME-OR-ID-FROM-BOT-FILE>";
    // var blobConfig = botConfig.FindServiceByNameOrId(StorageConfigurationId);
    // if (!(blobConfig is BlobStorageService blobStorageConfig))
    // {
    //     throw new InvalidOperationException($"The .bot file does not contain an blob storage with name '{StorageConfigurationId}'.");
    // }
    // // Default container name.
    // const string DefaultBotContainer = "botstate";
    // var storageContainer = string.IsNullOrWhiteSpace(blobStorageConfig.Container) ? DefaultBotContainer : blobStorageConfig.Container;
    // IStorage dataStore = new Microsoft.Bot.Builder.Azure.AzureBlobStorage(blobStorageConfig.ConnectionString, storageContainer);

    // Create Conversation State object.
    // The Conversation State object is where we persist anything at the conversation-scope.
    var conversationState = new ConversationState(dataStore);

    options.State.Add(conversationState);

    var middleware = options.Middleware;

    // Add middleware below with "middleware.Add(...."
    // Add Regex below

});

6. Replace the public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) method with the following code:
public void Configure(IApplicationBuilder app, IHostingEnvironment env,
ILoggerFactory loggerFactory)
{
_loggerFactory = loggerFactory;

app.UseDefaultFiles()
.UseStaticFiles()
.UseBotFramework();

app.UseMvc();
}



Custom state accessors:

Before we talk about the custom state accessors that we need, it's important to back up.
Dialogs, which we'll really get into in the next section, are an approach to implementing
multi-turn conversation logic, which means they'll need to rely on a persisted state to know
where in the conversation the users are. In our dialog-based bot, we'll use a DialogSet to
hold the various dialogs. The DialogSet is created with a handle to an object called an
"accessor".

In the SDK, an accessor implements the IStatePropertyAccessor interface, which basically
means it provides the ability to get, set, and delete information regarding state, so we can
keep track of which step of a conversation a user is in.

For each accessor we create, we have to first give it a property name. For our scenario, we
want to keep track of a few things:

PictureState

 Have we greeted the user yet?


 We don't want to greet them more than once, but we want to make sure we greet them at the
beginning of a conversation.
 Is the user currently searching for a specific term? If so, what is it?
 We need to keep track of if the user has told us what they want to search for, and what it is they
want to search for if they have.

DialogState

 Is the user currently in the middle of a dialog?


 This is what we'll use to determine where a user is in a given dialog or conversation flow. If you
aren't familiar with dialogs, don't worry, we'll get to that soon.

We can use these constructs to keep track of what we'll call PictureState.

1. In the ConfigureServices method of the Startup.cs file, add the following block within the list of
custom state accessors. The line PictureState =
conversationState.CreateProperty<PictureState>(PictureBotAccessors.PictureStateName) creates the
custom accessor, and to keep track of the dialogs, you'll use the built-in DialogState:

// Create and register state accessors.
// Accessors created here are passed into the IBot-derived class on every turn.
services.AddSingleton<PictureBotAccessors>(sp =>
{
    var options = sp.GetRequiredService<IOptions<BotFrameworkOptions>>().Value;
    if (options == null)
    {
        throw new InvalidOperationException("BotFrameworkOptions must be configured prior to setting up the state accessors");
    }

    var conversationState = options.State.OfType<ConversationState>().FirstOrDefault();
    if (conversationState == null)
    {
        throw new InvalidOperationException("ConversationState must be defined and added before adding conversation-scoped state accessors.");
    }

    // Create the custom state accessor.
    // State accessors enable other components to read and write individual properties of state.
    var accessors = new PictureBotAccessors(conversationState)
    {
        PictureState = conversationState.CreateProperty<PictureState>(PictureBotAccessors.PictureStateName),
        DialogStateAccessor = conversationState.CreateProperty<DialogState>("DialogState"),
    };

    return accessors;
});

You should see an error (red squiggly) beneath some of the terms. But before fixing them,
you may be wondering why we had to create two accessors, why wasn't one enough?

 DialogState is a specific accessor that comes from


the Microsoft.Bot.Builder.Dialogs library. When a message is sent, the Dialog subsystem
will call CreateContext on the DialogSet. Keeping track of this context requires
the DialogState accessor specifically to get the appropriate dialog state JSON.
 On the other hand, PictureState will be used to track particular conversation properties that we
specify throughout the conversation (e.g. Have we greeted the user yet?)

Don't worry about the dialog jargon yet, but the process should make sense. If you're
confused, you can dive deeper into how state works (https://docs.microsoft.com/en-
us/azure/bot-service/bot-builder-dialog-state?view=azure-bot-service-4.0 ).

Now back to the errors you're seeing. You've said you're going to store this information, but
you haven't yet specified where or how. We have to update "PictureState.cs" and
"PictureBotAccessor.cs" to have and access the information we want to store.

2. Right-click the project and select Add->Class, select a Class file and name it PictureState

3. Copy the following code into PictureState.cs.



using System.Collections.Generic;

namespace Microsoft.PictureBot
{
/// <summary>
/// Stores counter state for the conversation.
/// Stored in <see cref="Microsoft.Bot.Builder.ConversationState"/> and
/// backed by <see cref="Microsoft.Bot.Builder.MemoryStorage"/>.
/// </summary>
public class PictureState
{
/// <summary>
/// Gets or sets the number of turns in the conversation.
/// </summary>
/// <value>The number of turns in the conversation.</value>
public string Greeted { get; set; } = "not greeted";
public string Search { get; set; } = "";
public string Searching { get; set; } = "no";
}
}

4. Review the code. This is where we'll store information about the active conversation. Feel free to
add some comments explaining the purposes of the strings. Now that you have PictureState
appropriately initialized, you can create the PictureBotAccessor, to remove the errors you were
getting in Startup.cs.

5. Right-click the project and select Add->Class, select a Class file and name
it PictureBotAccessors
6. Copy the following into it:


using System;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

namespace Microsoft.PictureBot
{
    /// <summary>
    /// This class is created as a Singleton and passed into the IBot-derived constructor.
    /// - See <see cref="PictureBot"/> constructor for how that is injected.
    /// - See the Startup.cs file for more details on creating the Singleton that gets
    ///   injected into the constructor.
    /// </summary>
    public class PictureBotAccessors
    {
        /// <summary>
        /// Initializes a new instance of the <see cref="PictureBotAccessors"/> class.
        /// Contains the <see cref="ConversationState"/> and associated <see cref="IStatePropertyAccessor{T}"/>.
        /// </summary>
        /// <param name="conversationState">The state object that stores the counter.</param>
        public PictureBotAccessors(ConversationState conversationState)
        {
            ConversationState = conversationState ?? throw new ArgumentNullException(nameof(conversationState));
        }

        /// <summary>
        /// Gets the <see cref="IStatePropertyAccessor{T}"/> name used for the <see cref="CounterState"/> accessor.
        /// </summary>
        /// <remarks>Accessors require a unique name.</remarks>
        /// <value>The accessor name for the counter accessor.</value>
        public static string PictureStateName { get; } = $"{nameof(PictureBotAccessors)}.PictureState";

        /// <summary>
        /// Gets or sets the <see cref="IStatePropertyAccessor{T}"/> for CounterState.
        /// </summary>
        /// <value>
        /// The accessor stores the turn count for the conversation.
        /// </value>
        public IStatePropertyAccessor<PictureState> PictureState { get; set; }

        /// <summary>
        /// Gets the <see cref="ConversationState"/> object for the conversation.
        /// </summary>
        /// <value>The <see cref="ConversationState"/> object.</value>
        public ConversationState ConversationState { get; }

        /// <summary> Gets the IStatePropertyAccessor{T} name used for the DialogState accessor. </summary>
        public static string DialogStateName { get; } = $"{nameof(PictureBotAccessors)}.DialogState";

        /// <summary> Gets or sets the IStatePropertyAccessor{T} for DialogState. </summary>
        public IStatePropertyAccessor<DialogState> DialogStateAccessor { get; set; }
    }
}

7. Review the code, notice the implementation of PictureStateName and PictureState.

8. Wondering if you configured it correctly? Return to Startup.cs and confirm your errors around
creating the custom state accessors have been resolved.
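
With both classes in place, other components can read and update PictureState through the accessor. The following is a minimal sketch of typical accessor usage, assuming a turn context and the registered PictureBotAccessors instance; it is not code you need to add right now:

```csharp
public async Task TrackGreetingAsync(ITurnContext turnContext, PictureBotAccessors accessors)
{
    // Get the state for this conversation, creating a default instance if none exists yet.
    var pictureState = await accessors.PictureState.GetAsync(turnContext, () => new PictureState());

    // Record that we've greeted the user.
    pictureState.Greeted = "greeted";

    // Write the property back and persist the conversation state.
    await accessors.PictureState.SetAsync(turnContext, pictureState);
    await accessors.ConversationState.SaveChangesAsync(turnContext);
}
```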



Lab 3.3:  Organizing code for bots

There are many different methods and preferences for developing bots. The SDK allows you
to organize your code in whatever way you want. In these labs, we'll organize our
conversations into different dialogs, and we'll explore a MVVM style
(https://msdn.microsoft.com/en-us/library/hh848246.aspx ) of organizing code around
conversations.

This PictureBot will be organized in the following way:

 Dialogs - the business logic for editing the models


 Responses - classes which define the outputs to the users
 Models - the objects to be modified

1. Create two new folders "Responses" and "Models" within your project. (Hint: You can right-
click on the project and select "Add->Folder").



Dialogs:

You may already be familiar with Dialogs and how they work. If you aren't, read this page on
Dialogs (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-dialog-manage-
conversation-flow?view=azure-bot-service-4.0&tabs=csharp ) before continuing.

When a bot has the ability to perform multiple tasks, it is nice to be able to have multiple
dialogs, or a set of dialogs, to help users navigate through different conversation flows. For
our PictureBot, we want our users to be able to go through an initial menu flow, often
referred to as a main dialog, and then branch off to different dialogs depending what they
are trying to do - search pictures, share pictures, order pictures, or get help. We can do this
easily by using a dialog container or what's referred to here as a DialogSet. Read about
creating modular bot logic and complex dialogs( https://docs.microsoft.com/en-
us/azure/bot-service/bot-builder-compositcontrol?view=azure-bot-service-
4.0&tabs=csharp) before continuing.

For the purposes of this lab, we are going to keep things fairly simple, but after, you should
be able to create a dialog set with many dialogs. For our PictureBot, we'll have two main
dialogs:
 MainDialog - The default dialog the bot starts out with. This dialog will start other
dialog(s) as the user requests them. This dialog, as it is the main dialog for the dialog
set, will be responsible for creating the dialog container and redirecting users to
other dialogs as needed.

 SearchDialog - A dialog which manages processing search requests and returning
those results to the user. Note We will invoke this functionality, but will not
implement Search in this workshop.
Since we only have two dialogs, we can keep it simple and put them in the PictureBot class.
However, complex scenarios may require splitting them out into different dialogs in a folder
(similar to how we'll separate Responses and Models).

1. Navigate back to PictureBot.cs and replace your using statements with the following:


using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Extensions.Logging;
using System.Linq;
using PictureBot.Models;
using PictureBot.Responses;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Microsoft.PictureBot;

You've just added access to your Models/Responses, as well as to the services LUIS and
Azure Search. Finally, the Newtonsoft references will help you parse the responses from
LUIS, which we will see in a subsequent lab.

Next, we'll need to override the OnTurnAsync method with one that processes incoming
messages and then routes them through the various dialogs.

2. Replace the PictureBot class with the following:


/// <summary>
/// Represents a bot that processes incoming activities.
/// For each user interaction, an instance of this class is created and the OnTurnAsync method is called.
/// This is a Transient lifetime service. Transient lifetime services are created
/// each time they're requested. For each Activity received, a new instance of this
/// class is created. Objects that are expensive to construct, or have a lifetime
/// beyond the single turn, should be carefully managed.
/// For example, the <see cref="MemoryStorage"/> object and associated
/// <see cref="IStatePropertyAccessor{T}"/> object are created with a singleton lifetime.
/// </summary>
/// <seealso cref="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-2.1"/>
/// <summary>Contains the set of dialogs and prompts for the picture bot.</summary>
public class PictureBot : ActivityHandler
{
    private readonly PictureBotAccessors _accessors;
    // Initialize LUIS Recognizer

    private readonly ILogger _logger;
    private DialogSet _dialogs;

    /// <summary>
    /// Every conversation turn for our PictureBot will call this method.
    /// There are no dialogs used, since it's "single turn" processing, meaning a single
    /// request and response. Later, when we add Dialogs, we'll have to navigate through this method.
    /// </summary>
    /// <param name="turnContext">A <see cref="ITurnContext"/> containing all the data needed
    /// for processing this conversation turn. </param>
    /// <param name="cancellationToken">(Optional) A <see cref="CancellationToken"/> that can be used by other objects
    /// or threads to receive notice of cancellation.</param>
    /// <returns>A <see cref="Task"/> that represents the work queued to execute.</returns>
    /// <seealso cref="BotStateSet"/>
    /// <seealso cref="ConversationState"/>
    /// <seealso cref="IMiddleware"/>
    public override async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken))
    {
        if (turnContext.Activity.Type is "message")
        {
            // Establish dialog context from the conversation state.
            var dc = await _dialogs.CreateContextAsync(turnContext);
            // Continue any current dialog.
            var results = await dc.ContinueDialogAsync(cancellationToken);

            // Every turn sends a response, so if no response was sent,
            // then no dialog is currently active.
            if (!turnContext.Responded)
            {
                // Start the main dialog
                await dc.BeginDialogAsync("mainDialog", null, cancellationToken);
            }
        }
    }

    /// <summary>
    /// Initializes a new instance of the <see cref="PictureBot"/> class.
    /// </summary>
    /// <param name="accessors">A class containing <see cref="IStatePropertyAccessor{T}"/> used to manage state.</param>
    /// <param name="loggerFactory">A <see cref="ILoggerFactory"/> that is hooked to the Azure App Service provider.</param>
    /// <seealso cref="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-2.1#windows-eventlog-provider"/>
    public PictureBot(PictureBotAccessors accessors, ILoggerFactory loggerFactory /*, LuisRecognizer recognizer*/)
    {
        if (loggerFactory == null)
        {
            throw new System.ArgumentNullException(nameof(loggerFactory));
        }

        // Add instance of LUIS Recognizer

        _logger = loggerFactory.CreateLogger<PictureBot>();
        _logger.LogTrace("PictureBot turn start.");
        _accessors = accessors ?? throw new System.ArgumentNullException(nameof(accessors));

        // The DialogSet needs a DialogState accessor, it will call it when it has a turn context.
        _dialogs = new DialogSet(_accessors.DialogStateAccessor);
        // This array defines how the Waterfall will execute.
        // We can define the different dialogs and their steps here
        // allowing for overlap as needed. In this case, it's fairly simple
        // but in more complex scenarios, you may want to separate out the different
        // dialogs into different files.
        var main_waterfallsteps = new WaterfallStep[]
        {
            GreetingAsync,
            MainMenuAsync,
        };
        var search_waterfallsteps = new WaterfallStep[]
        {
            // Add SearchDialog water fall steps
        };

        // Add named dialogs to the DialogSet. These names are saved in the dialog state.
        _dialogs.Add(new WaterfallDialog("mainDialog", main_waterfallsteps));
        _dialogs.Add(new WaterfallDialog("searchDialog", search_waterfallsteps));
        // The following line allows us to use a prompt within the dialogs
        _dialogs.Add(new TextPrompt("searchPrompt"));
    }
    // Add MainDialog-related tasks

    // Add SearchDialog-related tasks

    // Add search related tasks

Spend some time reviewing and discussing this shell with a fellow workshop participant. You
should understand the purpose of each line before continuing.

We'll add some more to this in a bit. You can ignore any errors for now.
Responses:

So before we fill out our dialogs, we need to have some responses ready. Remember, we're
going to keep dialogs and responses separate, because it results in cleaner code, and an
easier way to follow the logic of the dialogs. If you don't agree or understand now, you will
soon.

1. In the Responses folder, create two classes,


called MainResponses.cs and SearchResponses.cs. As you may have figured out, the
Responses files will simply contain the different outputs we may want to send to users, no logic.

2. Within MainResponses.cs add the following:


using System.Threading.Tasks;
using Microsoft.Bot.Builder;

namespace PictureBot.Responses
{
    public class MainResponses
    {
        public static async Task ReplyWithGreeting(ITurnContext context)
        {
            // Add a greeting
        }
        public static async Task ReplyWithHelp(ITurnContext context)
        {
            await context.SendActivityAsync($"I can search for pictures, share pictures and order prints of pictures.");
        }
        public static async Task ReplyWithResumeTopic(ITurnContext context)
        {
            await context.SendActivityAsync($"What can I do for you?");
        }
        public static async Task ReplyWithConfused(ITurnContext context)
        {
            // Add a response for the user if Regex or LUIS doesn't know
            // What the user is trying to communicate
            await context.SendActivityAsync($"I'm not sure I understand you.");
        }
        public static async Task ReplyWithLuisScore(ITurnContext context, string key, double score)
        {
            await context.SendActivityAsync($"Intent: {key} ({score}).");
        }
        public static async Task ReplyWithShareConfirmation(ITurnContext context)
        {
            await context.SendActivityAsync($"Posting your picture(s) on twitter...");
        }
        public static async Task ReplyWithSearchConfirmation(ITurnContext context)
        {
            await context.SendActivityAsync($"Searching your picture(s) on twitter...");
        }
        public static async Task ReplyWithOrderConfirmation(ITurnContext context)
        {
            await context.SendActivityAsync($"Ordering standard prints of your picture(s)...");
        }
    }
}
Note that ReplyWithGreeting has no response yet and ReplyWithConfused only has a generic
placeholder. Fill these in as you see fit.
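If you want a starting point, a minimal greeting might look like the following; the exact wording is entirely up to you, and this is only a sketch, not the official lab solution:

public static async Task ReplyWithGreeting(ITurnContext context)
{
// Any friendly opener works here; this text is just a suggestion
await context.SendActivityAsync($"Hi, I'm PictureBot! How can I help you?");
}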

3. Within "SearchResponses.cs" add the following:

Paste Content 

using Microsoft.Bot.Builder;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Schema;

namespace PictureBot.Responses
{
public class SearchResponses
{
// add a task called "ReplyWithSearchRequest"
// it should take in the context and ask the
// user what they want to search for
public static async Task ReplyWithSearchConfirmation(ITurnContext context, string utterance)
{
await context.SendActivityAsync($"Ok, searching for pictures of {utterance}");
}
public static async Task ReplyWithNoResults(ITurnContext context, string utterance)
{
await context.SendActivityAsync("There were no results found for \"" + utterance + "\".");
}
}
}
4. Notice a whole task is missing. Fill in as you see fit, but make sure the new task has the name
"ReplyWithSearchRequest", or you may have issues later.

Models:

Due to time limitations, we will not be walking through creating all the models. They are
straightforward, but we recommend taking some time to review the code within after you've
added them.

1. Right-click on the Models folder and select Add>Existing Item.

2. Navigate to {GitHubDir}\Lab3-Basic_Filter_Bot\code\Models, select all three files, and select Add.

Lab 3.4:  Regex and Middleware

There are a number of things that we can do to improve our bot. First of all, we may not
want to call LUIS for a simple "search pictures" message, which the bot will get fairly
frequently from its users. A simple regular expression could match this, and save us time
(due to network latency) and money (due to cost of calling the LUIS service).

Also, as the complexity of our bot grows and we are taking the user's input and using
multiple services to interpret it, we need a process to manage that flow. For example, try
regular expressions first, and if that doesn't match, call LUIS, and then perhaps we also drop
down to try other services like QnA Maker (http://qnamaker.ai) or Azure Search. A great way
to manage this is through Middleware (https://docs.microsoft.com/en-us/azure/bot-
service/bot-builder-concept-middleware?view=azure-bot-service-4.0 ), and the SDK does a
great job supporting that.

Before continuing with the lab, learn more about middleware and the Bot Framework SDK:
 Overview and Architecture (https://docs.microsoft.com/en-us/azure/bot-
service/bot-builder-basics?view=azure-bot-service-4.0 )
 Middleware (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-
concept-middleware?view=azure-bot-service-4.0 )
 Creating Middleware (https://docs.microsoft.com/en-us/azure/bot-service/bot-
builder-create-middleware?view=azure-bot-service-4.0&tabs=csaddmiddleware
%2Ccsetagoverwrite%2Ccsmiddlewareshortcircuit%2Ccsfallback
%2Ccsactivityhandler)
Ultimately, we'll use some middleware to try to understand what users are saying with
regular expressions (Regex) first, and if we can't, we'll call LUIS. If we still can't, then we'll
drop down to a generic "I'm not sure what you mean" response, or whatever you put for
"ReplyWithConfused."

1. In Startup.cs, below the "Add Regex below" comment within ConfigureServices, add the
following:

Paste Content 

middleware.Add(new RegExpRecognizerMiddleware()
.AddIntent("search", new Regex("search picture(?:s)*(.*)|search pic(?:s)*(.*)", RegexOptions.IgnoreCase))
.AddIntent("share", new Regex("share picture(?:s)*(.*)|share pic(?:s)*(.*)", RegexOptions.IgnoreCase))
.AddIntent("order", new Regex("order picture(?:s)*(.*)|order print(?:s)*(.*)|order pic(?:s)*(.*)", RegexOptions.IgnoreCase))
.AddIntent("help", new Regex("help(.*)", RegexOptions.IgnoreCase)));

We're really just skimming the surface of using regular expressions. If you're interested, you
can learn more here (https://docs.microsoft.com/en-us/dotnet/standard/base-
types/regular-expression-language-quick-reference ).
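If you want to see what these patterns do and do not catch, here is a small standalone console snippet (not part of the bot) using the same "search" expression:

using System;
using System.Text.RegularExpressions;

class RegexDemo
{
    static void Main()
    {
        var search = new Regex("search picture(?:s)*(.*)|search pic(?:s)*(.*)", RegexOptions.IgnoreCase);

        var match = search.Match("search pictures of dogs");
        Console.WriteLine(match.Success);         // True
        Console.WriteLine(match.Groups[1].Value); // " of dogs"

        // A rephrased request does not match, which is why we fall back to LUIS later
        Console.WriteLine(search.IsMatch("show me dog pics")); // False
    }
}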

2. You may notice that the options.State has been deprecated. Let's migrate to the newest
method:

Remove the following code:


Paste Content 

var conversationState = new ConversationState(dataStore);


options.State.Add(conversationState);

3. Replace it with

Paste Content 

var userState = new UserState(dataStore);


var conversationState = new ConversationState(dataStore);

// Create the User state.


services.AddSingleton<UserState>(userState);

// Create the Conversation state.


services.AddSingleton<ConversationState>(conversationState);

4. Also replace the Configure code to now pull from the dependency injection version:

Paste Content 

var conversationState = options.State.OfType<ConversationState>().FirstOrDefault();

if (conversationState == null)
{
throw new InvalidOperationException("ConversationState must be defined and added before adding conversation-scoped state accessors.");
}

5. Replace it with

Paste Content 

var conversationState = services.BuildServiceProvider().GetService<ConversationState>();
if (conversationState == null)
{
throw new InvalidOperationException("ConversationState must be defined and added before adding conversation-scoped state accessors.");
}

Without adding LUIS, our bot is really only going to pick up on a few variations, but it
should capture a good number of messages, provided users are using the bot for searching,
sharing and ordering pictures.

Aside: One might argue that the user shouldn't have to type "help" to get a menu of clear
options on what the bot can do; rather, this should be the default experience on first
contact with the bot. Discoverability is one of the biggest challenges for bots - letting the
users know what the bot is capable of doing. Good bot design principles
(https://docs.microsoft.com/en-us/bot-framework/bot-design-principles ) can help.

Lab 3.5:  Running the bot MainDialog, Again

Let's get down to business. We need to fill out MainDialog within PictureBot.cs so that our
bot can react to what users say they want to do. Based on our results from Regex, we need
to direct the conversation in the right direction. Read the code carefully to confirm you
understand what it's doing.

1. In PictureBot.cs, paste in the following method code:

Paste Content 

// If we haven't greeted a user yet, we want to do that first, but for the rest of the
// conversation we want to remember that we've already greeted them.
private async Task<DialogTurnResult> GreetingAsync(WaterfallStepContext stepContext,
CancellationToken cancellationToken)
{
// Get the state for the current step in the conversation
var state = await _accessors.PictureState.GetAsync(stepContext.Context, () => new
PictureState());

// If we haven't greeted the user


if (state.Greeted == "not greeted")
{
// Greet the user
await MainResponses.ReplyWithGreeting(stepContext.Context);
// Update the GreetedState to greeted
state.Greeted = "greeted";
// Save the new greeted state into the conversation state
// This is to ensure in future turns we do not greet the user again
await _accessors.ConversationState.SaveChangesAsync(stepContext.Context);
// Ask the user what they want to do next
await MainResponses.ReplyWithHelp(stepContext.Context);
// Since we aren't explicitly prompting the user in this step, we'll end the dialog
// When the user replies, since state is maintained, the else clause will move them
// to the next waterfall step
return await stepContext.EndDialogAsync();
}
else // We've already greeted the user
{
// Move to the next waterfall step, which is MainMenuAsync
return await stepContext.NextAsync();
}
}

// This step routes the user to different dialogs
// In this case, there's only one other dialog, so it is fairly simple,
// but in more complex scenarios you can go off to other dialogs in a similar way
public async Task<DialogTurnResult> MainMenuAsync(WaterfallStepContext stepContext,
CancellationToken cancellationToken)
{
// Check if we are currently processing a user's search
var state = await _accessors.PictureState.GetAsync(stepContext.Context);

// If Regex picks up on anything, store it


var recognizedIntents = stepContext.Context.TurnState.Get<IRecognizedIntents>();
// Based on the recognized intent, direct the conversation
switch (recognizedIntents.TopIntent?.Name)
{
case "search":
// switch to the search dialog
return await stepContext.BeginDialogAsync("searchDialog", null,
cancellationToken);
case "share":
// respond that you're sharing the photo
await MainResponses.ReplyWithShareConfirmation(stepContext.Context);
return await stepContext.EndDialogAsync();
case "order":
// respond that you're ordering
await MainResponses.ReplyWithOrderConfirmation(stepContext.Context);
return await stepContext.EndDialogAsync();
case "help":
// show help
await MainResponses.ReplyWithHelp(stepContext.Context);
return await stepContext.EndDialogAsync();
default:
{
await MainResponses.ReplyWithConfused(stepContext.Context);
return await stepContext.EndDialogAsync();
}
}
}
2. Press F5 to run the bot.

3. Using the bot emulator, test the bot by sending some commands:

 help
 share pics
 order pics
 search pics

Note If you get a 500 error in your bot, you can place a breakpoint in the Startup.cs file
inside the OnTurnError delegate method. The most common error is a mismatch of the
AppId and AppSecret.

4. If the only thing that didn't give you the expected result was "search pics", everything is working
how you configured it. "search pics" failing is the expected behavior at this point in the lab, but
why? Have an answer before you move on!

Hint: Use break points to trace matching to case "search", starting from PictureBot.cs.

Get stuck or broken? You can find the solution for the lab up until this point under
resources/code/Finished(./code/Finished). The readme file within the solution (once you
open it) will tell you what keys you need to add in order to run the solution. We recommend
using this as a reference, not as a solution to run, but if you choose to run it, be sure to add
the necessary keys for your environment.

Resources
 Bot Builder Basics (https://docs.microsoft.com/en-us/azure/bot-service/bot-
builder-basics?view=azure-bot-service-4.0&tabs=cs)
 .NET Bot Builder SDK tutorial (https://docs.microsoft.com/en-us/azure/bot-
service/dotnet/bot-builder-dotnet-sdk-quickstart?view=azure-bot-service-4.0)
 Bot Service Documentation (https://docs.microsoft.com/en-us/azure/bot-
service/bot-service-overview-introduction?view=azure-bot-service-4.0)
 Deploy your bot (https://docs.microsoft.com/en-us/azure/bot-service/bot-
builder-deploy-az-cli?view=azure-bot-service-4.0&tabs=newrg)

29. Lab 4: Log Chats


Introduction:

This workshop demonstrates how you can perform logging using Microsoft Bot Framework
and store aspects of chat conversations. After completing these labs, you should be able to:

 Understand how to intercept and log message activities between bots and users
 Log utterances to file storage

Prerequisites

This lab starts from the assumption that you have built and published the bot from Lab 3.

It is recommended that you do that lab in order to be able to implement logging as covered
in this lab. If you have not, reading carefully through all the exercises and looking at some of
the code or using it in your own applications may be sufficient, depending on your needs.

Lab 4.0:  Intercepting and analyzing messages

In this lab, we'll look at some different ways that the Bot Framework allows us to intercept
and log data from conversations that the bot has with users. To start, we will utilize in-memory
storage; this is good for testing purposes, but not ideal for production environments.

Afterwards, we'll look at a very simple implementation of how we can write data from
conversations to files in Azure. Specifically, we'll put messages users send to the bot in a list,
and store the list, along with a few other items, in a temporary file (though you could
change this to a specific file path as needed).

Using the Bot Framework Emulator:

Let's take a look at what information we can glean, for testing purposes, without adding
anything to our bot.
1. Open your PictureBot.sln in Visual Studio.

Note You can use the Starter solution if you did not do Lab 3.

2. Press F5 to run your bot

3. Open the bot in the Bot Framework Emulator and have a quick conversation:

4. Review the bot emulator debug area, notice the following:

 If you click on a message, you are able to see its associated JSON with the
"Inspector-JSON" tool on the right. Click on a message and inspect the JSON to see
what information you can obtain.

 The "Log" in the bottom right-hand corner, contains a complete log of the
conversation. Let's dive into that a little deeper.

o The first thing you'll see is the port the Emulator is listening on
o You'll also see where ngrok is listening, and you can inspect the traffic to
ngrok using the "ngrok traffic inspector" link. However, you should notice that
we will bypass ngrok if we're hitting local addresses. ngrok is included here
informationally, as remote testing is not covered in this workshop
o If there is an error in the call (anything other than a POST 200 or POST 201
response), you'll be able to click it and see a very detailed log in the
"Inspector-JSON". Depending on what the error is, you may even get a stack
trace going through the code and attempting to point out where the error
has occurred. This is very useful when you're debugging your bot projects.

o You can also see that there is a LUIS Trace when we make calls out to LUIS. If
you click on the trace link, you're able to see the LUIS information. You may
notice that this is not set up in this particular lab.

You can read more about testing, debugging, and logging with the emulator here
(https://docs.microsoft.com/en-us/azure/bot-service/bot-service-debug-emulator?
view=azure-bot-service-4.0).
Lab 4.1:  Logging to Azure Storage

The default bot storage provider uses in-memory storage that gets disposed of when the
bot is restarted. This is good for testing purposes only. If you want to persist data but do
not want to hook your bot up to a database, you can use the Azure storage provider or
build your own provider using the SDK.

1. Open the Startup.cs file. Since we want to use this process for every message, we'll use
the ConfigureServices method in our Startup class to add code that stores information in an
Azure Blob. Notice that currently we're using:

Paste Content 

IStorage dataStore = new MemoryStorage();

As you can see, our current implementation is using in-memory storage. Again, this memory
storage is recommended for local bot debugging only. When the bot is restarted, anything
stored in memory will be gone.

2. Replace the current IStorage line with the following to change it to Azure Blob Storage-based persistence:

Paste Content 

var blobConnectionString =
Configuration.GetSection("BlobStorageConnectionString")?.Value;
var blobContainer = Configuration.GetSection("BlobStorageContainer")?.Value;
IStorage dataStore = new
Microsoft.Bot.Builder.Azure.AzureBlobStorage(blobConnectionString, blobContainer);

3. Switch to the Azure Portal, navigate to your blob storage account

4. From the Overview tab, click Containers


5. Check if a chatlog container exists; if it does not, click +Container:
o For the name, type chatlog, click OK

6. If you haven't already done so, click Access keys and record your connection string

7. Open the appsettings.json and add your blob connection string details:

Paste Content 

"BlobStorageConnectionString": "",
"BlobStorageContainer" : "chatlog"

8. Press F5 to run the bot.

9. In the emulator, go through a sample conversation with the bot.

Note If you don't get a reply back, check your Azure Storage Account connection string

10. Switch to the Azure Portal, navigate to your blob storage account

11. Click Containers, then open the chatlog container

12. Select the chat log file, it should start with emulator..., then select Edit. What do you see in the
files? What don't you see that you were expecting/hoping to see?

You should see something similar to this:


Paste Content 

{"$type":"System.Collections.Concurrent.ConcurrentDictionary`2[[System.String,
System.Private.CoreLib],[System.Object, System.Private.CoreLib]],
System.Collections.Concurrent","DialogState":
{"$type":"Microsoft.Bot.Builder.Dialogs.DialogState,
Microsoft.Bot.Builder.Dialogs","DialogStack":
{"$type":"System.Collections.Generic.List`1[[Microsoft.Bot.Builder.Dialogs.DialogInst
ance, Microsoft.Bot.Builder.Dialogs]], System.Private.CoreLib","$values":
[{"$type":"Microsoft.Bot.Builder.Dialogs.DialogInstance,
Microsoft.Bot.Builder.Dialogs","Id":"mainDialog","State":
{"$type":"System.Collections.Generic.Dictionary`2[[System.String,
System.Private.CoreLib],[System.Object, System.Private.CoreLib]],
System.Private.CoreLib","options":null,"values":
{"$type":"System.Collections.Generic.Dictionary`2[[System.String,
System.Private.CoreLib],[System.Object, System.Private.CoreLib]],
System.Private.CoreLib"},"instanceId":"f80db88d-cdea-4b47-a3f6-
a5bfa26ed60b","stepIndex":0}}]}},"PictureBotAccessors.PictureState":
{"$type":"Microsoft.PictureBot.PictureState,
PictureBot","Greeted":"greeted","Search":"","Searching":"no"}}

Lab 4.2:  Logging utterances to a file

For the purposes of this lab, we are going to focus on the actual utterances that users are
sending to the bot. This could be useful to determine what types of conversations and
actions users are trying to complete with the bot.

We can do this by updating what we're storing in our UserData object in
the PictureState.cs file and by adding information to the object in PictureBot.cs:

1. Open PictureState.cs

2. After the following code:

Paste Content 

public class PictureState


{
public string Greeted { get; set; } = "not greeted";
add:
Paste Content 

// A list of things that users have said to the bot


public List<string> UtteranceList { get; private set; } = new List<string>();

In the above, we're simply creating a list where we'll store the messages that users
send to the bot.

In this example we're choosing to use the state manager to read and write data, but you
could alternatively read and write directly from storage without using state manager
(https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-howto-v4-storage?
view=azure-bot-service-4.0&tabs=csharpechorproperty%2Ccsetagoverwrite%2Ccsetag ).

If you choose to write directly to storage, you could set up eTags depending on your
scenario. By setting the eTag property to *, you could allow other instances of the bot to
overwrite previously written data, meaning that the last writer wins. We won't get into it
here, but you can read more about managing concurrency (https://docs.microsoft.com/en-
us/azure/bot-service/bot-builder-howto-v4-storage?view=azure-bot-service-
4.0&tabs=csharpechorproperty%2Ccsetagoverwrite%2Ccsetag#manage-concurrency-using-
etags).
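If you did want to write directly to storage, a minimal sketch of what that might look like is below; it assumes access to the IStorage instance created in Startup.cs (here passed in as dataStore), it is illustrative only, and it is not required for the lab:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;

// A type we might persist directly; ETag = "*" means the last writer wins
public class UtteranceLog : IStoreItem
{
    public List<string> UtteranceList { get; } = new List<string>();
    public string ETag { get; set; } = "*";
}

public static class DirectStorageExample
{
    public static async Task SaveAsync(IStorage dataStore, string utterance, CancellationToken cancellationToken = default(CancellationToken))
    {
        var log = new UtteranceLog();
        log.UtteranceList.Add(utterance);

        // Write the item to storage under a fixed key
        var changes = new Dictionary<string, object> { { "UtteranceLog", log } };
        await dataStore.WriteAsync(changes, cancellationToken);
    }
}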

The final thing we have to do before we run the bot is add messages to our list with
our OnTurn action.

3. In PictureBot.cs, after the following code:

Paste Content 

public override async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken = default(CancellationToken))
{
if (turnContext.Activity.Type is "message")
{

add:
Paste Content 

var utterance = turnContext.Activity.Text;


var state = await _accessors.PictureState.GetAsync(turnContext, () => new
PictureState());
state.UtteranceList.Add(utterance);
await _accessors.ConversationState.SaveChangesAsync(turnContext);

Note We have to save the state if we modify it


The first line takes the incoming message from a user and stores it in a variable
called utterance. The next line adds the utterance to the existing list that we created in
PictureState.cs.

4. Press F5 to run the bot.

5. Have another conversation with the bot. Stop the bot and check the latest blob persisted log file.
What do we have now?

Paste Content 

{"$type":"System.Collections.Concurrent.ConcurrentDictionary`2[[System.String,
System.Private.CoreLib],[System.Object, System.Private.CoreLib]],
System.Collections.Concurrent","DialogState":
{"$type":"Microsoft.Bot.Builder.Dialogs.DialogState,
Microsoft.Bot.Builder.Dialogs","DialogStack":
{"$type":"System.Collections.Generic.List`1[[Microsoft.Bot.Builder.Dialogs.DialogInst
ance, Microsoft.Bot.Builder.Dialogs]], System.Private.CoreLib","$values":
[{"$type":"Microsoft.Bot.Builder.Dialogs.DialogInstance,
Microsoft.Bot.Builder.Dialogs","Id":"mainDialog","State":
{"$type":"System.Collections.Generic.Dictionary`2[[System.String,
System.Private.CoreLib],[System.Object, System.Private.CoreLib]],
System.Private.CoreLib","options":null,"values":
{"$type":"System.Collections.Generic.Dictionary`2[[System.String,
System.Private.CoreLib],[System.Object, System.Private.CoreLib]],
System.Private.CoreLib"},"instanceId":"f80db88d-cdea-4b47-a3f6-
a5bfa26ed60b","stepIndex":0}}]}},"PictureBotAccessors.PictureState":
{"$type":"Microsoft.PictureBot.PictureState,
PictureBot","Greeted":"greeted","UtteranceList":
{"$type":"System.Collections.Generic.List`1[[System.String, System.Private.CoreLib]],
System.Private.CoreLib","$values":["help"]},"Search":"","Searching":"no"}}

Get stuck or broken? You can find the solution for the lab up until this point under
/code/PictureBot-FinishedSolution-File(./code/PictureBot-FinishedSolution-File ). You will
need to insert the keys for your Azure Bot Service and your Azure Storage settings in
the appsettings.json file. We recommend using this code as a reference, not as a solution
to run, but if you choose to run it, be sure to add the necessary keys.

Going further

To incorporate database storage and testing into your logging solution, we recommend the
following self-led tutorials that build on this solution: Storing Data in
Cosmos(https://github.com/Azure/LearnAI-Bootcamp/blob/master/lab02.5-
logging_chat_conversations/3_Cosmos.md ).

34. Lab 5: Creating a Customized QnA Maker Bot


Introduction:

In this lab we will explore QnA Maker for creating bots that connect to a pre-trained
knowledge base. Using QnA Maker, you can upload documents or point to web pages and
have it pre-populate a knowledge base that can feed a simple bot for common uses such as
frequently asked questions.

Lab 5.1:  QnA Maker Setup

1. Open the Azure Portal (https://portal.azure.com)

2. Click Create a new resource

3. Type QnA Maker, select QnA Maker

4. Select Create
5. Ensure your lab subscription and resource groups are selected

6. Provide a Name for the service, such as ai100QnAgo, replacing go with your own initials to create a unique name.

7. Select the Standard S0 tier for the resource pricing tier. We aren't using the free tier because
we will upload files that are larger than 1MB later.

8. Select the same location that all our resources have been placed in, and then for the search pricing tier,
select the Free F tier

9. The App name field should populate with the same value you used for Name in the previous
steps. If not, enter an app name, it must be unique

10. Disable App insights.

11. Select Review + Create and then Create. This will create the following resources in your
resource group:

 App Service Plan


 App Service
 Application Insights
 Search Service
 Cognitive Service instance of type QnAMaker

Lab 5.2:  Create a KnowledgeBase
1. Open the QnA Maker site (https://qnamaker.ai)

2. In the top right, click Sign in. Login using the Azure credentials for your new QnA Maker
resource

3. In the top navigation area, click Create a knowledge base

4. Skip Step 1 as you have already created the resource

5. Select your Azure AD and Subscription tied to your QnA Maker resource, then select your newly
created QnA Maker resource

6. For the name, type Microsoft FAQs

7. For the file, click Add file, browse to the code/surface-pro-4-user-guide-EN.pdf file

8. For the file, click Add file, browse to the code/Manage Azure Blob Storage file

Note You can find out more about the supported file types and data sources here
(https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/data-
sources-supported)

9. For the Chit-chat, select Witty


10. Click Create your KB

Lab 5.3:  Publish and Test your Knowledge base

1. Review the knowledge base QnA pairs, you should see ~200 different pairs based on the two
documents we fed it

2. In the top menu, click PUBLISH.

3. On the publish page, click Publish. Once the service is published, click the Create Bot button on
the success page

4. If prompted, login as the account tied to your lab resource group.

5. On the bot service creation page, fix any naming errors, then click Create.

Note Recent change in Azure requires dashes ("-") be removed from some resource names

6. Once the bot resource is created, navigate to the new Web App Bot, then click Test in Web Chat

7. Ask the bot any questions related to a Surface Pro 4 or managing Azure Blob Storage:

 How do I add memory?


 How long does it take to charge the battery?
 How do I hard reset?
 What is a blob?

8. Ask it some questions it doesn't know, such as:

 How do I bowl a strike?

Lab 5.4:  Download the Bot Source code

1. Under Bot management, select the Build tab

2. Click Download Bot source code, when prompted click Yes.

3. Azure will build your source code, when complete, click Download Bot source code, if
prompted, select Yes

4. Extract the zip file to your local computer

5. Open the solution file, Visual Studio will open

6. Open the Startup.cs file, you will notice nothing special has been added here

7. Open the Bots/{BotName}.cs file and review the code that was generated for this file
8. Open the BotService.cs file and review the code that was generated for this file.

Note: The previous steps for reviewing the generated code are correct for the solution that
was downloaded during the authoring of this lab. The solution files may change from time
to time based on updates to the frameworks and SDKs, so your files may differ. The
key aspect here is just to have you review the code that is automatically generated by the
tool.

Going Further
What would you do to integrate the QnAMaker code into your picture bot?
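One possible direction, assuming you add the Microsoft.Bot.Builder.AI.QnA NuGet package and substitute your own knowledge base ID, endpoint key, and host, is a small helper like the sketch below; this is only an illustration, not the lab's official solution:

using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.QnA;

public static class QnAHelper
{
    // Placeholder values; replace with the settings from your own QnA Maker resource
    private static readonly QnAMaker QnA = new QnAMaker(new QnAMakerEndpoint
    {
        KnowledgeBaseId = "<your-kb-id>",
        EndpointKey = "<your-endpoint-key>",
        Host = "https://<your-app-name>.azurewebsites.net/qnamaker"
    });

    public static async Task<bool> TryAnswerAsync(ITurnContext turnContext)
    {
        // Ask the knowledge base and relay the top answer, if any
        var results = await QnA.GetAnswersAsync(turnContext);
        if (results != null && results.Length > 0)
        {
            await turnContext.SendActivityAsync(results[0].Answer);
            return true;
        }
        return false;
    }
}

You could, for example, call a helper like this from the default branch of MainMenuAsync before falling back to ReplyWithConfused.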

Resources
 What is the QnA Maker service? (https://docs.microsoft.com/en-us/azure/cognitive-
services/qnamaker/overview/overview)

39. Lab 6: Implementing the LUIS model


Introduction:

In this lab, you will build, train and publish a LUIS model to help your bot (which will be
created in a future lab) communicate effectively with human users.

Note In this lab, we will only be creating the LUIS model that you will use in a future lab to
build a more intelligent bot.

The LUIS functionality has already been covered in the workshop; for a
refresher on LUIS, read more (https://docs.microsoft.com/en-us/azure/cognitive-
services/LUIS/Home).

Now that we know what LUIS is, we'll want to plan our LUIS app. We already created a basic
bot ("PictureBot") that responds to messages containing certain text. We will need to create
intents that trigger the different actions that our bot can do, and create entities that require
different actions. For example, an intent for our PictureBot may be "OrderPic" and it triggers
the bot to provide an appropriate response.

For example, in the case of Search (not implemented here), our PictureBot intent may be
"SearchPic" and it triggers Azure Search service to look for photos, which requires a "facet"
entity to know what to search for. You can see more examples for planning your app
here(https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/plan-your-app ).

Once we've thought out our app, we are ready to build and train
it(https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-get-started-
create-app).

As a review, these are the steps you will generally take when creating LUIS applications:

1. Add intents (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/add-intents)

2. Add utterances (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/add-example-utterances)

3. Add entities (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/add-entities)

4. Improve performance using phrase lists (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/add-features) and patterns (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-model-intent-pattern)

5. Train and test (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/train-test)

6. Review endpoint utterances (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/label-suggested-utterances)

7. Publish (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/publishapp)
Lab 6.0:  Creating the LUIS service in the portal (optional)

Creating a LUIS service in the portal is optional, as LUIS provides you with a "starter key"
that you can use for the labs. However, if you want to see how to create a free or paid
service in the portal, you can follow the steps below.
Note If you ran the pre-req ARM Template, you will already have a cognitive services
resource that included the Language Understanding APIs.

1. Open the Azure Portal (https://portal.azure.com)

2. Click Create a resource

3. Enter Language Understanding in the search box and choose Language


Understanding

4. Click Create

5. Select your Subscription and Resource group

6. For the name, type {YOURINIT}luisbot

7. For the Authoring Resource location, select a location closest to you. Not all locations are
available for this resource.
8. For the pricing tier, select Free F0

9. For your Prediction Resource location, set the option that is the same as your resource
group location

10. For the Prediction pricing tier, select Free F0

11. Select Review + Create and then Create

Note The LUIS AI web site does not allow you to control or publish your Azure-based
cognitive services resources. You will need to call the APIs in order to train and publish
them.

Lab 6.1:  Adding intelligence to your applications with LUIS

Let's look at how we can use LUIS to add some natural language capabilities. LUIS allows
you to map natural language utterances (words/phrases/sentences the user might say when
talking to the bot) to intents (tasks or actions the user wants to perform). For our
application, we might have several intents: finding pictures, sharing pictures, and ordering
prints of pictures, for example. We can give a few example utterances as ways to ask for
each of these things, and LUIS will map additional new utterances to each intent based on
what it has learned.

Warning: Though Azure services use IE as the default browser, we do not recommend it for
LUIS. You should be able to use Chrome or Firefox for all of the labs. Alternatively, you can
download either Microsoft Edge (https://www.microsoft.com/en-us/download/details.aspx?
id=48126) or Google Chrome (https://www.google.com/intl/en/chrome/ ).

1. Navigate to https://www.luis.ai (unless you are located in Europe or Australia).


We will create a new LUIS app to support our bot.
Note If you created a key in a Europe region, you will need to create your application
at https://eu.luis.ai/. If you created a key in an Australia region, you will need to create
your application at https://au.luis.ai/. You can read more about the LUIS publishing
regions here (https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-
reference-regions).

2. Sign in using your Microsoft account. This should be the same account that you used to create
the LUIS key in the previous section.

3. Click Create a LUIS app now. You should be redirected to a list of your LUIS applications. If
prompted, click Migrate Later.

4. If this is your first time, you will be asked to agree to the service terms of use and select your
country.

 On the next step, you need to choose the recommended option and link your Azure account with LUIS.
 Finally, confirm your settings and you will be forwarded to the LUIS App page.

Note: Notice that there is also an "Import App" next to the "New App" button on the
current page (https://www.luis.ai/applications). After creating your LUIS application, you
have the ability to export the entire app as JSON and check it into source control. This is a
recommended best practice, so you can version your LUIS models as you version your code.
An exported LUIS app may be re-imported using that "Import App" button. If you fall
behind during the lab and want to cheat, you can click the "Import App" button and import
the LUIS model.

5. From the main page, select the + New app button


6. Type a name for your app, using a convention similar to the one used in the labs of this course.

7. Select a Culture and your Prediction resource

8. Select Done. Close the "How to create an effective LUIS app" dialog.

9. In the top navigation, select the BUILD link. Notice there is one intent called "None". Random
utterances that don't map to any of your intents may be mapped to "None".

We want our bot to be able to do the following things:

 Search/find pictures
 Share pictures on social media
 Order prints of pictures
 Greet the user (although this can also be done other ways, as we will see later)

Let's create intents for the user requesting each of these.

10. Select the + Create button.

11. Name the first intent Greeting and click Done.

12. Give several examples of things the user might say when greeting the bot, pressing "Enter" after
each one.

Let's see how to create an entity. When the user requests to search the pictures, they may
specify what they are looking for. Let's capture that in an entity.
13. Click on Entities in the left-hand column and then click + Create.

14. Give it an entity name facet

15. For the entity type select Machine learned.

16. Click Create.

17. Click Intents in the left-hand sidebar and then click the + Create button.

18. Give it an intent name of SearchPic and then click Done.

Just as we did for Greetings, let's add some sample utterances (words/phrases/sentences
the user might say when talking to the bot). People might search for pictures in many ways.
Feel free to use some of the utterances below, and add your own wording for how you
would ask a bot to search for pictures.

 Find outdoor pics


 Are there pictures of a train?
 Find pictures of food.
 Search for photos of kids playing
 Show me beach pics
 Find dog photos
 Show me pictures of men wearing glasses
 Show me happy baby pics

Once we have some utterances, we have to teach LUIS how to pick out the search topic as
the "facet" entity. Whatever the "facet" entity picks up is what will be searched.
19. Hover and click the word (or click consecutive words to select a group of words) and then select
the "facet" entity.

So your utterances may become something like this when facets are labeled:
Note This workshop does not include Azure search, however, this functionality has been left
in for the sake of demonstration.

20. Click Intents in the left sidebar and add two more intents:

 Name one intent "SharePic". This might be identified by utterances like:


 Share this pic
 Can you tweet that?
 Post to Twitter
 Create another intent named "OrderPic". This could be communicated with
utterances like:
 Print this picture
 I would like to order prints
 Can I get an 8x10 of that one
 Order wallets
When choosing utterances, it can be helpful to use a combination of questions, commands,
and "I would like to..." formats.

21. Finally add some sample utterances to the "None" intent. This helps LUIS label when things are
outside the scope of your application. Add things like "I'm hungry for pizza", "Search videos", etc.
You should have about 10-15% of your app's utterances within the None intent.

Lab 6.2:  Training the LUIS model
We're now ready to train our model. In this exercise, you will perform a simple training
operation in order to test your model. The testing will take place using the built-in testing
panel in the LUIS portal.
1. In the top toolbar, select Train. During training, LUIS builds a model to map utterances to intents
based on the training data you’ve provided.

TIP: Training is not always immediate. Sometimes it gets queued and can take several
minutes.

Create a public endpoint for the LUIS service

2. After training is finished, select Manage in the top toolbar. The following options will appear on
the left toolbar:

NOTE: The categories on the left pane may change as the portals are updated. As a result,
the keys and endpoints may fall under a different category than the one listed here.

 Settings
 Publish settings
 Azure Resources
 Versions
 Collaborators

NOTE: An endpoint named Starter_Key is automatically created for testing purposes, and
you could use that here - however, to use the service in a production environment or inside
of an application, you will always want to tie it to a real Language Understanding resource
created in Azure.

3. You should see a Prediction Resource and a Key resource already created. If you see
the Prediction Resource, advance to the next section on Publish the app.

4. If you do not see an existing Prediction Resource, select Add prediction resource. The Tenant will already be selected.

5. Select your subscription, and the resource you created in the Azure portal earlier and then
select Done to connect the Language Understanding resource to the LUIS service.
Publish the app

6. In the top toolbar, select Publish.

NOTE: You can publish to your Production or Staging endpoint. Select Production, and read about the reasons for the two endpoints.

7. Under Choose your publishing slot and settings, select Production Slot and then select Done.

Publishing creates an endpoint to call the LUIS model. The endpoint URL will be displayed.
Copy the endpoint URL and add it to your list of keys for future use.

8. In the top bar, select Test. Try typing a few utterances and see which intents are returned. Here
are some examples you can try:

| Utterance | Result | Score meaning |
|---------|---------|---------|
| Show me pictures of a local beach | Returns the SearchPic intent with a score | Positive match |
| Hello | Returns the Greeting intent with a score | Fairly positive match |
| Send to Tom | Returns the Utilities with a low score | Needs retraining or doesn't match any intents |

To retrain the model for utterances with low scores, take the following steps:

9. Beside the low-scoring utterance (in this case, Send to Tom), select Inspect.

10. Beside Top-scoring intent, select Assign to new intent and then, in the drop-down,
choose SharePic from the list.

11. Close the Test panel.


12. Select the Train button to retrain your model.

13. Test the Send to Tom utterance again. It should now return the SharePic intent with a higher
score.

Your LUIS app is now ready to be used by client applications, tested in a browser through
the listed endpoint, or integrated into a bot.

You can also test your published endpoint in a browser (https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/publishApp#test-your-published-endpoint-in-a-browser). Copy the Endpoint URL. To open this URL in your browser, set the URL
parameter &q= to your test query. For example, append Find pictures of dogs to your URL,
and then press Enter. The browser displays the JSON response of your HTTP endpoint.

Going Further

If you still have time, spend time exploring the www.luis.ai site. Select "Prebuilt domains"
and see what is already available for you (https://docs.microsoft.com/en-
us/azure/cognitive-services/luis/luis-reference-prebuilt-domains ). You can also review
some of the other features (https://docs.microsoft.com/en-us/azure/cognitive-
services/luis/luis-concept-feature ) and patterns (https://docs.microsoft.com/en-
us/azure/cognitive-services/luis/luis-concept-patterns ), and check out the BotBuilder-
tools (https://github.com/Microsoft/botbuilder-tools ) for creating LUIS models, managing
LUIS models, simulating conversations, and more. Later, you may also be interested in
another course that includes how to design LUIS schema (https://aka.ms/daaia).

Extra Credit

If you wish to attempt to create a LUIS model including Azure Search, follow the training on
LUIS models including search (https://github.com/Azure/LearnAI-
Bootcamp/tree/master/lab01.5-luis ).
43. Lab 7: Integrate LUIS into Bot Dialogs
Introduction:

Our bot is now capable of taking in a user's input and responding based on that input.
Unfortunately, our bot's communication skills are brittle. One typo, or a rephrasing of words,
and the bot will not understand. This can cause frustration for the user. We can greatly
increase the bot's conversational abilities by enabling it to understand natural language with
the LUIS model we built in Lab 6.

We will have to update our bot in order to use LUIS. We can do this by modifying
"Startup.cs" and "PictureBot.cs."

Prerequisites: This lab builds on Lab 3. It is recommended that you do that lab in order to
be able to implement logging as covered in this lab. If you have not, reading carefully
through all the exercises and looking at some of the code or using it in your own
applications may be sufficient, depending on your needs.

NOTE: If you intend to use the code in the Finished folder, you MUST replace the app
specific information with your own app IDs and endpoints.

Lab 7.1:  Adding natural language understanding

Adding LUIS to Startup.cs

1. If not already open, open your PictureBot solution in Visual Studio

NOTE You can also start with the {GitHubPath}/Lab7-Integrate_LUIS/code/Starter/PictureBot/PictureBot.sln solution if you did not start from
Lab 1. Be sure to replace all the appsettings values.

2. Open Startup.cs and locate the ConfigureServices method. We'll add LUIS here by adding an additional service for LUIS after creating and registering the state accessors.

Below:
Paste Content 
services.AddSingleton((Func<IServiceProvider, PictureBotAccessors>)(sp =>
{
.
.
.
return accessors;
});

Add:
Paste Content 

// Create and register a LUIS recognizer.


services.AddSingleton(sp =>
{
var luisAppId = Configuration.GetSection("luisAppId")?.Value;
var luisAppKey = Configuration.GetSection("luisAppKey")?.Value;
var luisEndPoint = Configuration.GetSection("luisEndPoint")?.Value;

// Get LUIS information


var luisApp = new LuisApplication(luisAppId, luisAppKey, luisEndPoint);

// Specify LUIS options. These may vary for your bot.


var luisPredictionOptions = new LuisPredictionOptions
{
IncludeAllIntents = true,
};

// Create the recognizer


var recognizer = new LuisRecognizer(luisApp, luisPredictionOptions, true, null);
return recognizer;
});

3. Modify the appsettings.json to include the following properties, be sure to fill them in with
your LUIS instance values:

Paste Content 

"luisAppId": "",
"luisAppKey": "",
"luisEndPoint": ""

Note The Luis endpoint url for the .NET SDK should be something
like https://{region}.api.cognitive.microsoft.com with no api or version after it.
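For example, a filled-in set of values might look like the following; the ID, key, and region shown here are placeholders, so substitute your own:

"luisAppId": "00000000-0000-0000-0000-000000000000",
"luisAppKey": "<your LUIS key>",
"luisEndPoint": "https://westus.api.cognitive.microsoft.com"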

Lab 7.2:  Adding LUIS to PictureBot's MainDialog

1. Open PictureBot.cs. The first thing you'll need to do is initialize the LUIS recognizer, similar to
how you did for PictureBotAccessors. Below the commented line // Initialize LUIS
Recognizer, add the following:

Paste Content 

private LuisRecognizer _recognizer { get; } = null;

2. Navigate to the PictureBot constructor:

Paste Content 

public PictureBot(PictureBotAccessors accessors, ILoggerFactory loggerFactory /*, LuisRecognizer recognizer*/)

Now, maybe you noticed we had this commented out in your previous labs, maybe you
didn't. You have it commented out now, because up until now, you weren't calling LUIS, so a
LUIS recognizer didn't need to be an input to PictureBot. Now, we are using the recognizer.

3. Uncomment the input requirement (parameter LuisRecognizer recognizer), and add the following line below // Add instance of LUIS Recognizer:
Paste Content 

_recognizer = recognizer ?? throw new ArgumentNullException(nameof(recognizer));

Again, this should look very similar to how we initialized the instance of _accessors.

As far as updating our MainDialog goes, there's no need for us to add anything to the
initial GreetingAsync step, because regardless of user input, we want to greet the user when
the conversation starts.

4. In MainMenuAsync, we do want to start by trying Regex, so we'll leave most of that. However, if
Regex doesn't find an intent, we want the default action to be different. That's when we want to
call LUIS.

Within the MainMenuAsync switch block, replace:


Paste Content 

default:
{
await MainResponses.ReplyWithConfused(stepContext.Context);
return await stepContext.EndDialogAsync();
}

With:
Paste Content 

default:
{
// Call LUIS recognizer
var result = await _recognizer.RecognizeAsync(stepContext.Context,
cancellationToken);
// Get the top intent from the results
var topIntent = result?.GetTopScoringIntent();
// Based on the intent, switch the conversation, similar concept as with Regex above
switch ((topIntent != null) ? topIntent.Value.intent : null)
{
case null:
// Add app logic when there is no result.
await MainResponses.ReplyWithConfused(stepContext.Context);
break;
case "None":
await MainResponses.ReplyWithConfused(stepContext.Context);
// with each statement, we're adding the LuisScore, purely to test, so we know whether LUIS was called or not
await MainResponses.ReplyWithLuisScore(stepContext.Context,
topIntent.Value.intent, topIntent.Value.score);
break;
case "Greeting":
await MainResponses.ReplyWithGreeting(stepContext.Context);
await MainResponses.ReplyWithHelp(stepContext.Context);
await MainResponses.ReplyWithLuisScore(stepContext.Context,
topIntent.Value.intent, topIntent.Value.score);
break;
case "OrderPic":
await MainResponses.ReplyWithOrderConfirmation(stepContext.Context);
await MainResponses.ReplyWithLuisScore(stepContext.Context,
topIntent.Value.intent, topIntent.Value.score);
break;
case "SharePic":
await MainResponses.ReplyWithShareConfirmation(stepContext.Context);
await MainResponses.ReplyWithLuisScore(stepContext.Context,
topIntent.Value.intent, topIntent.Value.score);
break;
case "SearchPic":
await MainResponses.ReplyWithSearchConfirmation(stepContext.Context);
await MainResponses.ReplyWithLuisScore(stepContext.Context,
topIntent.Value.intent, topIntent.Value.score);
break;
default:
await MainResponses.ReplyWithConfused(stepContext.Context);
break;
}
return await stepContext.EndDialogAsync();
}

Let's briefly go through what we're doing in the new code additions. First, instead of
responding saying we don't understand, we're going to call LUIS. So we call LUIS using the
LUIS Recognizer, and we store the Top Intent in a variable. We then use switch to respond
in different ways, depending on which intent is picked up. This is almost identical to what
we did with Regex.

Note If you named your intents differently in LUIS than instructed in the code
accompanying Lab 6, you need to modify the case statements accordingly.
Another thing to note is that after every response that called LUIS, we're adding the LUIS
intent value and score. The reason is just to show you when LUIS is being called as opposed
to Regex (you would remove these responses from the final product, but it's a good
indicator for us as we test the bot).

Lab 7.3:  Testing natural speech phrases

1. Press F5 to run the app.

2. Switch to your Bot Emulator. Try sending the bot different ways of searching pictures. What
happens when you say "send me pictures of water" or "show me dog pics"? Try some other ways
of asking for, sharing and ordering pictures.

If you have extra time, see if there are things LUIS isn't picking up on that you expected it to.
Maybe now is a good time to go to luis.ai, review your endpoint utterances
(https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/label-
suggested-utterances), and retrain/republish your model.

Fun Aside: Reviewing the endpoint utterances can be extremely powerful. LUIS makes smart
decisions about which utterances to surface. It chooses the ones that will help it improve the
most to have manually labeled by a human-in-the-loop. For example, if the LUIS model
predicted that a given utterance mapped to Intent1 with 47% confidence and predicted that
it mapped to Intent2 with 48% confidence, that is a strong candidate to surface to a human
to manually map, since the model is very close between two intents.
Going Further

If you're having trouble customizing your LUIS implementation, review the documentation
guidance for adding LUIS to bots here (https://docs.microsoft.com/en-us/azure/bot-
service/bot-builder-howto-v4-luis?view=azure-bot-service-4.0&tabs=cs ).
Get stuck or broken? You can find the solution for the lab up until this point under
code/Finished (./code/Finished). You will need to insert the keys for your Azure Bot Service
in the appsettings.json file. We recommend using this code as a reference, not as a
solution to run, but if you choose to run it, be sure to add the necessary keys (in this section,
there shouldn't be any).

Extra Credit

If you wish to attempt to integrate a LUIS bot including Azure Search, building on the prior
supplementary LUIS model-with-search training (https://github.com/Azure/LearnAI-
Bootcamp/tree/master/lab01.5-luis ), complete the following trainings: Azure Search
(https://github.com/Azure/LearnAI-Bootcamp/tree/master/lab02.1-azure_search ), and
Azure Search Bots (https://github.com/Azure/LearnAI-Bootcamp/blob/master/lab02.2-
building_bots/2_Azure_Search.md ).

47. Lab 8 - Detect Language


Lab 8.1:  Retrieve your Cognitive Services url and keys

1. Open the Azure Portal (https://portal.azure.com)

2. Navigate to your resource group, select the cognitive services resource that is generic (that is, it
contains all endpoints).

3. Under RESOURCE MANAGEMENT, select the Quick Start tab and record the url and the
key for the cognitive services resource
Lab 8.2:  Add language support to your bot

1. If not already open, open your PictureBot solution

2. Right-click the project and select Manage Nuget Packages

3. Click Browse

4. Search for Microsoft.Azure.CognitiveServices.Language.TextAnalytics, select it, then click Install, then click I Accept

5. Open the Startup.cs file, add the following using statements:

Paste Content 

using Microsoft.Azure.CognitiveServices.Language.TextAnalytics;
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics.Models;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime;

6. Add the following code to the ConfigureServices method:

Paste Content 

services.AddSingleton(sp =>
{
string cogsBaseUrl = Configuration.GetSection("cogsBaseUrl")?.Value;
string cogsKey = Configuration.GetSection("cogsKey")?.Value;
var credentials = new ApiKeyServiceClientCredentials(cogsKey);
TextAnalyticsClient client = new TextAnalyticsClient(credentials)
{
Endpoint = cogsBaseUrl
};

return client;
});

7. Open the PictureBot.cs file, add the following using statements:

Paste Content 

using Microsoft.Azure.CognitiveServices.Language.TextAnalytics;
using Microsoft.Azure.CognitiveServices.Language.TextAnalytics.Models;

8. Add the following class variable:

Paste Content 

private TextAnalyticsClient _textAnalyticsClient;

9. Modify the constructor to include the new TextAnalyticsClient:

Paste Content 

public PictureBot(PictureBotAccessors accessors, ILoggerFactory loggerFactory, LuisRecognizer recognizer, TextAnalyticsClient analyticsClient)

10. Inside the constructor, initialize the class variable:

Paste Content 
_textAnalyticsClient = analyticsClient;

11. Navigate to the OnTurnAsync method and find the following line of code:

Paste Content 

var utterance = turnContext.Activity.Text;


var state = await _accessors.PictureState.GetAsync(turnContext, () => new
PictureState());
state.UtteranceList.Add(utterance);
await _accessors.ConversationState.SaveChangesAsync(turnContext);

12. Add the following line of code after it

Paste Content 

// Check the language
var result = _textAnalyticsClient.DetectLanguage(turnContext.Activity.Text);

switch (result.DetectedLanguages[0].Name)
{
case "English":
break;
default:
// Tell the user we only understand English, then end the turn
await turnContext.SendActivityAsync($"I'm sorry, I can only understand English. [{result.DetectedLanguages[0].Name}]");
return;
}

13. Open the appsettings.json file and ensure that your cognitive services settings are entered:
Paste Content 

"cogsBaseUrl": "",
"cogsKey" : ""

14. Press F5 to start your bot

15. Using the Bot Emulator, send in a few phrases and see what happens:

 Como Estes?
 Bon Jour!
 Привет
 Hello

Going Further
Since we have already introduced you to LUIS in previous labs, think about what changes
you may need to make to support multiple languages using LUIS. Some helpful articles:

 Language and region support for LUIS (https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-language-support)

Resources
 Example: Detect language with Text Analytics (https://docs.microsoft.com/en-
us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-
language-detection)
 Quickstart: Text analytics client library for .NET (https://docs.microsoft.com/en-
us/azure/cognitive-services/text-analytics/quickstarts/csharp)

49. Lab 9: Test Bots in DirectLine


Introduction:

Communication directly with your bot may be required in some situations. For example, you
may want to perform functional tests with a hosted bot. Communication between your bot
and your own client application can be performed using the Direct Line
API(https://docs.microsoft.com/en-us/bot-framework/rest-api/bot-framework-rest-
direct-line-3-0-concepts).

Additionally, sometimes you may want to use other channels or customize your bot further.
In that case, you can use Direct Line to communicate with your custom bots.

Microsoft Bot Framework Direct Line bots are bots that can function with a custom client of
your own design. Direct Line bots are similar to normal bots. They just don't need to use the
provided channels. Direct Line clients can be written to be whatever you want them to be.
You can write an Android client, an iOS client, or even a console-based client application.

This hands-on lab introduces key concepts related to Direct Line API.

Lab 9.0:  Prerequisites

This lab starts from the assumption that you have built and published the bot from Lab 3. It
is recommended that you do that lab in order to be successful in the ones that follow. If you
have not, reading carefully through all the exercises and looking at some of the code or
using it in your own applications may be sufficient, depending on your needs.

We'll also assume that you've completed Lab 4, but you should be able to complete the
labs without completing the logging labs.

Collecting the Keys

Over the course of the last few labs, we have collected various keys. You will need most of
them if you are using the starter project as a starting point.

Keys

 Cognitive Services API Url:


 Cognitive Services API key:
 LUIS API Endpoint:
 LUIS API Key:
 LUIS API App ID:
 Bot App Name:
 Bot App ID:
 Bot App Password:
 Azure Storage Connection String:
 Cosmos DB Url:
 Cosmos DB Key:
 DirectLine Key:

Ensure that you update the appsettings.json file with all the necessary values

Lab 9.1:  Publish Your Bot

1. Open your PictureBot solution

2. Right-click the project and select Publish

3. In the publish dialog, select Select Existing, then click Publish

4. If prompted, login using the account you have used through the labs

5. Select the subscription you have been using

6. Expand the resource group and select the picture bot app service we created in Lab 3

7. Click OK

Note Depending on the path you took to get to this lab, you may need to publish a second
time, otherwise you may get the echo bot service when you test below. Republish
the bot, only this time change the publish settings to remove existing files.
Lab 9.2: Setting up the Direct Line channel

1. In the portal, locate your published PictureBot web app bot and navigate to the Channels tab.

2. Select the Direct Line icon (looks like a globe). You should see a Default Site displayed.

3. For the Secret keys, click Show and store one of the secret keys in Notepad, or wherever you
are keeping track of your keys.

You can read detailed instructions on enabling the Direct Line channel (https://docs.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-directline?view=azure-bot-service-4.0) and secrets and tokens (https://docs.microsoft.com/en-us/azure/bot-service/rest-api/bot-framework-rest-direct-line-3-0-authentication?view=azure-bot-service-3.0#secrets-and-tokens).
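
If you would like to see how the secret relates to tokens in code, the optional sketch below (not required by the lab) exchanges a Direct Line secret for a short-lived token using the Direct Line 3.0 token generation endpoint. The secret value is a placeholder you must replace with the one you just copied.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DirectLineTokenSketch
{
    static async Task Main()
    {
        // Placeholder - replace with the Direct Line secret you copied above.
        var directLineSecret = "YourDLSecret";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", directLineSecret);

        // Direct Line 3.0 token generation endpoint; the secret authorizes the request.
        var response = await client.PostAsync(
            "https://directline.botframework.com/v3/directline/tokens/generate", null);
        response.EnsureSuccessStatusCode();

        // The JSON response contains a token (and its expiry) that clients can use
        // instead of embedding the secret directly.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}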

Lab 9.3:  Create a console application

We'll create a console application to help us understand how Direct Line can allow us to
connect directly to a bot. The console client application we will create operates in two
threads. The primary thread accepts user input and sends messages to the bot. The
secondary thread polls the bot once per second to retrieve any messages from the bot, then
displays the messages received.

Note The instructions and code here have been modified from the best practices in the documentation (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-howto-direct-line?view=azure-bot-service-4.0&tabs=cscreatebot%2Ccsclientapp%2Ccsrunclient).

Step 1 - Add a console application to your solution

1. If not already open, open your PictureBot solution in Visual Studio.

2. Right-click on the solution in Solution Explorer, then select Add > New Project.

3. Search for Console App (.NET Core), select it and click Next

4. For the name, type PictureBotDL

5. Select Create.

Step 2 - Add NuGet packages to PictureBotDL

6. Right-click on the PictureBotDL project and select Manage NuGet Packages.

7. Within the Browse tab, search for and install/update the following:

 Microsoft.Bot.Connector.DirectLine
 Microsoft.Rest.ClientRuntime

Step 3 - Update Program.cs

8. Open Program.cs

9. Replace the contents of Program.cs (within PictureBotDL) with the following:

Paste Content 
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Bot.Connector.DirectLine;
using Newtonsoft.Json;
using Activity = Microsoft.Bot.Connector.DirectLine.Activity;

namespace PictureBotDL
{
    class Program
    {
        // ************
        // Replace the following values with your Direct Line secret and your bot
        // resource name (the bot ID).
        // ************
        private static string directLineSecret = "YourDLSecret";
        private static string botId = "YourBotServiceName";

        // This gives a name to the bot user.
        private static string fromUser = "PictureBotSampleUser";

        static void Main(string[] args)
        {
            StartBotConversation().Wait();
        }

        /// <summary>
        /// Drives the user's conversation with the bot.
        /// </summary>
        /// <returns></returns>
        private static async Task StartBotConversation()
        {
            // Create a new Direct Line client.
            DirectLineClient client = new DirectLineClient(directLineSecret);

            // Start the conversation.
            var conversation = await client.Conversations.StartConversationAsync();

            // Start the bot message reader in a separate thread.
            new System.Threading.Thread(
                async () => await ReadBotMessagesAsync(client, conversation.ConversationId)).Start();

            // Prompt the user to start talking to the bot.
            Console.Write("Conversation ID: " + conversation.ConversationId + Environment.NewLine);
            Console.Write("Type your message (or \"exit\" to end): ");

            // Loop until the user chooses to exit this loop.
            while (true)
            {
                // Accept the input from the user.
                string input = Console.ReadLine().Trim();

                // Check to see if the user wants to exit.
                if (input.ToLower() == "exit")
                {
                    // Exit the app if the user requests it.
                    break;
                }
                else
                {
                    if (input.Length > 0)
                    {
                        // Create a message activity with the text the user entered.
                        Activity userMessage = new Activity
                        {
                            From = new ChannelAccount(fromUser),
                            Text = input,
                            Type = ActivityTypes.Message
                        };

                        // Send the message activity to the bot.
                        await client.Conversations.PostActivityAsync(conversation.ConversationId, userMessage);
                    }
                }
            }
        }

        /// <summary>
        /// Polls the bot continuously and retrieves messages sent by the bot to the client.
        /// </summary>
        /// <param name="client">The Direct Line client.</param>
        /// <param name="conversationId">The conversation ID.</param>
        /// <returns></returns>
        private static async Task ReadBotMessagesAsync(DirectLineClient client, string conversationId)
        {
            string watermark = null;

            // Poll the bot for replies once per second.
            while (true)
            {
                // Retrieve the activity set from the bot.
                var activitySet = await client.Conversations.GetActivitiesAsync(conversationId, watermark);
                watermark = activitySet?.Watermark;

                // Extract the activities sent from our bot.
                var activities = from x in activitySet.Activities
                                 where x.From.Id == botId
                                 select x;

                // Analyze each activity in the activity set.
                foreach (Activity activity in activities)
                {
                    // Display the text of the activity.
                    Console.WriteLine(activity.Text);

                    // Are there any attachments?
                    if (activity.Attachments != null)
                    {
                        // Extract each attachment from the activity.
                        foreach (Attachment attachment in activity.Attachments)
                        {
                            switch (attachment.ContentType)
                            {
                                // Display a hero card.
                                case "application/vnd.microsoft.card.hero":
                                    RenderHeroCard(attachment);
                                    break;

                                // Display the image in a browser.
                                case "image/png":
                                    Console.WriteLine($"Opening the requested image '{attachment.ContentUrl}'");
                                    Process.Start(attachment.ContentUrl);
                                    break;
                            }
                        }
                    }
                }

                // Wait for one second before polling the bot again.
                await Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);
            }
        }

        /// <summary>
        /// Displays the hero card on the console.
        /// </summary>
        /// <param name="attachment">The attachment that contains the hero card.</param>
        private static void RenderHeroCard(Attachment attachment)
        {
            const int Width = 70;

            // Function to center a string between asterisks.
            Func<string, string> contentLine = (content) => string.Format(
                $"{{0, -{Width}}}",
                string.Format("{0," + ((Width + content.Length) / 2).ToString() + "}", content));

            // Extract the hero card data.
            var heroCard = JsonConvert.DeserializeObject<HeroCard>(attachment.Content.ToString());

            // Display the hero card.
            if (heroCard != null)
            {
                Console.WriteLine("/{0}", new string('*', Width + 1));
                Console.WriteLine("*{0}*", contentLine(heroCard.Title));
                Console.WriteLine("*{0}*", new string(' ', Width));
                Console.WriteLine("*{0}*", contentLine(heroCard.Text));
                Console.WriteLine("{0}/", new string('*', Width + 1));
            }
        }
    }
}

Note This code was slightly modified from the documentation (https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-howto-direct-line?view=azure-bot-service-4.0&tabs=cscreatebot%2Ccsclientapp%2Ccsrunclient#create-the-console-client-app) to include a few things we'll use in the next lab sections.

10. In Program.cs, update the Direct Line secret and bot ID to your specific values.

Spend some time reviewing this sample code. It's a good exercise to make sure you
understand how we're connecting to our PictureBot and getting responses.

Step 4 - Run the app

11. Right-click on the PictureBotDL project and select Set as Startup Project.

12. Next, press F5 to run the app

13. Have a conversation with the bot using the command-line application

Note If you do not get a response, it is likely you have an error in your bot. Test your bot
locally using the Bot Emulator, fix any issues, then republish it.

Quick quiz - how are we displaying the Conversation ID? We'll see in the next sections why
we might want this.
Lab 9.4:  Using HTTP GET to retrieve messages

Because we have the conversation ID, we can retrieve user and bot messages using HTTP GET. If you are familiar and experienced with REST clients, feel free to use your tool of choice.

In this lab, we will walk through using Postman (an API client) to retrieve messages.

Step 1 - Download Postman

1. Download the native app for your platform (https://www.getpostman.com/apps). You may need to create a simple account.

Step 2
Using the Direct Line API, a client can send messages to your bot by issuing HTTP POST requests. A client can receive messages from your bot either via a WebSocket stream or by issuing HTTP GET requests. In this lab, we will explore the HTTP GET option to receive messages.

We'll be making a GET request. We also need to make sure the header contains our header name (Authorization) and header value (Bearer YourDirectLineSecret). Additionally, we'll make the call to our existing conversation in the console app by replacing {conversationId} with our current Conversation ID in our request: https://directline.botframework.com/api/conversations/{conversationId}/messages.

Postman makes this very easy for us to configure:

 Change the request to type "GET" using the drop down.


 Enter your request URL with your Conversation ID.
 Change the Authorization to type "Bearer Token" and enter your Direct Line Secret key in the
"Token" box.

2. Open Postman
3. For the type, ensure that GET is selected

4. For the URL, type https://directline.botframework.com/api/conversations/{conversationId}/messages. Be sure to replace {conversationId} with your specific conversation ID.

5. Click Authorization and, for the type, select Bearer Token

6. Set the value to {Your Direct Line Secret}

7. Finally, select Send.

8. Inspect your results.

9. Create a new conversation with your console app and be sure to search for pictures.

10. Create a new request in Postman using the new conversation ID.

11. Inspect the response returned. You should find the image URL displayed within the images array of the response. (If you prefer code to Postman, see the HttpClient sketch after these steps.)
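
As a code-based alternative to Postman, the sketch below issues the same GET request with HttpClient. It is illustrative only; the secret and conversation ID values are placeholders you must replace.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DirectLineGetSketch
{
    static async Task Main()
    {
        // Placeholders - replace with your Direct Line secret and the Conversation ID
        // printed by the console client.
        var directLineSecret = "YourDLSecret";
        var conversationId = "YourConversationId";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", directLineSecret);

        // Same endpoint used in the Postman request above.
        var url = $"https://directline.botframework.com/api/conversations/{conversationId}/messages";

        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();

        // The JSON response contains the conversation's messages, including any image URLs.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

Printing the raw JSON keeps the sketch short; in a real test harness you would deserialize the activities and assert on their contents.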

Going Further
Have extra time? Can you use curl (download link: https://curl.haxx.se/download.html) from the terminal to retrieve conversations, like you did with Postman?
Hint: your command may look similar to curl -H "Authorization:Bearer {SecretKey}" https://directline.botframework.com/api/conversations/{conversationId}/messages -XGET

Resources
 Direct Line API (https://docs.microsoft.com/en-us/bot-framework/rest-api/bot-framework-rest-direct-line-3-0-concepts)
